
Machine Learning is the New Launchpad for Ransomware

By Alex Avendano

December 6, 2022

Researchers at HiddenLayer’s SAI Team have developed a proof-of-concept attack for surreptitiously deploying malware, such as ransomware or Cobalt Strike Beacon, via machine learning models. The attack uses a technique currently undetected by many cybersecurity vendors and can serve as a launchpad for lateral movement, deployment of additional malware, or the theft of highly sensitive data. Read more in our latest blog, Weaponizing Machine Learning Models with Ransomware.

Attack Surface

According to CompTIA, over 86% of surveyed CEOs reported that machine learning was a mainstream technology within their companies as of 2021. Open-source model-sharing repositories emerged in response to the inherent complexity of data science, the shortage of practitioners, and the enormous value that pre-trained models offer organizations, dramatically reducing the time and effort required for ML/AI adoption. However, such repositories often lack comprehensive security controls, which ultimately passes the risk on to the end user, and attackers are counting on it. It is commonplace within data science to download and repurpose pre-trained machine learning models from online model repositories such as HuggingFace or TensorFlow Hub, amongst many others of a far less reputable and security-conscious nature. The general scarcity of security around ML models, coupled with the increasingly sensitive data that ML models are exposed to, means that model hijacking attacks, including AI data poisoning, can evade traditional security solutions and have a high propensity for damage.
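
To illustrate how routine this workflow is, the snippet below pulls a pre-trained model from the Hugging Face Hub using the transformers library. The model name is only an example and is not tied to the research; the point is that loading any third-party checkpoint deserializes artifacts you did not produce.

```python
# Illustrative only: a typical pre-trained model download, as described above.
# Loading a checkpoint like this deserializes artifacts produced by a third
# party, which is exactly the trust boundary a hijacked model exploits.
from transformers import AutoModel, AutoTokenizer  # Hugging Face transformers library

model_name = "bert-base-uncased"  # example model; any hub-hosted model is fetched the same way
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
print(type(model).__name__)  # e.g. BertModel
```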

Business Implication

The implications of loading a hijacked model can be severe, especially given the sensitive data an ML model is often privy to, specifically:

  • Initial compromise of an environment and lateral movement
  • Deployment of malware (such as ransomware, spyware, backdoors, etc.)
  • Supply chain attacks
  • Theft of Intellectual Property
  • Leaking of Personally Identifiable Information
  • Denial/Degradation of service
  • Reputational harm

How Does This Attack Work?

By combining several attack techniques, including steganography for hiding malicious payloads and data deserialization flaws that can be leveraged to execute arbitrary code, our researchers demonstrate how to attack a popular computer vision model and embed malware within it. The resulting weaponized model evades detection by current anti-virus and EDR solutions while suffering only a negligible loss in efficacy. Currently, most popular anti-malware solutions provide little or no capability to scan for ML-based threats.
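
The deserialization flaw at the core of this technique is the long-known behaviour of Python's pickle format: unpickling data can invoke arbitrary callables. The minimal, benign sketch below demonstrates the mechanism with a harmless print call; it is not the payload used in the research.

```python
# Generic demonstration of why deserializing untrusted pickle data is dangerous.
# The __reduce__ hook tells pickle to call an arbitrary function during loading;
# here it is a harmless print, but it could just as easily be os.system.
import pickle


class Innocuous:
    def __reduce__(self):
        # pickle.loads() will execute: print("code ran during deserialization")
        return (print, ("code ran during deserialization",))


blob = pickle.dumps(Innocuous())
pickle.loads(blob)  # prints the message without any model ever being loaded
```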

The researchers focused on the PyTorch framework and considered how the attack could be broadened to target other popular ML libraries, such as TensorFlow, scikit-learn, and Keras. In the demonstration, a 64-bit sample of the infamous Quantum ransomware is deployed on a Windows 10 system. However, any bespoke payload can be distributed in this way and tailored to target different operating systems, such as Windows, Linux, and Mac, as well as different hardware architectures, such as x86/64.

Hidden Ransomware Executed from an ML Model
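
One reason a payload can be hidden with so little loss in efficacy is that least-significant-bit perturbations barely change each weight. The sketch below embeds bytes into the least significant bits of float32 weights with NumPy; it is a generic LSB illustration under our own assumptions, not necessarily the exact scheme used in the weaponized model.

```python
# Illustrative least-significant-bit (LSB) steganography over float32 weights.
# Flipping the lowest mantissa bit changes each weight by roughly one part in
# ten million of its magnitude, which is why model accuracy barely moves.
import numpy as np

def embed_bits(weights: np.ndarray, data: bytes) -> np.ndarray:
    bits = np.unpackbits(np.frombuffer(data, dtype=np.uint8))
    flat = weights.astype(np.float32).ravel().view(np.uint32).copy()
    assert bits.size <= flat.size, "not enough weights to hold the payload"
    flat[: bits.size] = (flat[: bits.size] & ~np.uint32(1)) | bits
    return flat.view(np.float32).reshape(weights.shape)

def extract_bits(weights: np.ndarray, n_bytes: int) -> bytes:
    flat = weights.astype(np.float32).ravel().view(np.uint32)
    bits = (flat[: n_bytes * 8] & 1).astype(np.uint8)
    return np.packbits(bits).tobytes()

weights = np.random.randn(4096).astype(np.float32)
secret = b"demo payload"
stego = embed_bits(weights, secret)
assert extract_bits(stego, len(secret)) == secret
print("max weight change:", np.abs(stego - weights).max())
```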

Mitigations & Recommendations

  • Proactive Threat Discovery: Don’t wait until it’s too late. Pre-trained models should be investigated ahead of deployment for evidence of tampering, hijacking, or abuse. HiddenLayer provides a Model Scanning service that can help identify malicious tampering. In this blog, we also share a specialized YARA rule for finding evidence of executable code stored within models serialized to the pickle format (a common machine learning file format); a minimal scanning sketch is included after this list.
  • Securely Evaluate Model Behaviour: At the end of the day, models are software: if you don’t know where a model came from, don’t run it within your enterprise environment. Untrusted pre-trained models should be carefully inspected inside a secure virtual machine prior to being considered for deployment.
  • Cryptographic Hashing & Model Signing: Cryptographic hashing provides verification that your models have not been tampered with. To take this a step further, signing your models with certificates establishes a level of trust that can be verified by users downstream; a simple hash-verification sketch also follows this list.
  • External Security Assessment: Understand your level of risk, address blind spots, and see what you could improve upon. Given the level of sensitive data that ML models are privy to, an external security assessment of your ML pipeline may be worth your time. HiddenLayer’s SAI Team and Professional Services can help your organization evaluate the risk and security of your AI assets.
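
As referenced in the first bullet, one lightweight way to triage a pickle-serialized model is to walk its opcode stream with Python's pickletools and list every callable it would import, without ever deserializing it. The sketch below is a simplified heuristic of our own, not HiddenLayer's YARA rule or Model Scanning service; for PyTorch checkpoints saved in the default zip format, you would run it against the archive's data.pkl member.

```python
# List every module/callable a pickle stream would import, without executing it,
# and flag names that have no business appearing in a model checkpoint.
import pickletools
import sys

SUSPICIOUS = {"os system", "posix system", "nt system", "builtins exec",
              "builtins eval", "subprocess Popen", "subprocess check_output"}

def pickle_imports(data: bytes) -> list[str]:
    """Collect "module name" pairs referenced by GLOBAL / STACK_GLOBAL opcodes."""
    imports, strings = [], []
    for opcode, arg, _pos in pickletools.genops(data):
        if opcode.name == "GLOBAL":                         # protocol <= 3: arg is "module name"
            imports.append(arg)
        elif opcode.name in ("SHORT_BINUNICODE", "BINUNICODE", "UNICODE"):
            strings.append(arg)                             # candidate operands for STACK_GLOBAL
        elif opcode.name == "STACK_GLOBAL" and len(strings) >= 2:
            imports.append(f"{strings[-2]} {strings[-1]}")  # protocol >= 4: module, name from stack
    return imports

if __name__ == "__main__":
    with open(sys.argv[1], "rb") as f:   # e.g. the data.pkl extracted from a .pt archive
        found = pickle_imports(f.read())
    for imp in found:
        print(imp + ("   <-- SUSPICIOUS" if imp in SUSPICIOUS else ""))
```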
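
For the hashing recommendation, a minimal approach is to record a SHA-256 digest when a model is first vetted and verify it before every subsequent load. A sketch, assuming the known-good digest is kept alongside your deployment configuration (the file name and digest below are hypothetical):

```python
# Minimal integrity check: compare a model file's SHA-256 digest against a
# previously recorded known-good value before loading it.
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # read in 1 MiB chunks
            digest.update(chunk)
    return digest.hexdigest()

def verify_model(path: Path, expected_hex: str) -> None:
    actual = sha256_of(path)
    if actual != expected_hex:
        raise RuntimeError(f"{path} failed integrity check: {actual} != {expected_hex}")

# Hypothetical usage: digest recorded when the model was first approved.
# verify_model(Path("resnet18.pt"), "9f86d081...")
```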

About HiddenLayer

HiddenLayer helps enterprises safeguard the machine learning models behind their most important products with a comprehensive security platform. Only HiddenLayer offers turnkey AI/ML security that does not add unnecessary complexity to models and does not require access to raw data and algorithms. Founded in March of 2022 by experienced security and ML professionals, HiddenLayer is based in Austin, Texas, and is backed by cybersecurity investment specialist firm Ten Eleven Ventures. For more information, visit www.hiddenlayer.com and follow us on LinkedIn or Twitter.
