HiddenLayer, a Gartner-recognized Cool Vendor for AI Security, is the leading provider of Security for AI. Its security platform helps enterprises safeguard the machine learning models behind their most important products. HiddenLayer is the only company to offer turnkey security for AI that does not add unnecessary complexity to models and does not require access to raw data and algorithms. Founded by a team with deep roots in security and ML, HiddenLayer aims to protect enterprises’ AI from inference, bypass, and extraction attacks, as well as model theft. The company is backed by a group of strategic investors, including M12, Microsoft’s Venture Fund, Moore Strategic Ventures, Booz Allen Ventures, IBM Ventures, and Capital One Ventures.
Security Advisory
HiddenLayer’s Synaptic Adversarial Intelligence (SAI) team consists of multidisciplinary cybersecurity experts and data scientists dedicated to raising awareness about threats to machine learning and artificial intelligence systems. Our mission is to educate data scientists, MLDevOps teams, and cybersecurity practitioners on evaluating ML/AI vulnerabilities and risks, promoting more security-conscious implementations and deployments.
During our research, we identify numerous vulnerabilities within ML/AI projects. While our research blogs cover those that we consider most impactful, some affect only specific projects or use cases. We’ve therefore created this dedicated space to share all of our findings, enabling users within our community to stay up to date on new vulnerabilities, including security issues that have not been assigned a CVE.
June 2025
-
BackendAI
Interactive sessions lack authentication and do not verify whether a user is authorized. These missing checks allow attackers to take over sessions, access the data within them (models, code, etc.), alter the data or results, and prevent users from accessing their own sessions.
-
BackendAI
BackendAI does not enable account creation. However, an exposed endpoint allows anyone to sign up for a user-privileged account.
-
BackendAI
By default, BackendAI’s agent writes files to /home/config/ when starting an interactive session. These files are readable by the default user, yet they contain sensitive information such as the user’s email address, access key, and session settings.
April 2025
-
SAI-ADV-2025-009 Unsafe Deserialization in DeepSpeed utility function when loading the model file April 3, 2025
If a user attempts to convert distributed checkpoints into a single consolidated file using DeepSpeed, a PyTorch file with the naming convention *_optim_states.pt is used. This file returns a state that specifies the model state file, also located in the directory. That file can contain a maliciously crafted data.pkl file which, when deserialized as part of this process, may lead to arbitrary code being executed on the system.
-
SAI-ADV-2025-008 Unsafe Deserialization in DeepSpeed utility function when loading the zero stage April 3, 2025
If a user attempts to convert distributed checkpoints into a single consolidated file using DeepSpeed, a PyTorch file with the naming convention *_optim_states.pt is used when loading the zero stage. This can contain a maliciously crafted data.pkl file which, when deserialized as part of this process, may lead to arbitrary code being executed on the system.
-
SAI-ADV-2025-007 Unsafe Deserialization in Serializers PickleSerializer leads to code execution when deserializing data April 3, 2025
If a user attempts to load a maliciously crafted pickle file using a class such as BinaryReader, this will result in the pickle file being deserialized, potentially leading to arbitrary code being executed on the system.
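The deserialization advisories above share one root cause, which a small stand-alone sketch can make concrete (generic Python, not DeepSpeed code): pickle invokes the callable returned by `__reduce__` while loading, so whoever controls the file controls what runs on the victim's machine.

```python
import pickle

# Generic illustration (not DeepSpeed code): pickle calls the callable
# returned by __reduce__ during deserialization, before the caller ever
# sees the resulting object.
class Payload:
    def __reduce__(self):
        # pickle.loads will call eval("6 * 7") on the victim's machine;
        # a real payload would reference os.system or similar instead.
        return (eval, ("6 * 7",))

obj = pickle.loads(pickle.dumps(Payload()))
print(obj)  # 42: the expression already ran during deserialization
```

The point is that loading alone is enough; no method on the "model" object ever has to be called.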
-
PyTorch Lightning
If a user attempts to load a distributed checkpoint file from a directory, and the meta.pt file contains a maliciously crafted PyTorch file, this will result in the pickle file being deserialized, potentially leading to arbitrary code being executed on the system.
-
PyTorch Lightning
If a user attempts to load a distributed checkpoint file from a directory, and the .metadata file contains a maliciously crafted pickle file, this will result in the pickle file being deserialized, potentially leading to arbitrary code being executed on the system.
-
SAI-ADV-2025-004 Unsafe Deserialization in Load _lazy_load leads to code execution when loading a model April 3, 2025
If a user attempts to load a checkpoint file containing a maliciously crafted data.pkl file using classes such as FSDP, this will lazily load a checkpoint, loading what is specified by the data.pkl file rather than pulling in all weights at once. This will result in the pickle file being deserialized, potentially leading to arbitrary code being executed on the system.
-
SAI-ADV-2025-003 Unsafe Deserialization in Cloud_IO _load leads to code execution when loading from a bytes like object April 3, 2025
If a user attempts to load a checkpoint file containing a maliciously crafted data.pkl file using classes such as LightningModule or LightningDataModule from a bytes like object, this will result in the pickle file being deserialized, potentially leading to arbitrary code being executed on the system.
-
PyTorch Lightning
If a user attempts to load a checkpoint file that contains a maliciously crafted data.pkl file using classes such as LightningModule or LightningDataModule, and that file was downloaded via an HTTP or HTTPS connection, this will result in the pickle file being deserialized, potentially leading to arbitrary code being executed on the system.
-
SAI-ADV-2025-001 Unsafe Deserialization in Cloud_IO _load leads to code execution when loading from a local file April 3, 2025
If a user attempts to load a local checkpoint file containing a maliciously crafted data.pkl file using classes such as LightningModule or LightningDataModule, this will result in the pickle file being deserialized, potentially leading to arbitrary code being executed on the system.
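For code that must read pickle-based checkpoints from semi-trusted locations, the Python pickle documentation suggests restricting which globals an unpickler may resolve. A minimal sketch follows; the `ALLOWED` set and `safe_loads` helper are hypothetical names, not part of any of the projects above.

```python
import io
import pickle

# Only these (module, name) pairs may be resolved during unpickling;
# anything else (eval, os.system, ...) is rejected before it can run.
ALLOWED = {("builtins", "list"), ("builtins", "dict")}

class RestrictedUnpickler(pickle.Unpickler):
    def find_class(self, module, name):
        if (module, name) in ALLOWED:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(f"blocked global: {module}.{name}")

def safe_loads(data: bytes):
    return RestrictedUnpickler(io.BytesIO(data)).load()

# Plain data structures load normally; a payload that references a
# callable such as eval raises UnpicklingError instead of executing.
print(safe_loads(pickle.dumps([1, 2, 3])))  # [1, 2, 3]
```

This is a mitigation sketch, not a complete defense: an allowlist must be curated per format, and formats as rich as full model checkpoints may need many entries.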
December 2024
-
SAI-ADV-2024-004 keras.models.load_model when scanning .h5 files leads to arbitrary code execution December 16, 2024
If a user scans a malicious Keras model in the H5 format with Bosch AI Shield’s Watchtower vulnerability scanning tool, the code embedded in the Keras model will run, resulting in arbitrary code execution.
-
SAI-ADV-2024-005 keras.models.load_model when scanning .pb files leads to arbitrary code execution December 16, 2024
If a user scans a malicious Keras model in the protobuf format with Bosch AI Shield’s Watchtower vulnerability scanning tool, the code embedded in the Keras model will run, resulting in arbitrary code execution.
October 2024
-
NVIDIA NeMo
An attacker can craft a malicious model containing a path traversal and share it with a victim. If the victim uses an NVIDIA NeMo version prior to r2.0.0rc0 and loads the malicious model, arbitrary files may be written to disk. This can result in code execution and data tampering.
September 2024
-
CVE-2024-45858 Eval on XML parameters allows arbitrary code execution when loading RAIL file September 18, 2024
An attacker can craft an XML file with Python code contained within a ‘validators’ attribute. This code must be wrapped in braces to work, i.e. `{Python_code}`. This can then be passed to a victim user as a Guardrails file, and upon loading it, the Python code contained within the braces is passed into an eval function, which will execute the Python code contained within.
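In simplified form, the vulnerable pattern looks like the following. This is a hypothetical snippet, not Guardrails' actual implementation; a harmless arithmetic expression stands in for a real payload.

```python
# Simplified sketch of the pattern described above: a brace-wrapped
# attribute value from an untrusted RAIL file is handed straight to eval().
attr = "{2 ** 5}"  # a real payload would import os or open a socket here

if attr.startswith("{") and attr.endswith("}"):
    result = eval(attr[1:-1])  # attacker-controlled expression executes

print(result)  # 32
```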
-
CVE-2024-27321 Eval on CSV data allows arbitrary code execution in the MLCTaskValidate class September 12, 2024
An attacker can craft a CSV file containing Python code in one of the values. This code must be wrapped in brackets to work, i.e. `[Python_code]`. The maliciously crafted CSV file can then be shared with a victim user as a dataset. When the user creates a multilabel classification task, the CSV is loaded and passed through a validation function, where values wrapped in brackets are passed into an eval function, which will execute the Python code contained within.
-
CVE-2024-27320 Eval on CSV data allows arbitrary code execution in the ClassificationTaskValidate class September 12, 2024
An attacker can craft a CSV file containing Python code in one of the values. This code must be wrapped in brackets to work, i.e. `[Python_code]`. The maliciously crafted CSV file can then be shared with a victim user as a dataset. When the user creates a classification task, the CSV is loaded and passed through a validation function, where values wrapped in brackets are passed into an eval function, which will execute the Python code contained within.
-
CVE-2024-45857 Unsafe deserialization in Datalab leads to arbitrary code execution September 12, 2024
An attacker can place a malicious file called datalabs.pkl within a directory and send that directory to a victim user. When the victim user loads the directory with Datalabs.load, the datalabs.pkl within it is deserialized and any arbitrary code contained within it is executed.
-
CVE-2024-45846 Eval on query parameters allows arbitrary code execution in Weaviate integration September 12, 2024
An attacker authenticated to a MindsDB instance with the Weaviate integration installed can execute arbitrary Python code on the server. This can be achieved by creating a database built with the Weaviate engine and running a ‘SELECT WHERE’ clause against it, containing the code to execute. This code is passed to an eval function used for parsing valid Python data types from arbitrary user input, but it will run any arbitrary Python code contained within the value given in the ‘WHERE embeddings =’ part of the clause.
-
CVE-2024-45847 Eval on query parameters allows arbitrary code execution in Vector Database integrations September 12, 2024
An attacker authenticated to a MindsDB instance with any one of several integrations installed can execute arbitrary Python code on the server. This can be achieved by creating a database built with the specified integration engine and running an ‘UPDATE’ query against it, containing the code to execute. This code is passed to an eval function used for parsing valid Python data types from arbitrary user input but will run any arbitrary Python code contained within the value given in the ‘SET embeddings =’ part of the query.
-
CVE-2024-45848 Eval on query parameters allows arbitrary code execution in ChromaDB integration September 12, 2024
An attacker authenticated to a MindsDB instance with the ChromaDB integration installed can execute arbitrary Python code on the server. This can be achieved by creating a database built with the ChromaDB engine and running an ‘INSERT’ query against it, where the value given for ‘metadata’ would contain the code to be executed. This code is passed to an eval function used for parsing valid Python data types from arbitrary user input but will run the arbitrary code contained within the query.
-
CVE-2024-45849 Eval on query parameters allows arbitrary code execution in SharePoint integration list creation September 12, 2024
An attacker authenticated to a MindsDB instance with the SharePoint integration installed can execute arbitrary Python code on the server. This can be achieved by creating a database built with the SharePoint engine and running an ‘INSERT’ query against it to create a list, where the value given for the ‘list’ parameter would contain the code to be executed. This code is passed to an eval function used for parsing valid Python data types from arbitrary user input but will run the arbitrary code contained within the query.
-
CVE-2024-45850 Eval on query parameters allows arbitrary code execution in SharePoint integration site column creation September 12, 2024
An attacker authenticated to a MindsDB instance with the SharePoint integration installed can execute arbitrary Python code on the server. This can be achieved by creating a database built with the SharePoint engine and running an ‘INSERT’ query against it to create a site column, where the value given for the ‘text’ parameter would contain the code to be executed. This code is passed to an eval function used for parsing valid Python data types from arbitrary user input but will run the arbitrary code contained within the query.
-
CVE-2024-45851 Eval on query parameters allows arbitrary code execution in SharePoint integration list item creation September 12, 2024
An attacker authenticated to a MindsDB instance with the SharePoint integration installed can execute arbitrary Python code on the server. This can be achieved by creating a database built with the SharePoint engine and running an ‘INSERT’ query against it to create a list item, where the value given for the ‘fields’ parameter would contain the code to be executed. This code is passed to an eval function used for parsing valid Python data types from arbitrary user input but will run the arbitrary code contained within the query.
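The eval-based advisories above all stem from using eval to parse "valid Python data types" out of user input. Where only literals are expected, the standard library's `ast.literal_eval` is the usual safe substitute. A sketch follows; `parse_value` is a hypothetical helper, not MindsDB code.

```python
import ast

# ast.literal_eval accepts only Python literals (strings, numbers, lists,
# dicts, tuples, booleans, None), so it can parse values such as "[1, 2]"
# without eval's code-execution risk.
def parse_value(raw: str):
    try:
        return ast.literal_eval(raw)
    except (ValueError, SyntaxError):
        return raw  # not a plain literal; keep the raw string

print(parse_value("[1, 2]"))                       # [1, 2]
print(parse_value("[__import__('os').getcwd()]"))  # returned verbatim, never executed
```

Function calls and attribute access are rejected by `literal_eval`, which is exactly the property the eval-based parsers above lack.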
-
MindsDB
An attacker authenticated to a MindsDB instance can inject a malicious pickle object containing arbitrary code into a model during the Bring Your Own Model (BYOM) training and build process. This object will be deserialized when the model is loaded via a ‘predict’ or ‘describe’ query, executing the arbitrary code on the server.
-
CVE-2024-45853 Pickle Load on inhouse BYOM model prediction leads to arbitrary code execution September 12, 2024
An attacker authenticated to a MindsDB instance can inject a malicious pickle object containing arbitrary code into a model during the ‘inhouse’ Bring Your Own Model (BYOM) training and build process. This object will be deserialized when the model is loaded via the ‘predict’ method, executing the arbitrary code on the server. Note this can only occur if the BYOM engine is changed in the config from the default ‘venv’ to ‘inhouse’.
-
CVE-2024-45854 Pickle Load on inhouse BYOM model describe query leads to arbitrary code execution September 12, 2024
An attacker authenticated to a MindsDB instance can inject a malicious pickle object containing arbitrary code into a model during the ‘inhouse’ Bring Your Own Model (BYOM) training and build process. This object will be deserialized when the model is loaded via the ‘describe’ method, executing the arbitrary code on the server. Note this can only occur if the BYOM engine is changed in the config from the default ‘venv’ to ‘inhouse.’
-
CVE-2024-45855 Pickle Load on inhouse BYOM model finetune leads to arbitrary code execution September 12, 2024
An attacker authenticated to a MindsDB instance can inject a malicious pickle object containing arbitrary code into a model during the ‘inhouse’ Bring Your Own Model (BYOM) training and build process. This object will be deserialized when the model is loaded via the ‘finetune’ method, executing the arbitrary code on the server. Note this can only occur if the BYOM engine is changed in the config from the default ‘venv’ to ‘inhouse.’
-
MindsDB
An attacker authenticated to a MindsDB instance can create an ML Engine, database, project, or upload a dataset within the UI and give it a name (or value in the dataset) containing JavaScript code that will render when the items are enumerated within the UI.
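Stored XSS of this kind is conventionally prevented by escaping user-supplied strings before rendering. A minimal stdlib sketch (generic, not MindsDB's UI code):

```python
import html

# Escaping turns markup metacharacters into inert entities, so an injected
# name renders as visible text instead of executing in the browser.
name = "<script>alert(1)</script>"  # hypothetical attacker-chosen engine name
print(html.escape(name))  # &lt;script&gt;alert(1)&lt;/script&gt;
```

In practice this escaping is usually applied by the templating layer rather than by hand, but the principle is the same.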
August 2024
-
SAI-ADV-2024-002 Exec on untrusted LLM output leading to arbitrary code execution on Evaporate integration August 30, 2024
An attacker can create a website with a prompt injection. Once a victim scrapes the website using the Evaporate integration, malicious code will execute on their system.
-
SAI-ADV-2024-003 Exec on untrusted LLM output leading to arbitrary code execution on Evaporate integration August 12, 2024
The safe_eval and safe_exec functions are intended to allow the user to run untrusted code in an eval or exec function while disallowing dangerous functions. However, an attacker can use third-party library functions to achieve arbitrary code execution.
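Deny-list eval wrappers of this kind are hard to make sound, because Python's object model lets an expression climb back to arbitrary callables even with builtins removed. A generic illustration (not the project's actual safe_eval):

```python
# Even with __builtins__ emptied, attribute access on a plain literal
# reaches the full class hierarchy, from which dangerous callables can
# be recovered again.
classes = eval("().__class__.__base__.__subclasses__()",
               {"__builtins__": {}}, {})
print(len(classes) > 0)  # True: every loaded class is reachable
```

This is why the usual guidance is to avoid eval/exec on untrusted input entirely rather than attempt to sandbox it in-process.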
July 2024
-
Wyze Cam V4
A command injection vulnerability exists in Wyze Cam V4 firmware versions up to and including 4.52.4.9887. An attacker within Bluetooth range of the camera can leverage this vulnerability to execute arbitrary commands as root during the camera setup process.
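Command injection arises when untrusted input is spliced into a shell command string. The usual remedy, shown here as a generic Python sketch rather than firmware code, is to pass arguments as a list so no shell ever parses them:

```python
import subprocess

# Hypothetical attacker-controlled field, e.g. a value sent during setup.
device_name = "cam; touch /tmp/pwned"

# Unsafe pattern: subprocess.run(f"ping -c1 {device_name}", shell=True)
# would let the shell execute the injected "touch" command.

# Safer: each element is a single argv entry; metacharacters stay literal.
proc = subprocess.run(["echo", device_name], capture_output=True, text=True)
print(proc.stdout.strip())  # the name is printed, the injected command never runs
```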
-
Tensorflow Probability
An attacker can create a maliciously crafted HDF5 file by injecting a pickle object containing arbitrary code into the DistributionLambda layer of the model under the make_distribution_fn key, and share it with a victim. If the victim is using TensorFlow Probability v0.7 or later and loads the malicious model, the object will be deserialized and arbitrary code will execute on their system.
June 2024
-
Skops
An attacker can create a maliciously crafted model containing an OperatorFuncNode and share it with a victim. If the victim is using Python 3.11 or later and loads the malicious model, arbitrary code will execute on their system.
-
YData-profiling
An attacker can create a maliciously crafted pandas dataset and share it with a victim. Once the victim loads the dataset in YData-profiling, malicious code will execute on their system.
-
MLflow
An attacker can package an MLflow Project where the main entrypoint command set in the MLproject file contains malicious code (or an operating-system-appropriate command), and share it with a victim. When the victim runs the project, the command will be executed on their system.
-
MLflow
An attacker can create an MLProject Recipe containing a malicious pickle file and a Python file that calls BaseCard.load on it and share it with a victim. When the victim runs mlflow run against the Recipe directory, the pickle file will be deserialized on their system, running any arbitrary code it contains.
-
MLflow
An attacker can inject a malicious pickle object into a PyTorch model file and log it to the MLflow tracking server via the API using the mlflow.pytorch.log_model function. When a victim user calls the mlflow.pytorch.load_model function on the model, the pickle object is deserialized on their system, running any arbitrary code it contains.
-
MLflow
An attacker can inject a malicious pickle object during the process of creating a LangChain model and log the model to the MLflow tracking server via the API using the mlflow.langchain.log_model function. When a victim user calls the mlflow.langchain.load_model function on the model, the pickle object is deserialized on their system, running any arbitrary code it contains.
-
MLflow
An attacker can inject a malicious pickle object into a TensorFlow model file and log it to the MLflow tracking server via the API using the mlflow.tensorflow.log_model function. When a victim user calls the mlflow.tensorflow.load_model function on the model, the pickle object is deserialized on their system, running any arbitrary code it contains.
-
CVE-2024-37056 Cloudpickle Load on LightGBM SciKit Learn Model Leading to Code Execution June 4, 2024
An attacker can inject a malicious pickle object into a LightGBM scikit-learn model file and log it to the MLflow tracking server via the API using the mlflow.lightgbm.log_model function. When a victim user calls the mlflow.lightgbm.load_model function on the model, the pickle object is deserialized on their system, running any arbitrary code it contains.
-
MLflow
An attacker can inject a malicious pickle object into a pmdarima model file and log it to the MLflow tracking server via the API using the mlflow.pmdarima.log_model function. When a victim user calls the mlflow.pmdarima.load_model function on the model, the pickle object is deserialized on their system, running any arbitrary code it contains.
-
MLflow
An attacker can inject a malicious pickle object into a model file and log it to the MLflow tracking server via the API using the mlflow.pyfunc.log_model function. When a victim user calls the mlflow.pyfunc.load_model function on the model, the pickle object is deserialized on their system, running any arbitrary code it contains.
-
MLflow
An attacker can inject a malicious pickle object into a scikit-learn model file and log it to the MLflow tracking server via the API. When a victim user calls the mlflow.sklearn.load_model function on the model, the pickle file is deserialized on their system, running any arbitrary code it contains.
-
MLflow
An attacker can inject a malicious pickle object into a scikit-learn model file and log it to the MLflow tracking server via the API. When a victim user calls the mlflow.sklearn.load_model function on the model, the pickle file is deserialized on their system, running any arbitrary code it contains.
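A recurring theme in the MLflow findings is that any pickle-backed model flavor inherits pickle's code-execution behavior. One rough pre-load heuristic, sketched below, is to walk the pickle opcode stream and flag any global references, which payloads need in order to name callables like os.system. This is an illustration of the scanning idea, not an MLflow feature and not a complete defense.

```python
import pickle
import pickletools

# Walk the opcode stream without executing it; GLOBAL/STACK_GLOBAL (and the
# legacy INST/OBJ) opcodes are how a pickle references importable callables.
def references_globals(data: bytes) -> bool:
    return any(op.name in ("GLOBAL", "STACK_GLOBAL", "INST", "OBJ")
               for op, *_ in pickletools.genops(data))

class Evil:
    def __reduce__(self):
        return (eval, ("1",))

print(references_globals(pickle.dumps(Evil())))                 # True
print(references_globals(pickle.dumps({"weights": [0.1, 0.2]})))  # False
```

Note that many legitimate model pickles also reference globals (e.g. their own classes), so a real scanner would compare those references against an allowlist rather than reject them outright.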
-
YData-profiling
An attacker can create a maliciously crafted YData-profiling HTML report containing malicious code. Once a victim browses to the report and views it, the malicious code will execute in their browser.
-
YData-profiling
An attacker can create a maliciously crafted Ydata-profiling report containing malicious code and share it with a victim. When the victim loads the report, the code will be executed on their system.
April 2024
-
AWS Sagemaker
A command injection vulnerability exists in the capture_dependencies function of the AWS SageMaker util file. If a user calls this util function in their code, an attacker can leverage the vulnerability to run arbitrary commands on the system by injecting a system command into the string passed to the function.
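When a command string genuinely must pass through a shell, quoting the untrusted fragment neutralizes shell metacharacters. A generic stdlib sketch (not the SageMaker util itself; the `pkg` value is a hypothetical attacker-influenced string):

```python
import shlex
import subprocess

# Hypothetical attacker-influenced value reaching the command string.
pkg = "numpy; rm -rf ~"

# shlex.quote wraps the value so the shell treats it as one literal word;
# without it, the "; rm -rf ~" part would run as a second command.
cmd = f"echo {shlex.quote(pkg)}"
out = subprocess.run(cmd, shell=True, capture_output=True, text=True).stdout
print(out.strip())  # numpy; rm -rf ~  (echoed as text, never executed)
```

Avoiding shell=True entirely, as in the list-argument form shown earlier, remains the stronger fix.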
-
CVE-2024-34072 Numpy defaults to allowing Pickle to be run when content type is NPY or NPZ April 30, 2024
An attacker can inject a malicious pickle object into a numpy file and share it with a victim user. When the victim uses the NumpyDeserializer.deserialize function of the base_deserializers Python file to load it, the optional allow_pickle argument can be set to ‘false’ and passed to np.load, leading to safe loading of the file. However, the parameter defaulted to true, so if the victim does not specifically change it, the malicious pickle object is loaded and executed.
-
R
An attacker could leverage the R Data Serialization format to insert arbitrary code into an RDS-formatted file, or into an R package as an RDX or RDB component, which will be executed when referenced or called with readRDS. This is due to the lazy evaluation process used in the unserialize function of the R programming language.
February 2024
-
ONNX
An attacker can create a malicious ONNX model that fails an assert statement such that an error string of 2048 characters or more is printed, and share it with a victim. When the victim tries to load the ONNX model, a string is created that leaks program memory.
-
CVE-2024-27318 Path sanitization bypass leading to arbitrary read February 23, 2024
An attacker can create a malicious ONNX model containing paths to externally located tensors and share it with a victim. When the victim tries to load the externally located tensors, a directory traversal attack can occur, leading to an arbitrary read on the victim’s system and resulting in information disclosure.
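Path traversal bugs of this kind are typically closed by resolving the candidate path and checking containment before any I/O. A stdlib sketch follows; `is_contained` is a hypothetical helper, not onnx's actual code.

```python
import os

# Resolve both paths and verify the target stays inside the model
# directory; realpath collapses "../" components and symlinks, so a
# traversal payload escapes the base and fails the check.
def is_contained(base_dir: str, relative_path: str) -> bool:
    base = os.path.realpath(base_dir)
    target = os.path.realpath(os.path.join(base, relative_path))
    return os.path.commonpath([base, target]) == base

print(is_contained("/tmp/model", "tensors/weight0.bin"))  # True
print(is_contained("/tmp/model", "../../etc/passwd"))     # False
```

The same containment check applies to the write-side traversals described elsewhere in this advisory list (e.g. crafted archives or datasets that specify target paths).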
-
ClearML
An attacker can upload or modify a dataset containing a link pointing to an arbitrary file and a target file path. When a user interacts with this dataset, such as when using the Dataset.squash method, the file is written to the target path on the user’s system.
-
CVE-2024-24595 Credentials Stored in Plaintext in MongoDB Instance February 1, 2024
An attacker could retrieve ClearML user information and credentials using a tool such as mongosh if they have access to the server. This is because the open-source version of the ClearML Server MongoDB instance lacks access control and stores user information and credentials in plaintext.
-
CVE-2024-24594 Web Server Renders User HTML Leading to XSS February 1, 2024
An attacker can provide a URL rather than uploading an image to the Debug Samples tab of an Experiment. If the URL has the extension .html, the web server retrieves the HTML page, which is assumed to contain trusted data. The HTML is marked as safe and rendered on the page, resulting in arbitrary JavaScript running in any user’s browser when they view the samples tab.
-
CVE-2024-24593 Cross-site Request Forgery in ClearML Server February 1, 2024
An attacker can craft a malicious web page that triggers a CSRF when visited. When a user browses to the malicious web page, a request is sent that can allow an attacker to fully compromise the user’s account.
-
CVE-2024-24592 Improper Auth Leading to Arbitrary Read-Write Access February 1, 2024
An attacker can, due to lack of authentication, arbitrarily upload, delete, modify, or download files on the fileserver, even if the files belong to another user.
-
CVE-2024-24590 Pickle Load on Artifact Get Leading to Code Execution February 1, 2024
An attacker can create a pickle file containing arbitrary code and upload it as an artifact to a Project via the API. When a victim user calls the get method within the Artifact class to download and load a file into memory, the pickle file is deserialized on their system, running any arbitrary code it contains.