Innovation Hub

The AI Security Playbook

Summary: As AI rapidly transforms business operations across industries, it brings unprecedented security vulnerabilities that existing tools simply weren’t designed to address. This article reveals the hidden dangers lurking within AI systems, where attackers leverage runtime vulnerabilities to exploit model weaknesses, and introduces a comprehensive security framework that protects the entire AI lifecycle. Through the […]

Exploiting MCP Tool Parameters

Summary: HiddenLayer’s research team has uncovered a concerningly simple way of extracting sensitive data using MCP tools. Inserting specific parameter names into a tool’s function causes the client to provide the corresponding sensitive information in its response when that tool is called. This occurs regardless of whether the inserted parameter is actually used by […]
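One way to reduce this risk on the client or host side is to validate tool-call arguments against the tool’s declared schema before anything is populated. The following Python sketch is purely illustrative; the function and parameter names are hypothetical and not drawn from any specific MCP implementation.

```python
# Hypothetical defensive sketch: only pass through tool-call arguments that
# the tool actually declared, so an injected parameter name cannot be
# silently populated with sensitive data. All names here are illustrative.

SENSITIVE_HINTS = {"api_key", "token", "password", "secret", "credential"}

def vet_tool_arguments(declared_params: set, call_args: dict) -> dict:
    """Return only arguments declared in the tool's schema.

    Raises ValueError if an undeclared, sensitive-looking parameter
    appears, which may indicate a parameter-injection attempt.
    """
    cleaned = {}
    for name, value in call_args.items():
        if name not in declared_params:
            if name.lower() in SENSITIVE_HINTS:
                raise ValueError(f"undeclared sensitive parameter: {name!r}")
            continue  # silently drop other undeclared arguments
        cleaned[name] = value
    return cleaned

declared = {"query", "max_results"}
print(vet_tool_arguments(declared, {"query": "weather", "max_results": 5}))
# {'query': 'weather', 'max_results': 5}
```

Silently dropping unknown arguments keeps benign tools working, while loudly rejecting sensitive-looking ones surfaces possible injection attempts for review.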

Reports and Guides

AI Threat Landscape Report 2025

Download your copy of HiddenLayer's 2025 AI Threat Landscape Report to learn more about evolving AI vulnerabilities and how securing AI can fuel your organization's innovation.

HiddenLayer Named a Cool Vendor in AI Security

A Step-By-Step Guide for CISOs

SAI Security Advisories

CVE-2024-0129

NVIDIA NeMo Vulnerability Report

An attacker can craft a malicious model containing a path traversal and share it with a victim. If the victim loads the malicious model using an NVIDIA NeMo version prior to r2.0.0rc0, arbitrary files may be written to disk. This can result in code execution and data tampering.
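The underlying flaw generalizes to any loader that extracts archive members to disk. Below is a hedged sketch of a containment check that blocks such traversal; it is not NeMo’s actual code, just an illustration of the defense.

```python
# Illustrative containment check (not NeMo's actual code): resolve the
# member path under the extraction directory and refuse anything that
# escapes it, e.g. a member named '../../.bashrc'.
import os

def safe_extract_path(dest_dir: str, member_name: str) -> str:
    """Join member_name under dest_dir, rejecting path traversal."""
    dest_dir = os.path.realpath(dest_dir)
    target = os.path.realpath(os.path.join(dest_dir, member_name))
    if os.path.commonpath([dest_dir, target]) != dest_dir:
        raise ValueError(f"path traversal detected: {member_name!r}")
    return target

print(safe_extract_path("/srv/models", "weights/layer0.bin"))  # allowed
```

Resolving both paths with `realpath` before comparing prefixes also defeats `..` sequences hidden behind symlinks, which naive string checks miss.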

CVE-2024-24590

Pickle Load on Artifact Get Leading to Code Execution

An attacker can create a pickle file containing arbitrary code and upload it as an artifact to a Project via the API. When a victim user calls the get method within the Artifact class to download and load a file into memory, the pickle file is deserialized on their system, running any arbitrary code it contains.
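Why loading alone is dangerous can be shown in a few lines: pickle rebuilds objects by calling whatever callable `__reduce__` names, so deserialization itself executes attacker-chosen code. The following self-contained illustration uses a benign payload and is not ClearML’s code.

```python
# Benign, self-contained demonstration of why deserializing untrusted
# pickle data is code execution: unpickling calls the callable returned
# by __reduce__, so merely loading the artifact runs the payload.
import pickle

executed = []

def attacker_payload(msg):
    executed.append(msg)  # stands in for arbitrary code, e.g. os.system(...)
    return msg

class MaliciousArtifact:
    def __reduce__(self):
        # Tells pickle: "to rebuild this object, call attacker_payload(...)"
        return (attacker_payload, ("code ran during load",))

blob = pickle.dumps(MaliciousArtifact())
pickle.loads(blob)  # loading alone triggers the call
print(executed)     # ['code ran during load']
```

This is why untrusted artifacts should be scanned or loaded with a safer format (e.g. a restricted unpickler or a non-executable serialization) rather than passed straight to `pickle.load`.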

CVE-2024-24591

Path Traversal on File Download Leading to Arbitrary Write

An attacker can upload or modify a dataset containing a link pointing to an arbitrary file and a target file path. When a user interacts with this dataset, such as when using the Dataset.squash method, the file is written to the target path on the user’s system.

CVE-2024-24592

Improper Auth Leading to Arbitrary Read-Write Access

Due to a lack of authentication, an attacker can arbitrarily upload, delete, modify, or download files on the fileserver, even when the files belong to another user.
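A hedged sketch of the missing control follows; the names are illustrative and do not reflect ClearML’s actual API. The point is that every fileserver operation should be tied to an authenticated user and an ownership check.

```python
# Hedged sketch of the missing control; names are illustrative, not
# ClearML's actual API. Every fileserver action is checked against an
# authenticated user and the file's recorded owner.

FILE_OWNERS = {"reports/q3.pdf": "alice"}  # path -> owning user

def authorize(user, path, action):
    if user is None:                # unauthenticated requests are rejected
        return False
    owner = FILE_OWNERS.get(path)
    if owner is None:
        return action == "upload"   # only creating a new path is allowed
    return owner == user            # existing files: owner-only access

print(authorize("alice", "reports/q3.pdf", "download"))  # True
print(authorize("bob", "reports/q3.pdf", "delete"))      # False
print(authorize(None, "reports/q3.pdf", "download"))     # False
```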

CVE-2024-24593

Cross-site Request Forgery in ClearML Server

An attacker can craft a malicious web page that triggers a cross-site request forgery (CSRF) when visited. When a user browses to the page, a forged request is sent on their behalf, which can allow the attacker to fully compromise the user’s account.
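A common mitigation is a per-session CSRF token that state-changing requests must echo back: a forged cross-site request cannot read the token, so it fails the check. The sketch below is framework-agnostic, with illustrative names, and does not describe ClearML Server’s actual fix.

```python
# Framework-agnostic sketch: a per-session CSRF token that state-changing
# requests must echo back. A forged cross-site request cannot read the
# token, so it fails the constant-time comparison.
import hmac
import secrets

def issue_csrf_token(session: dict) -> str:
    session["csrf"] = secrets.token_hex(16)
    return session["csrf"]

def check_csrf(session: dict, submitted) -> bool:
    expected = session.get("csrf")
    return bool(expected and submitted and
                hmac.compare_digest(expected, submitted))

session = {}
token = issue_csrf_token(session)
print(check_csrf(session, token))   # True  (legitimate same-site request)
print(check_csrf(session, None))    # False (forged request lacks the token)
```

`hmac.compare_digest` is used instead of `==` so the comparison time does not leak how many leading characters of the token an attacker guessed correctly.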

HiddenLayer in the News

Security for AI Platform Expansion: Introducing Automated Red Teaming for AI

Austin, TX — November 20, 2024 — HiddenLayer, a leader in security for AI solutions, today announced the launch of its Automated Red Teaming solution for artificial intelligence, a transformative tool that enables security teams to rapidly and thoroughly assess generative AI system vulnerabilities. The addition of this new product extends HiddenLayer’s AISec platform capabilities […]

HiddenLayer Recognized as a Gartner Cool Vendor for AI Security in 2024

Austin, TX – October 30, 2024 – HiddenLayer, a leader in security for AI solutions, is honored to be recognized as a Cool Vendor for AI Security in Gartner’s 2024 report. This prestigious distinction highlights HiddenLayer’s innovative approaches to safeguarding artificial intelligence models, data, and workflows against a rapidly evolving threat landscape. HiddenLayer’s proactive solutions […]

HiddenLayer Announces New Features to Safeguard Enterprise AI Models with Improved Risk Detection

Austin, TX – October 8, 2024 – HiddenLayer today announced the launch of several new features to its AISec Platform and Model Scanner, designed to enhance risk detection, scalability, and operational control for enterprises deploying AI at scale. As the pace of AI adoption accelerates, so do the threats targeting these systems, necessitating security measures […]