Innovation Hub

Featured Posts

Insights
xx
min read

Introducing Workflow-Aligned Modules in the HiddenLayer AI Security Platform

Insights
xx
min read

Inside HiddenLayer’s Research Team: The Experts Securing the Future of AI

Insights
xx
min read

Why Traditional Cybersecurity Won’t “Fix” AI

Get all our Latest Research & Insights

Explore our glossary to get clear, practical definitions of the terms shaping AI security, governance, and risk management.

Research

Research
xx
min read

Agentic ShadowLogic

Research
xx
min read

MCP and the Shift to AI Systems

Research
xx
min read

The Lethal Trifecta and How to Defend Against It

Research
xx
min read

EchoGram: The Hidden Vulnerability Undermining AI Guardrails

Videos

Report and Guides

Report and Guide
xx
min read

Securing AI: The Technology Playbook

Report and Guide
xx
min read

Securing AI: The Financial Services Playbook

Report and Guide
xx
min read

AI Threat Landscape Report 2025

HiddenLayer AI Security Research Advisory

CVE-2025-62354
XX
min read

Allowlist Bypass in Run Terminal Tool Allows Arbitrary Code Execution During Autorun Mode

When in autorun mode with the secure ‘Follow Allowlist’ setting, Cursor checks commands sent to run in the terminal by the agent to see if a command has been specifically allowed. The function that checks the command has a bypass to its logic, allowing an attacker to craft a command that will execute non-whitelisted commands.

SAI-ADV-2025-012
XX
min read

Data Exfiltration from Tool-Assisted Setup

Windsurf’s automated tools can execute instructions contained within project files without asking for user permission. This means an attacker can hide instructions within a project file to read and extract sensitive data from project files (such as a .env file) and insert it into web requests for the purposes of exfiltration.

CVE-2025-62353
XX
min read

Path Traversal in File Tools Allowing Arbitrary Filesystem Access

A path traversal vulnerability exists within Windsurf’s codebase_search and write_to_file tools. These tools do not properly validate input paths, enabling access to files outside the intended project directory, which can provide attackers a way to read from and write to arbitrary locations on the target user’s filesystem.

CVE-2025-62356
XX
min read

Symlink Bypass in File System MCP Server Leading to Arbitrary Filesystem Read

A symlink bypass vulnerability exists inside of the built-in File System MCP server, allowing any file on the filesystem to be read by the model. The code that validates allowed paths can be found in the file: ai/codium/mcp/ideTools/FileSystem.java, but this validation can be bypassed if a symbolic link exists within the project.

In the News

News
XX
min read
HiddenLayer Selected as Awardee on $151B Missile Defense Agency SHIELD IDIQ Supporting the Golden Dome Initiative

Underpinning HiddenLayer’s unique solution for the DoD and USIC is HiddenLayer’s Airgapped AI Security Platform, the first solution designed to protect AI models and development processes in fully classified, disconnected environments. Deployed locally within customer-controlled environments, the platform supports strict US Federal security requirements while delivering enterprise-ready detection, scanning, and response capabilities essential for national security missions.

News
XX
min read
HiddenLayer Announces AWS GenAI Integrations, AI Attack Simulation Launch, and Platform Enhancements to Secure Bedrock and AgentCore Deployments

As organizations rapidly adopt generative AI, they face increasing risks of prompt injection, data leakage, and model misuse. HiddenLayer’s security technology, built on AWS, helps enterprises address these risks while maintaining speed and innovation.

News
XX
min read
HiddenLayer Joins Databricks’ Data Intelligence Platform for Cybersecurity

On September 30, Databricks officially launched its <a href="https://www.databricks.com/blog/transforming-cybersecurity-data-intelligence?utm_source=linkedin&amp;utm_medium=organic-social">Data Intelligence Platform for Cybersecurity</a>, marking a significant step in unifying data, AI, and security under one roof. At HiddenLayer, we’re proud to be part of this new data intelligence platform, as it represents a significant milestone in the industry's direction.

Insights
xx
min read

Introducing Workflow-Aligned Modules in the HiddenLayer AI Security Platform

Modern AI environments don’t fail because of a single vulnerability. They fail when security can’t keep pace with how AI is actually built, deployed, and operated. That’s why our latest platform update represents more than a UI refresh. It’s a structural evolution of how AI security is delivered.

Insights
xx
min read

Inside HiddenLayer’s Research Team: The Experts Securing the Future of AI

Every new AI model expands what’s possible and what’s vulnerable. Protecting these systems requires more than traditional cybersecurity. It demands expertise in how AI itself can be manipulated, misled, or attacked. Adversarial manipulation, data poisoning, and model theft represent new attack surfaces that traditional cybersecurity isn’t equipped to defend.

Insights
xx
min read

Why Traditional Cybersecurity Won’t “Fix” AI

When an AI system misbehaves, from leaking sensitive data to producing manipulated outputs, the instinct across the industry is to reach for familiar tools: patch the issue, run another red team, test more edge cases.

Insights
xx
min read

Securing AI Through Patented Innovation

As AI systems power critical decisions and customer experiences, the risks they introduce must be addressed. From prompt injection attacks to adversarial manipulation and supply chain threats, AI applications face vulnerabilities that traditional cybersecurity can’t defend against. HiddenLayer was built to solve this problem, and today, we hold one of the world’s strongest intellectual property portfolios in AI security.

Insights
xx
min read

AI Discovery in Development Environments

AI is reshaping how organizations build and deliver software. From customer-facing applications to internal agents that automate workflows, AI is being woven into the code we develop and deploy in the cloud. But as the pace of adoption accelerates, most organizations lack visibility into what exactly is inside the AI systems they are building.

Insights
xx
min read

Integrating AI Security into the SDLC

AI and ML systems are expanding the software attack surface in new and evolving ways, through model theft, adversarial evasion, prompt injection, data poisoning, and unsafe model artifacts. These risks can’t be fully addressed by traditional application security alone. They require AI-specific defenses integrated directly into the Software Development Lifecycle (SDLC).

Insights
xx
min read

Top 5 AI Threat Vectors in 2025

AI is powering the next generation of innovation. Whether driving automation, enhancing customer experiences, or enabling real-time decision-making, it has become inseparable from core business operations. However, as the value of AI systems grows, so does the incentive to exploit them.

Insights
xx
min read

LLM Security 101: Guardrails, Alignment, and the Hidden Risks of GenAI

AI systems are used to create significant benefits in a wide variety of business processes, such as customs and border patrol inspections, improving airline maintenance, and for medical diagnostics to enhance patient care. Unfortunately, threat actors are targeting the AI systems we rely on to enhance customer experience, increase revenue, or improve manufacturing margins. By manipulating prompts, attackers can trick large language models (LLMs) into sharing dangerous information,&nbsp; leaking sensitive data, or even providing the wrong information, which could have even greater impact given how AI is being deployed in critical functions. From public-facing bots to internal AI agents, the risks are real and evolving fast.

Insights
xx
min read

AI Coding Assistants at Risk

From autocomplete to full-blown code generation, AI-powered development tools like Cursor are transforming the way software is built. They’re fast, intuitive, and trusted by some of the world’s most recognized brands, such as Samsung, Shopify, monday.com, US Foods, and more.

Insights
xx
min read

OpenSSF Model Signing for Safer AI Supply Chains

The future of artificial intelligence depends not just on powerful models but also on our ability to trust them. As AI models become the backbone of countless applications, from healthcare diagnostics to financial systems, their integrity and security have never been more important. Yet the current AI ecosystem faces a fundamental challenge: How does one prove that the model to be deployed is exactly what the creator intended? Without layered verification mechanisms, organizations risk deploying compromised, tampered, or maliciously modified models, which could lead to potentially catastrophic consequences.

Insights
xx
min read

Structuring Transparency for Agentic AI

As generative AI evolves into more autonomous, agent-driven systems, the way we document and govern these models must evolve too. Traditional methods of model documentation, built for static, prompt-based models, are no longer sufficient. The industry is entering a new era where transparency isn't optional, it's structural.

Insights
xx
min read

Built-In AI Model Governance

A large financial institution is preparing to deploy a new fraud detection model. However, progress has stalled.

research
xx
min read

Agentic ShadowLogic

research
xx
min read

MCP and the Shift to AI Systems

research
xx
min read

The Lethal Trifecta and How to Defend Against It

research
xx
min read

EchoGram: The Hidden Vulnerability Undermining AI Guardrails

research
xx
min read

Same Model, Different Hat

research
xx
min read

The Expanding AI Cyber Risk Landscape

research
xx
min read

The First AI-Powered Cyber Attack

research
xx
min read

Prompts Gone Viral: Practical Code Assistant AI Viruses

research
xx
min read

Persistent Backdoors

research
xx
min read

Visual Input based Steering for Output Redirection (VISOR)

research
xx
min read

How Hidden Prompt Injections Can Hijack AI Code Assistants Like Cursor

research
xx
min read

Introducing a Taxonomy of Adversarial Prompt Engineering

Report and Guide
xx
min read

Securing AI: The Technology Playbook

Report and Guide
xx
min read

Securing AI: The Financial Services Playbook

Report and Guide
xx
min read

AI Threat Landscape Report 2025

Report and Guide
xx
min read

HiddenLayer Named a Cool Vendor in AI Security

Report and Guide
xx
min read

A Step-By-Step Guide for CISOS

Report and Guide
xx
min read

AI Threat landscape Report 2024

Report and Guide
xx
min read

HiddenLayer and Intel eBook

Report and Guide
xx
min read

Forrester Opportunity Snapshot

news
xx
min read

IBM Continues AI Push With $500M Enterprise AI Venture Fund

news
xx
min read

IBM Launches $500 Million Enterprise AI Venture Fund

news
xx
min read

HiddenLayer Awarded Phase 2 SBIR Contract by the U.S. Department of Defense

news
xx
min read

HiddenLayer Appoints Malcolm Harkins as Chief Security and Trust Officer

news
xx
min read

Secretary Blinken says U.S. needs to connect to tech ecosystems like Austin

news
xx
min read

HiddenLayer Raises $50M in Series A Funding to Safeguard AI

news
xx
min read

HiddenLayer Wins 2023 SC Award for Most Promising Early-Stage Start Up

news
xx
min read

EU becomes AI regulation pioneer. Senate Judiciary Committee considers reauthorization of Section 702.

news
xx
min read

2023 SC Awards Finalists: Most Promising Early-Stage Start Up

news
xx
min read

Insights From The 2023 RSA Conference: Generative AI, Quantum, And Innovation Sandbox

news
xx
min read

RSAC 2023 Spotlight: AI, Innovation Sandbox, Top New Attack Techniques and More

news
xx
min read

Innovation Sandbox: Cybersecurity Investors Pivot to Safeguarding AI Training Models

SAI Security Advisory

Pickle Load on Recipe Run Leading to Code Execution

A deserialization vulnerability exists within the recipes/cards/__init__.py file within the class BaseCard, in the static method load. An attacker can create an MLProject Recipe containing a malicious pickle file (e.g. pickle.pkl) and a python script that calls BaseCard.load(pickle.pkl). The pickle file will be deserialized when the project is run leading to execution of the arbitrary code on the victim machine.

SAI Security Advisory

Cloudpickle Load on PyTorch Model Load Leading to Code Execution

A deserialization vulnerability exists within the mlflow/pytorch/__init__.py file, within the function _load_model. An attacker can inject a malicious pickle object into a model file on upload which will then be deserialized when the model is loaded, executing the malicious code on the victim machine.

SAI Security Advisory

Cloudpickle Load on Langchain AgentExecutor Model Load Leading to Code Execution

A deserialization vulnerability exists within the mlflow/langchain/utils.py file, within the function _load_from_pickle. An attacker can inject a malicious pickle object into a model file on upload which will then be deserialized when the model is loaded, executing the malicious code on the victim machine.

SAI Security Advisory

Cloudpickle Load on TensorFlow Keras Model Leading to Code Execution

A deserialization vulnerability exists within the mlflow/tensorflow/__init__.py file, within the function _load_custom_objects. An attacker can inject a malicious pickle object into a model file on upload which will then be deserialized when the model is loaded, executing the malicious code on the victim machine.

SAI Security Advisory

Cloudpickle Load on LightGBM SciKit Learn Model Leading to Code Execution

A deserialization vulnerability exists within the mlflow/lightgbm/__init__.py file, within the function _load_model. An attacker can inject a malicious pickle object into a model file on upload which will then be deserialized when the model is loaded, executing the malicious code on the victim machine.

SAI Security Advisory

Pickle Load on Pmdarima Model Load Leading to Code Execution

A deserialization vulnerability exists within the pmdarima/__init__.py file, within the function _load_model. An attacker can inject a malicious pickle object into a model file on upload which will then be deserialized when the model is loaded, executing the malicious code on the victim machine.

SAI Security Advisory

Cloudpickle Load on PyFunc Model Load Leading to Code Execution

A deserialization vulnerability exists within the mlflow/pyfunc/model.py file, within the function _load_pyfunc. An attacker can inject a malicious pickle object into a model file on upload which will then be deserialized when the model is loaded, executing the malicious code on the victim machine.

SAI Security Advisory

Cloudpickle and Pickle Load on Sklearn Model Load Leading to Code Execution

A deserialization vulnerability exists in the sklearn/__init__.py file, within the function _load_model_from_local_file. An attacker can inject a malicious pickle object into a model file on upload which will then be deserialized when the model is loaded, executing the malicious code on the victim machine.

SAI Security Advisory

Pickle Load in Read Pandas Utility Function

The YData profiling library allows users to load pandas datasets from their filesystem using the read_pandas function. This function then grabs the extension of the file and sends it to a loading function based on the extension. One of the supported extensions, and file formats, is the python pickle module. As a result, when a user loads the dataset, arbitrary code will run on their system.

SAI Security Advisory

XSS Injection in HTML Profile Report Generation

ProfileReports can be saved as an HTML file so that they can be viewed directly in the browser. To do this, the program leverages Jinja2 to create templates. However, by default, Jinja2 doesn’t auto-escape any HTML that is rendered resulting in an attacker being able to inject an XSS attack, running arbitrary code when a report is viewed.

SAI Security Advisory

Pickle Load in Serialized Profile Load

Profile reports can be serialized and deserialized through the load/loads and dump/dumps functions allowing people to share reports with each other. Reports are serialized using the Python pickle module which is inherently insecure and can lead to arbitrary code being executed once the file is loaded.

SAI Security Advisory

Model Deserialization Leads to Code Execution

When loading nodes of type OperatorFuncNode Skops allows a model to call functions from within the operator module, specifying both the function and the arguments being passed to it. This system allows an attacker to craft a specialized payload in the form of a model that allows for arbitrary code execution to occur when a malicious model is loaded and compiled.

Stay Ahead of AI Security Risks

Get research-driven insights, emerging threat analysis, and practical guidance on securing AI systems—delivered to your inbox.

By submitting this form, you agree to HiddenLayer's Terms of Use and acknowledge our Privacy Statement.

Thanks for your message!

We will reach back to you as soon as possible.

Oops! Something went wrong while submitting the form.