Innovation Hub

Featured Posts

Insights
xx
min read

Introducing Workflow-Aligned Modules in the HiddenLayer AI Security Platform

Insights
xx
min read

Inside HiddenLayer’s Research Team: The Experts Securing the Future of AI

Insights
xx
min read

Why Traditional Cybersecurity Won’t “Fix” AI

Get all our Latest Research & Insights

Explore our glossary to get clear, practical definitions of the terms shaping AI security, governance, and risk management.

Research

Research
xx
min read

Agentic ShadowLogic

Research
xx
min read

MCP and the Shift to AI Systems

Research
xx
min read

The Lethal Trifecta and How to Defend Against It

Research
xx
min read

EchoGram: The Hidden Vulnerability Undermining AI Guardrails

Videos

Report and Guides

Report and Guide
xx
min read

Securing AI: The Technology Playbook

Report and Guide
xx
min read

Securing AI: The Financial Services Playbook

Report and Guide
xx
min read

AI Threat Landscape Report 2025

HiddenLayer AI Security Research Advisory

CVE-2025-62354
XX
min read

Allowlist Bypass in Run Terminal Tool Allows Arbitrary Code Execution During Autorun Mode

When in autorun mode with the secure ‘Follow Allowlist’ setting, Cursor checks commands sent to run in the terminal by the agent to see if a command has been specifically allowed. The function that checks the command has a bypass to its logic, allowing an attacker to craft a command that will execute non-whitelisted commands.

SAI-ADV-2025-012
XX
min read

Data Exfiltration from Tool-Assisted Setup

Windsurf’s automated tools can execute instructions contained within project files without asking for user permission. This means an attacker can hide instructions within a project file to read and extract sensitive data from project files (such as a .env file) and insert it into web requests for the purposes of exfiltration.

CVE-2025-62353
XX
min read

Path Traversal in File Tools Allowing Arbitrary Filesystem Access

A path traversal vulnerability exists within Windsurf’s codebase_search and write_to_file tools. These tools do not properly validate input paths, enabling access to files outside the intended project directory, which can provide attackers a way to read from and write to arbitrary locations on the target user’s filesystem.

CVE-2025-62356
XX
min read

Symlink Bypass in File System MCP Server Leading to Arbitrary Filesystem Read

A symlink bypass vulnerability exists inside of the built-in File System MCP server, allowing any file on the filesystem to be read by the model. The code that validates allowed paths can be found in the file: ai/codium/mcp/ideTools/FileSystem.java, but this validation can be bypassed if a symbolic link exists within the project.

In the News

News
XX
min read
HiddenLayer Selected as Awardee on $151B Missile Defense Agency SHIELD IDIQ Supporting the Golden Dome Initiative

Underpinning HiddenLayer’s unique solution for the DoD and USIC is HiddenLayer’s Airgapped AI Security Platform, the first solution designed to protect AI models and development processes in fully classified, disconnected environments. Deployed locally within customer-controlled environments, the platform supports strict US Federal security requirements while delivering enterprise-ready detection, scanning, and response capabilities essential for national security missions.

News
XX
min read
HiddenLayer Announces AWS GenAI Integrations, AI Attack Simulation Launch, and Platform Enhancements to Secure Bedrock and AgentCore Deployments

As organizations rapidly adopt generative AI, they face increasing risks of prompt injection, data leakage, and model misuse. HiddenLayer’s security technology, built on AWS, helps enterprises address these risks while maintaining speed and innovation.

News
XX
min read
HiddenLayer Joins Databricks’ Data Intelligence Platform for Cybersecurity

On September 30, Databricks officially launched its <a href="https://www.databricks.com/blog/transforming-cybersecurity-data-intelligence?utm_source=linkedin&amp;utm_medium=organic-social">Data Intelligence Platform for Cybersecurity</a>, marking a significant step in unifying data, AI, and security under one roof. At HiddenLayer, we’re proud to be part of this new data intelligence platform, as it represents a significant milestone in the industry's direction.

Insights
xx
min read

AI Security: 2025 Predictions Recommendations

It’s time to dust off the crystal ball once again! Over the past year, AI has truly been at the forefront of cyber security, with increased scrutiny from attackers, defenders, developers, and academia. As various forms of generative AI drive mass AI adoption, we find that the threats are not lagging far behind, with LLMs, RAGs, Agentic AI, integrations, and plugins being a hot topic for researchers and miscreants alike.

Insights
xx
min read

Securely Introducing Open Source Models into Your Organization

Open source models are powerful tools for data scientists, but they also come with risks. If your team downloads models from sources like Hugging Face without security checks, you could introduce security threats into your organization. You can eliminate this risk by introducing a process that scans models for vulnerabilities before they enter your organization and are utilized by data scientists. You can ensure that only safe models are used by leveraging HiddenLayer's Model Scanner combined with your CI/CD platform. In this blog, we'll walk you through how to set up a system where data scientists request models, security checks run automatically, and approved models are stored in a safe location like cloud storage, a model registry, or Databricks Unity Catalog.

Insights
xx
min read

Enhancing AI Security with HiddenLayer’s Refusal Detection

Security risks in AI applications are not one-size-fits-all. A system processing sensitive customer data presents vastly different security challenges compared to one that aggregates internet data for market analysis. To effectively safeguard an AI application, developers and security professionals must implement comprehensive mechanisms that instruct models to decline contextually malicious requests—such as revealing personally identifiable information (PII) or ingesting data from untrusted sources. Monitoring these refusals provides an early and high-accuracy warning system for potential malicious behavior.

Insights
xx
min read

Why Revoking Biden’s AI Executive Order Won’t Change Course for CISOs

On 20 January 2025, President Donald Trump rescinded former President Joe Biden’s 2023 executive order on artificial intelligence (AI), which had established comprehensive guidelines for developing and deploying AI technologies. While this action signals a shift in federal policy, its immediate impact on the AI landscape is minimal for several reasons.

Insights
xx
min read

HiddenLayer Achieves ISO 27001 and Renews SOC 2 Type 2 Compliance

Security compliance is more than just a checkbox - it’s a fundamental requirement for protecting sensitive data, building customer trust, and ensuring long-term business growth. At HiddenLayer, security has always been at the core of our mission, and we’re proud to announce that we have achieved SOC 2 Type 2 and ISO 27001 compliance. These certifications reinforce our commitment to providing our customers with the highest level of security and reliability.

Insights
xx
min read

AI Risk Management: Effective Strategies and Framework

Artificial Intelligence (AI) is no longer just a buzzword—it’s a cornerstone of innovation across industries. However, with great potential comes significant risk. Effective AI Risk Management is critical to harnessing AI’s benefits while minimizing vulnerabilities. From data breaches to adversarial attacks, understanding and mitigating risks ensures that AI systems remain trustworthy, secure, and aligned with organizational goals.

Insights
xx
min read

Security for AI vs. AI Security

When we talk about securing AI, it’s important to distinguish between two concepts that are often conflated: Security for AI and AI Security. While they may sound similar, they address two entirely different challenges.

Insights
xx
min read

The Next Step in AI Red Teaming, Automation

Red teaming is essential in security, actively probing defenses, identifying weaknesses, and assessing system resilience under simulated attacks. For organizations that manage critical infrastructure, every vulnerability poses a risk to data, services, and trust. As systems grow more complex and threats become more sophisticated, traditional red teaming encounters limits, particularly around scale and speed. To address these challenges, we built the next step in red teaming: an <a href="https://hiddenlayer.com/autortai/"><strong>Automated Red Teaming for AI solution</strong><strong> </strong>that combines intelligence and efficiency to achieve a level of depth and scalability beyond what human-led efforts alone can offer.

Insights
xx
min read

Understanding AI Data Poisoning

Today, AI is woven into everyday technology, driving everything from personalized recommendations to critical healthcare diagnostics. But what happens if the data feeding these AI models is tampered with? This is the risk posed by AI data poisoning—a targeted attack where someone intentionally manipulates training data to disrupt how AI systems operate. Far from science fiction, AI data poisoning is a growing digital security threat that can have real-world impacts on everything from personal safety to financial stability.

Insights
xx
min read

The EU AI Act: A Groundbreaking Framework for AI Regulation

Artificial intelligence (AI) has become a central part of our digital society, influencing everything from healthcare to transportation, finance, and beyond. The European Union (EU) has recognized the need to regulate AI technologies to protect citizens, foster innovation, and ensure that AI systems align with European values of privacy, safety, and accountability. In this context, the EU AI Act is the world’s first comprehensive legal framework for AI. The legislation aims to create an ecosystem of trust in AI while balancing the risks and opportunities associated with its development.

Insights
xx
min read

Key Takeaways from NIST's Recent Guidance

On July 29th, 2024, the National Institute of Standards and Technology (NIST) released critical guidance that outlines best practices for managing cybersecurity risks associated with AI models. This guidance directly ties into several comments we submitted during the open comment periods, highlighting areas where HiddenLayer effectively addresses emerging cybersecurity challenges.

Insights
xx
min read

Three Distinct Categories Of AI Red Teaming

As we’ve covered previously, AI red teaming is a highly effective means of assessing and improving the security of AI systems. The term “red teaming” appears many times throughout recent public policy briefings regarding AI.

research
xx
min read

Attack on AWS Bedrock’s ‘Titan’

research
xx
min read

ShadowLogic

Stay Ahead of AI Security Risks

Get research-driven insights, emerging threat analysis, and practical guidance on securing AI systems—delivered to your inbox.

By submitting this form, you agree to HiddenLayer's Terms of Use and acknowledge our Privacy Statement.

Thanks for your message!

We will reach back to you as soon as possible.

Oops! Something went wrong while submitting the form.