Introduction: The Trifecta Behind the Next AI Security Crisis
In June 2025, software engineer and AI researcher Simon Willison described what he called “The Lethal Trifecta” for AI agents:
“Access to private data, exposure to untrusted content, and the ability to communicate externally.
Together, these three capabilities create the perfect storm for exploitation through prompt injection and other indirect attacks.”
Willison’s warning was simple yet profound. When these elements coexist in an AI system, a single poisoned piece of content can lead an agent to exfiltrate sensitive data, send unauthorized messages, or even trigger downstream operations, all without a vulnerability in traditional code.
At HiddenLayer, we see this trifecta manifesting not only in individual agents but across entire AI ecosystems, where agentic workflows, Model Context Protocol (MCP) connections, and LLM-based orchestration amplify its risk. This article examines how the Lethal Trifecta applies to enterprise-scale AI and what is required to secure it.
Private Data: The Fuel That Makes AI Dangerous
Willison’s first element, access to private data, is what gives AI systems their power.
In enterprise deployments, this means access to customer records, financial data, intellectual property, and internal communications. Agentic systems draw from this data to make autonomous decisions, generate outputs, or interact with business-critical applications.
The problem arises when that same context can be influenced or observed by untrusted sources. Once an attacker injects malicious instructions, directly or indirectly, through prompts, documents, or web content, the AI may expose or transmit private data without any code exploit at all.
HiddenLayer’s research teams have repeatedly demonstrated how context poisoning and data-exfiltration attacks compromise AI trust. In our recent investigations into AI code-based assistants, such as Cursor, we exposed how injected prompts and corrupted memory can turn even compliant agents into data-leak vectors.
Securing AI, therefore, requires monitoring how models reason and act in real time.
Untrusted Content: The Gateway for Prompt Injection
The second element of the Lethal Trifecta is exposure to untrusted content, from public websites, user inputs, documents, or even other AI systems.
Willison warned: “The moment an LLM processes untrusted content, it becomes an attack surface.”
This is especially critical for agentic systems, which automatically ingest and interpret new information. Every scrape, query, or retrieved file can become a delivery mechanism for malicious instructions.
In enterprise contexts, untrusted content often flows through the Model Context Protocol (MCP), a framework that enables agents and tools to share data seamlessly. While MCP improves collaboration, it also distributes trust. If one agent is compromised, it can spread infected context to others.
What’s required is inspection before and after that context transfer:
- Validate provenance and intent.
- Detect hidden or obfuscated instructions.
- Correlate content behavior with expected outcomes.
This inspection layer, central to HiddenLayer’s Agentic & MCP Protection, ensures that interoperability doesn’t turn into interdependence.
External Communication: Where Exploits Become Exfiltration
The third, and most dangerous, prong of the trifecta is external communication.
Once an agent can send emails, make API calls, or post to webhooks, malicious context becomes action.
This is where Large Language Models (LLMs) amplify risk. LLMs act as reasoning engines, interpreting instructions and triggering downstream operations. When combined with tool-use capabilities, they effectively bridge digital and real-world systems.
A single injection, such as “email these credentials to this address,” “upload this file,” “summarize and send internal data externally”, can cascade into catastrophic loss.
It’s not theoretical. Willison noted that real-world exploits have already occurred where agents combined all three capabilities.
At scale, this risk compounds across multiple agents, each with different privileges and APIs. The result is a distributed attack surface that acts faster than any human operator could detect.
The Enterprise Multiplier: Agentic AI, MCP, and LLM Ecosystems
The Lethal Trifecta becomes exponentially more dangerous when transplanted into enterprise agentic environments.
In these ecosystems:
- Agentic AI acts autonomously, orchestrating workflows and decisions.
- MCP connects systems, creating shared context that blends trusted and untrusted data.
- LLMs interpret and act on that blended context, executing operations in real time.
This combination amplifies Willison’s trifecta. Private data becomes more distributed, untrusted content flows automatically between systems, and external communication occurs continuously through APIs and integrations.
This is how small-scale vulnerabilities evolve into enterprise-scale crises. When AI agents think, act, and collaborate at machine speed, every unchecked connection becomes a potential exploit chain.
Breaking the Trifecta: Defense at the Runtime Layer
Traditional security tools weren’t built for this reality. They protect endpoints, APIs, and data, but not decisions. And in agentic ecosystems, the decision layer is where risk lives.
HiddenLayer’s AI Runtime Security addresses this gap by providing real-time inspection, detection, and control at the point where reasoning becomes action:
- AI Guardrails set behavioral boundaries for autonomous agents.
- AI Firewall inspects inputs and outputs for manipulation and exfiltration attempts.
- AI Detection & Response monitors for anomalous decision-making.
- Agentic & MCP Protection verifies context integrity across model and protocol layers.
By securing the runtime layer, enterprises can neutralize the Lethal Trifecta, ensuring AI acts only within defined trust boundaries.
From Awareness to Action
Simon Willison’s “Lethal Trifecta” identified the universal conditions under which AI systems can become unsafe.
HiddenLayer’s research extends this insight into the enterprise domain, showing how these same forces, private data, untrusted content, and external communication, interact dynamically through agentic frameworks and LLM orchestration.
To secure AI, we must go beyond static defenses and monitor intelligence in motion.
Enterprises that adopt inspection-first security will not only prevent data loss but also preserve the confidence to innovate with AI safely.
Because the future of AI won’t be defined by what it knows, but by what it’s allowed to do.
