AI Threat Landscape 2026

Read the full report
Get your copy of the 2026 AI Threat Landscape Report for survey data, technical research, and case studies behind these findings.
You will also be signed up for our accompanying webinar taking place on April 8th.

Download Now
AI is moving from assistant to actor
This year’s survey shows that organizations now depend on AI for revenue, customer experience, and core operations, but many security programs are still built for static models and traditional software controls. Encryption, governance, and secure deployment are becoming common, yet runtime visibility, adversarial testing, and AI-specific incident response remain uneven. As AI systems gain autonomy, connect to tools, and make decisions across workflows, the gap between AI adoption and AI security is becoming a direct business risk.

Key findings at a glance
88%
of organizations say most or all internally operated AI models are critical to business success.
78%
97%
69%
71%
76%
93%
19%
What’s New in AI
Emulate real-world adversarial attacks against your agentic, generative, and predictive AI applications to uncover vulnerabilities before they can be exploited. Our expert team leverages cutting-edge techniques in model manipulation, prompt injection, and inference attacks to deliver actionable remediation strategies.

AI is now business-critical. Security has not caught up
The biggest takeaway from this year’s report is the widening gap between how AI is being deployed and how it is being secured. Many organizations have foundational controls in place, but agentic AI demands more than secure deployment and policy statements. It requires continuous accountability, runtime visibility, clear ownership, and security controls built for systems that can act on their own.

What’s New in AI
Four shifts matter most in this year’s AI threat landscape. Models became better at deep reasoning. Smaller edge models got stronger and cheaper to deploy. Agentic systems moved into everyday business tools. And protocols such as MCP, A2A, and AP2 started to standardize how agents connect to tools, other agents, and payments. That combination made AI more useful, more distributed, and more exposed.

AI is now business-critical. Security has not caught up
The biggest takeaway from this year’s report is the widening gap between how AI is being deployed and how it is being secured. Many organizations have foundational controls in place, but agentic AI demands more than secure deployment and policy statements. It requires continuous accountability, runtime visibility, clear ownership, and security controls built for systems that can act on their own.

Where attackers are getting in
Public model repositories and open-weight ecosystems
Internal and external chatbots
Third-party AI applications
Agents and tool-using systems
Connected protocols such as MCP, A2A, and AP2, where insecure defaults can turn prompt injection into data exfiltration, code execution, or credential theft
The rise of agentic AI changes the threat model
Agentic AI turns models into actors.
Once AI can browse the web, retrieve enterprise data, modify files, call tools, and coordinate with other agents, an attack is no longer just about getting a bad answer. It can become a real operational incident.
Agentic AI also moved from demos into production
AI browsers, coding assistants, workflow tools, and business platforms began taking action on a user’s behalf, browsing the web, pulling data, editing files, and completing tasks. In this model, the question is no longer only whether a model can answer. It is whether the system can act safely when it answers, especially when those actions touch sensitive data or business-critical workflows.
What Indirect prompt injections can hide
The report shows how indirect prompt injection can hide in documentation, code comments, shared documents, websites, dependencies, and tool descriptions. A poisoned README can manipulate a coding assistant. A malicious MCP server can exfiltrate secrets. A poisoned memory or RAG pipeline can keep influencing future decisions long after the initial attack.
Five threat areas every security team should watch
Data poisoning and backdoors.
Very small amounts of poisoned data can compromise model behavior, including in high-risk use cases like healthcare.
AI supply chain attacks.
Models, configs, tokenizers, plugins, tool servers, workflow files, and third-party integrations all expand the attack surface.
Prompt injection and guardrail bypass.
Guardrails still matter, but the report shows they are routinely bypassed or attacked directly.
Memory and RAG poisoning.
Agents can be manipulated through the information they retrieve, store, or summarize.
Model evasion.
Adversarial inputs continue to break vision and multimodal systems, including safety-critical applications.
AI misuse is already causing real-world harm
There is real progress across the ecosystem. The report highlights expanding work from MITRE ATLAS and SAFE-AI, CoSAI, OWASP, NIST, MAESTRO, model signing efforts, and AIBOM initiatives. These frameworks matter because they help translate abstract AI risk into practical security requirements, testing, governance, and supply chain controls. But framework adoption alone is not enough without runtime protection and operational discipline.

Original research that makes this report stand out
One of the strongest parts of this year’s report is its original research. Highlights include:
Policy Puppetry
a universal jailbreak technique that bypassed instruction hierarchy and safety guardrails across major frontier models.
TokenBreak and EchoGram
which show how guardrail systems themselves can be bypassed or manipulated.
ShadowGenes
a model genealogy approach for tracing lineage and validating architecture claims across model families.
ShadowLogic
a persistent backdoor technique that survives format conversion and downstream fine-tuning.
VISOR
a technique for steering model behavior using images instead of text.

Defenders are moving, but the gap is still wide
There is real progress across the ecosystem. The report highlights expanding work from MITRE ATLAS and SAFE-AI, CoSAI, OWASP, NIST, MAESTRO, model signing efforts, and AIBOM initiatives. These frameworks matter because they help translate abstract AI risk into practical security requirements, testing, governance, and supply chain controls. But framework adoption alone is not enough without runtime protection and operational discipline.
What leaders should do now

Treat AI security as a business and regulatory control, not a feature add-on.
Move beyond guardrails to runtime monitoring, adversarial testing, and AI-specific incident response.
Reassess third-party AI risk, especially for models, agents, integrations, and SaaS tools.
Treat AI security as a business and regulatory control, not a feature add-on.
Align AI governance with business impact, because AI failures can scale faster and farther than traditional software failures.
Frequently Asked Questions
Agentic AI refers to systems that do more than answer questions. They can plan, use tools, retrieve data, execute workflows, and act across applications or services with limited human intervention.
Because it expands the attack surface. A prompt injection can now trigger downstream actions such as tool misuse, data access, code execution, or cross-system movement.
No. Guardrails help, but the report shows they can be bypassed, manipulated, or attacked directly. Effective AI security needs runtime monitoring, testing, governance, and incident response.
Because organizations increasingly rely on third-party models, datasets, open-weight repositories, configs, plugins, tool servers, and hosted inference. Any one of those can become the entry point for compromise.
Ask how they monitor AI behavior in production, how they detect misuse and prompt injection, how they secure agentic workflows, how they verify models and model artifacts, and how they respond to AI-specific incidents
Read the full report.
Read the full report for the survey data, technical research, and case studies behind these findings.

