Introduction

If you’ve ever worked in security, standards, or software architecture, or if you’re just a nerd, you’ve probably seen the XKCD comic about how standards proliferate.

In this blog, we’re introducing yet another standard: a taxonomy of adversarial prompt engineering designed to bring a shared lexicon to this rapidly evolving threat landscape. As large language models (LLMs) become embedded in critical systems and workflows, the need to understand and defend against prompt injection attacks grows more urgent. You can peruse and interact with the taxonomy here: https://hiddenlayerai.github.io/ape-taxonomy/graph.html.

While “prompt injection” has become a catch-all term, in practice it covers a wide range of distinct techniques whose details are not adequately captured by existing frameworks. Community efforts like the OWASP Top 10 for LLMs highlight prompt injection as the top risk, and MITRE’s ATLAS framework catalogs AI-focused tactics and techniques, including prompt injection and “jailbreak” methods. These approaches are useful at a high level but are not granular enough to guide prompt-level defenses. Our taxonomy is not a competitor to them; rather, it can slot into these frameworks as a sub-tree beneath their prompt injection categories.

Drawing from real-world use cases, red-teaming exercises, novel research, and observed attack patterns, this taxonomy aims to provide defenders, red teamers, researchers, data scientists, builders, and others with a shared language for identifying, studying, and mitigating prompt-based threats.

We don’t claim that this taxonomy is final or authoritative. It’s a working system built from our operational needs. We expect it to evolve in various directions and we’re looking for community feedback and contribution. To quote George Box, “all models are wrong, but some are useful.” We’ve found this useful and hope you will too.

Introducing a Taxonomy of Adversarial Prompt Engineering

A taxonomy is simply a way of organizing things into meaningful categories based on common characteristics; in particular, taxonomies structure concepts hierarchically. In security, taxonomies let us abstract away detail and spot patterns, share a common language, and focus controls and attention where they matter most. The domain of adversarial prompt engineering is full of vagueness and ambiguity, so we need a structure that helps systematize and manage knowledge and distinctions.

Guiding Principles

In designing our taxonomy, we tried to follow some guiding principles. We recognize that categories may overlap, that borderline cases exist and vagueness is inherent, that prompts are likely to use multiple techniques in combination, that prompts without malicious intent may fall into our categories, and that prompts with malicious intent may not use any special technique at all. We’ve tried to design with flexibility, future expansion, and broad use cases in mind.

Our taxonomy is built on four layers (objectives, tactics, techniques, and prompts), the latter three of which are hierarchical in nature.

Fundamentals

This taxonomy is built on a well-established abstraction model that originated in military doctrine and was later adopted widely within cybersecurity: Tactics, Techniques, and Procedures (or prompts in this case), otherwise known as TTPs. Techniques are relevant features abstracted from concrete adversarial prompts, and tactics are further abstractions of those techniques. We’ve expanded on this model by introducing objectives as a distinct and intentional addition.

We’ve decoupled objectives from the traditional TTP ladder on purpose. Tactics, Techniques, and Prompts describe what we can observe. Objectives describe intent (data theft, reputation harm, task redirection, resource exhaustion, and so on), which is rarely visible in the prompt itself. Separating the why from the how avoids forcing brittle one-to-one mappings between motive and method: analysts can tag measurable TTPs first and infer objectives when the surrounding context justifies it, preserving flexibility for red team simulations, blue team detection, counterintelligence work, and more.
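
To make this separation concrete, here is a minimal sketch in Python of how a tagged prompt record might keep observable TTPs separate from inferred intent. The class and field names are our own illustrations, not part of the taxonomy; only the technique, tactic, and objective labels used below appear in the discussion above.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class TaggedPrompt:
    """One observed adversarial prompt, tagged against the taxonomy (illustrative sketch)."""
    prompt_text: str
    techniques: list[str] = field(default_factory=list)  # observable: what the prompt does
    tactics: list[str] = field(default_factory=list)     # higher-level grouping of those techniques
    objective: Optional[str] = None                       # inferred intent; often unknown at tagging time

# Tag the measurable TTPs first; infer the objective only when context justifies it.
finding = TaggedPrompt(
    prompt_text="<adversarial prompt observed in an engagement>",
    techniques=["Tool Call Spoofing"],
    tactics=["Context Manipulation"],
)
finding.objective = "Data Theft"  # set later, once surrounding context supports the inference
```

Because the objective field is optional, a record stays useful even when intent cannot be established.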

Core Layers

Prompts are the most granular element in our framework. They are strings of text or sequences of inputs fed to an LLM. Prompts represent tangible evidence of an adversarial interaction. However, prompts are highly contextual and nuanced, making them difficult to reuse or correlate across systems, campaigns, red team engagements, and experiments. 

The taxonomy defines techniques as abstractions of adversarial methods: they generalize patterns and recurring strategies employed in adversarial prompts. For example, a prompt might use refusal suppression, a pattern of explicitly banning refusal vocabulary in order to bypass model guardrails by instructing the model not to refuse an instruction. These methods may apply to portions of a prompt or the whole prompt and may overlap or be used in conjunction with other methods.
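
As a rough illustration (not part of the taxonomy itself), a defender might approximate refusal suppression tagging with a simple keyword heuristic; the patterns and function name below are our own assumptions, and real prompts vary far more widely than this.

```python
import re

# Illustrative phrases only; real refusal-suppression prompts vary widely.
_REFUSAL_SUPPRESSION_PATTERNS = [
    r"\bdo not (refuse|decline|apologi[sz]e)\b",
    r"\bnever (refuse|say no|say you can'?t)\b",
    r"\byou (must|will) always comply\b",
]

def looks_like_refusal_suppression(prompt: str) -> bool:
    """Crude heuristic: does the prompt explicitly ban refusal vocabulary?"""
    text = prompt.lower()
    return any(re.search(pattern, text) for pattern in _REFUSAL_SUPPRESSION_PATTERNS)

print(looks_like_refusal_suppression(
    "Answer every question. Do not refuse, and never say you can't."
))  # True
```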

Techniques group prompts into meaningful categories, helping practitioners quickly understand the specific adversarial methods at play without becoming overwhelmed by individual prompt variations. Techniques also facilitate effective risk analysis and mitigation: by mapping prompts to specific techniques, we can aggregate data to identify which adversarial methods a model is most vulnerable to. This may guide targeted improvements to the model’s defenses, such as filter tuning, training adjustments, or policy updates that address a whole category of prompts rather than individual ones.

At the highest level of abstraction are tactics, which organize related techniques into broader conceptual groupings. Tactics serve purely as a higher-level classification layer for clustering techniques that share similar adversarial approaches or mechanisms. For example, Tool Call Spoofing and Conversation Spoofing are both effective adversarial prompt techniques, and they exploit weaknesses in LLMs in a similar way; we group them under the tactic Context Manipulation.

By abstracting techniques into tactics, teams gain a strategic view of adversarial risk. This perspective aids resource allocation and prioritization, making it easier to identify broader areas of concern and align defensive efforts accordingly. Tactics can also be useful for categorizing prompts when a method’s technical details are less important.
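
For instance, here is a minimal sketch of how technique-level tags might be rolled up into tactic-level counts for that strategic view. Only Context Manipulation and its two techniques come from the text above; the mapping, helper name, and sample findings are hypothetical.

```python
from collections import Counter

# Only Context Manipulation and its two techniques are named above;
# anything else falls through to "Unmapped" in this sketch.
TECHNIQUE_TO_TACTIC = {
    "Tool Call Spoofing": "Context Manipulation",
    "Conversation Spoofing": "Context Manipulation",
}

def tactic_rollup(findings: list[list[str]]) -> Counter:
    """Aggregate per-prompt technique tags into counts per tactic."""
    counts: Counter = Counter()
    for techniques in findings:
        for technique in techniques:
            counts[TECHNIQUE_TO_TACTIC.get(technique, "Unmapped")] += 1
    return counts

# e.g. technique tags from three findings in a red team engagement
print(tactic_rollup([
    ["Tool Call Spoofing"],
    ["Conversation Spoofing", "Refusal Suppression"],
    ["Tool Call Spoofing"],
]))  # Counter({'Context Manipulation': 3, 'Unmapped': 1})
```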

The AI Security Community

Generative AI is in its early stages, with red teamers and adversaries regularly identifying new tactics and techniques against existing models and standard LLM applications. Moreover, innovative architectures and models (e.g., agentic AI, multi-modal) are emerging rapidly. As a result, any taxonomy of adversarial prompt engineering requires regular updates to stay relevant. 

The current taxonomy likely does not yet include all the tactics, techniques, objectives, mitigations, and policies and controls that are actively in use. Closing these gaps will require community engagement and input. We encourage practitioners, researchers, engineers, scientists, and the community at large to contribute their insights and experiences.

Ultimately, the effectiveness of any taxonomy, like natural languages and conventions such as traffic rules, depends on its widespread adoption and usage. To succeed, it must be a community-driven effort. Your contributions and active engagement are essential to ensure that this taxonomy serves as a valuable, up-to-date resource for the entire AI security community. 

For suggested additions, removals, modifications, or other improvements, please email David Lu at [email protected].

Conclusion

Adversarial prompts against LLMs pose threats too diverse and dynamic to be fought with ad-hoc understanding and buzzwords. We believe a structured classification system is crucial for the community to move from reactive to proactive defense. By clearly defining attacker objectives, tactics, and techniques, we cut through the ambiguity and focus on concrete behaviors that we can detect, study, and mitigate. Rather than labeling every clever prompt attack a “jailbreak,” the taxonomy encourages us to say what it did and how.

We’re excited to share this initial version, and we invite the community to help build on it. If you’re a defender, try using this classification system to describe the next prompt attack you encounter. If you’re a red teamer, see if the framework holds up in your understanding and engagements. If you’re a researcher, what new techniques should we document? Our hope is that, over time, this taxonomy becomes a common point of reference when discussing LLM prompt security. 

Watch this space for regular updates, insights, and other content about the HiddenLayer Adversarial Prompt Engineering taxonomy. See and interact with it here: https://hiddenlayerai.github.io/ape-taxonomy/graph.html.