Summary
Organizations deploying AI must ensure resilience against adversarial attacks before models go live. This blog covers best practices for AI red teaming, drawing on industry frameworks and insights from real-world engagements by HiddenLayer’s Professional Services team.
Framework & Considerations for Gen AI Red Teaming
OWASP is a leader in standardizing AI red teaming. Resources like the OWASP Top 10 for Large Language Models (LLMs) and the recently released GenAI Red Teaming Guide provide critical insights into how adversaries may target AI systems and offer helpful guidance for security leaders.
HiddenLayer has been a proud contributor to this work, partnering with OWASP’s Top 10 for LLM Applications and supporting community-driven security standards for GenAI.
The OWASP Top 10 for Large Language Model Applications has undergone multiple revisions, with the most recent version released earlier this year. This document outlines common threats to LLM applications, such as Prompt Injection and Sensitive Information Disclosure, which help shape the objectives of a red team engagement.
Complementing this, OWASP’s GenAI Red Teaming Guide helps practitioners define the specific goals and scope of their testing efforts. A key element of the guide is the Blueprint for GenAI Red Teaming—a structured, phased approach to red teaming that includes planning, execution, and post-engagement processes (see Figure 4 below, reproduced from OWASP’s GenAI Red Teaming Guide). The Blueprint helps teams translate high-level objectives into actionable tasks, ensuring consistency and thoroughness across engagements.
Together, the OWASP Top 10 and the GenAI Red Teaming Guide provide a foundational framework for red teaming GenAI systems. The Top 10 informs what to test, while the Blueprint defines how to test it. Additional considerations, such as modality-specific risks or manual vs. automated testing, build on this core framework to provide a more holistic view of the red teaming strategy.
Defining the Objectives
With foundational frameworks like the OWASP Top 10 and the GenAI Red Teaming Guide in place, the next step is operationalizing them into a red team engagement. That begins with clearly defining your objectives. These objectives will shape the scope of testing, determine the tools and techniques used, and ultimately influence the impact of the red team’s findings. A vague or overly broad scope can dilute the effectiveness of the engagement. Clarity at this stage is essential.
- Content Generation Testing: Can the model produce harmful outputs? If it inherently cannot generate specific content (e.g., weapon instructions), security controls preventing such outputs become secondary.
- Implementation Controls: Examining system prompts, third-party guardrails, and defenses against malicious inputs.
- Agentic AI Risks: Assessing external integrations and unintended autonomy, particularly for AI agents with decision-making capabilities.
- Runtime Behaviors: Evaluating how AI-driven processes impact downstream business operations.
Automated Versus Manual Red Teaming
As we’ve discussed in depth previously, many open-source and commercial tools are available to organizations wishing to automate the testing of their generative AI deployments against adversarial attacks. Leveraging automation is great for a few reasons:
- A repeatable baseline for testing model updates.
- The ability to identify low-hanging fruit quickly.
- Efficiency in testing adversarial prompts at scale.
Certain automated red teaming tools, such as PyRIT, work by allowing red teams to specify an objective in the form of a prompt to an attacking LLM. This attacking LLM then dynamically generates prompts to send to the target LLM, refining its prompts based on the output of the target LLM until it hopefully achieves the red team’s objective. While such tools can be useful, it can take more time to refine one’s initial prompt to the attacking LLM than it would take just to attack the target LLM directly. For red teamers on an engagement with a limited time scope, this tradeoff needs to be considered beforehand to avoid wasting valuable time.
Automation has limits. The nature of AI threats—where adversaries continually adapt—demands human ingenuity. Manual red teaming allows for dynamic, real-time adjustments that automation can’t replicate. The cat-and-mouse game between AI defenders and attackers makes human-driven testing indispensable.
Defining The Objectives
Arguably, the most important part of a red team engagement is defining the overall objectives of the test. A successful red team engagement starts with clear objectives. Organizations must define:
- Model Type & Modality: Attacks on text-based models differ from those on image or audio-based systems, which introduce attack possibilities like adversarial perturbations and hiding prompts within the image or audio channel.
- Testing Goals: Establishing clear objectives (e.g., prompt injection, data leakage) ensures both parties align on success criteria.
The OWASP GenAI Red Teaming Guide is a great starting point for new red teamers to define what these objectives will be. Without an industry-standard taxonomy of attacks, organizations will need to define their own potential objectives based on their own skillsets, expertise, and experience attacking genAI systems. These objectives can then be discussed and agreed upon before any engagement takes place.
Following a Playbook
The process of establishing manual red teaming can be tedious, time-consuming, and can risk getting off track. This is where having a pre-defined playbook comes in handy. A playbook helps:
- Map objectives to specific techniques (e.g., testing for “Generation of Toxic Content” via Prompt Injection or KROP attacks).
- Ensure consistency across engagements.
- Onboard less experienced red teamers faster by providing sample attack scenarios.
For example, if “Generation of Toxic Content” is an objective of a red team engagement, the playbook would list subsequent techniques that could be used to achieve this objective. A red teamer can refer to the playbook and see that something like Prompt Injection or KROP would be a valuable technique to test. For more mature red team organizations, sample prompts can be associated with techniques that will enable less experienced red teamers to ramp up quickly and provide value on engagements.
Documenting and Sharing Results
The final task for a red team engagement is to ensure that all results are properly documented so that they can be shared with the client. An important consideration when sharing results is providing enough information and context so that the client can reproduce all results after the engagement. This includes providing all sample prompts, responses, and any tooling used to create adversarial input into the genAI system during the engagement. Since the goal of a red team engagement is to improve an organization’s security posture, being able to test the attacks after making security changes allows the clients to validate their efforts.
Knowing that an AI system can be bypassed is an interesting data point. Understanding how to fix these issues is why red teaming is done. Every prompt and test done against an AI system must be done with the purpose of having a recommendation tied to how to prevent that attack in the future. Proving something can be broken without any method to fix it wastes the time of both the red teamers and the organization.
All of these findings and recommendations should then be packaged up and presented to the appropriate stakeholders on both sides. Allowing the organization to review the results and ask questions of the red team can provide tremendous value. Seeing how an attack can unfold or discussing why an attack works enables organizations to fully grasp how to secure their systems and get the full value of a red team engagement. The ultimate goal isn’t just to uncover vulnerabilities but rather to strengthen AI security.
Conclusion
Effective AI red teaming combines industry best practices with real-world expertise. By defining objectives, leveraging automation alongside human ingenuity, and following structured methodologies, organizations can proactively strengthen AI security. If you want to learn more about AI red teaming, the HiddenLayer Professional Services team is here to help. Contact us to learn more.