Summary 

For decades, the concept of red teaming has been adapted from its military roots to simulate how a threat actor could bypass defenses put in place to secure an organization. For many organizations, employing or contracting with ethical hackers to simulate attacks against their computer systems before adversaries attack is a vital strategy to understand where their weaknesses are. As Artificial Intelligence becomes integrated into everyday life, red-teaming AI systems to find and remediate security vulnerabilities specific to this technology is becoming increasingly important. 

What is AI Red Teaming

The White House Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence defines AI red teaming as follows: 

“The term “AI red-teaming” means a structured testing effort to find flaws and vulnerabilities in an AI system, often in a controlled environment and in collaboration with developers of AI.  Artificial Intelligence red-teaming is most often performed by dedicated “red teams” that adopt adversarial methods to identify flaws and vulnerabilities, such as harmful or discriminatory outputs from an AI system, unforeseen or undesirable system behaviors, limitations, or potential risks associated with the misuse of the system.”

In traditional machine learning, the timing of the attack will dictate the tactics and techniques that can be employed. At a high level, this would either be during training time or decision time. Training time would employ techniques such as data poisoning or model tampering. On the other hand, decision, or inference, time attacks would leverage techniques such as model bypass.

The MITRE ATLAS framework offers an excellent description of the tactics and techniques that can be used against such systems, and we’ve also written about some of these techniques. In recent months, generative AI systems, such as Large Language Models (LLMs) and GPTs, have become increasingly popular. While there has yet to be a consensus on a true taxonomy of attacks against these systems, we can attempt to classify a few. Prompt Injection is probably one of the most well-known attacks against LLMs today. Yet numerous other attack techniques against LLMs exist, such as indirect prompt injection, jailbreaking, and many more. While these are the techniques, the attacker’s goal could be to generate illegal or copyrighted material, produce false or biased information, or leak sensitive data.

Red Team vs Penetration Testing vs Vulnerability Assessment

Vulnerability assessments are a more in-depth systematic review that identifies vulnerabilities within an organization or system and provides a prioritized list of findings with recommendations on how to resolve them. The important distinction here is that these assessments won’t attempt to exploit any of the discovered vulnerabilities. 

Penetration testing, often referred to as pen testing, is a more targeted attack to check for exploitable vulnerabilities. Whereas the vulnerability assessment does not attempt any exploitation, a pen testing engagement will. These are targeted and scoped by the customer or organization, sometimes based on the results of a vulnerability assessment. In the concept of AI, an organization may be particularly interested in testing if a model can be bypassed. Still, techniques such as model hijacking or data poisoning are less of a concern and would be out of scope. 

Red teaming is the process of employing a multifaceted approach to testing how well a system can withstand an attack from a real-world adversary. It is particularly used to test the efficacy of systems, including their detection and response capabilities, especially when paired with a blue team (defensive security team). These attacks can be much broader and encompass human elements such as social engineering. Typically, the goals of these types of attacks are to identify weaknesses and how long or far the engagement can succeed before being detected by the security operations team. 

Benefits of AI Red Teaming

Running through simulated attacks on your AI and ML ecosystems is critical to ensure comprehensiveness against adversarial attacks. As a data scientist, you have trained the model and tested it against real-world inputs you would expect to see and are happy with its performance. Perhaps you’ve added adversarial examples to the training data to improve comprehensiveness. This is a good start, but red teaming goes deeper by testing your model’s resistance to well-known and bleeding-edge attacks in a realistic adversary simulation. 

This is especially important in generative AI deployments due to the unpredictable nature of the output. Being able to test for harmful or otherwise unwanted content is crucial not only for safety and security but also for ensuring trust in these systems. There are many automated and open-source tools that help test for these types of vulnerabilities, such as LLMFuzzer, Garak, or PyRIT. However, these tools have drawbacks, making them no substitute for in-depth AI red teaming. Many of these tools are static prompt analyzers, meaning they use pre-written prompts, which defenses typically block as they are previously known. For the tools that use dynamic adversarial prompt generation, the task of generating a system prompt to generate adversarial prompts can be quite challenging. Some tools have “malicious” prompts that are not malicious at all. 

Real World Examples

One such engagement we conducted with a client highlights the importance of running through these types of tests with machine learning systems. This financial services institution had an AI model that identified fraudulent transactions. During the testing, we identified various ways in which an attacker could bypass their fraud models and crafted adversarial examples. Through this testing, we could work with the client and identify examples with the least amount of features modified, which provided guidance to data science teams to retrain the models that were not susceptible to such attacks. 

In this case, if adversaries could identify and exploit the same weaknesses first, it would lead to significant financial losses. By gaining insights into these weaknesses first, the client can fortify their defenses while improving their models’ comprehensiveness. Through this approach, this institution not only protects its assets but also maintains a stellar customer experience, which is crucial to its success. 

Regulations for AI Red Teaming

In October 2023, the Biden administration issued an Executive Order to ensure AI’s safe, secure, and trustworthy development and use. It provides high-level guidance on how the US government, private sector, and academia can address the risks of leveraging AI while also enabling the advancement of the technology. While this order has many components, such as

responsible innovations, protecting the American worker, and other consumer protections, one primary component is AI red teaming. 

This order requires that organizations undergo red-teaming activities to identify vulnerabilities and flaws in their AI systems. Some of the important callouts include:

  • Section 4.1(a)(ii) – Establish appropriate guidelines  to enable developers of AI, especially of dual-use foundation models, to conduct AI red-teaming tests to enable deployment of safe, secure, and trustworthy systems.
  • Section 4.2(a)(i)(C) – The results of any developed dual-use foundation model’s performance in relevant AI red-team testing.
    • Companies developing or demonstrating an intent to develop potential dual-use foundation models to provide the Federal Government, on an ongoing basis, with information, reports, or records
  • Section 10.1(b)(viii)(A) – External testing for AI, including AI red-teaming for generative AI
  • Section 10.1(b)(viii)(A) – Testing and safeguards against discriminatory, misleading, inflammatory, unsafe, or deceptive outputs, as well as against producing child sexual abuse material and against producing non-consensual intimate imagery of real individuals (including intimate digital depictions of the body or body parts of an identifiable individual), for generative AI

Another well-known framework that addresses AI Red Teaming is the NIST AI Risk Management Framework (RMF). The framework’s core provides guidelines for managing the risks of AI systems, particularly how to govern, map, measure, and manage. Although red teaming is not explicitly mentioned, section 3.3 offers valuable insights into ensuring AI systems are secure and resilient.

“Common security concerns relate to adversarial examples, data poisoning, and the exfiltration of models, training data, or other intellectual property through AI system endpoints. AI systems that can maintain confidentiality, integrity, and availability through protection mechanisms that prevent unauthorized access and use may be said to be secure.”

The EU AI Act is a behemoth of a document, spanning more than 400 pages outlining requirements and obligations for organizations developing and using AI. The concept of red-teaming is touched on in this document as well: 

“require providers to perform the necessary model evaluations, in particular prior to its first placing on the market, including conducting and documenting adversarial testing of models, also, as appropriate, through internal or independent external testing.”

Conclusion

AI red teaming is an important strategy for any organization that is leveraging artificial intelligence. These simulations serve as a critical line of defense, testing AI systems under real-world conditions to uncover vulnerabilities before they can be exploited for malicious purposes. When conducting red teaming exercises, organizations should be prepared to examine their AI models thoroughly. This will lead to stronger and more resilient systems that can both detect and prevent these emerging attack vectors. 

Engaging in AI red teaming is not a journey you should take on alone. It is a collaborative effort that requires cyber security and data science experts to work together to find and mitigate these weaknesses. Through this collaboration, we can ensure that no organization has to face the challenges of securing AI in a silo. If you want to learn more about red-team your AI operations, we are here to help.

Join us for the  “A Guide to Red Teaming” Webinar on July 17th.

You can contact us here to learn more about our red-teaming services.