To help understand the evolving cybersecurity environment, we developed HiddenLayer’s 2024 AI Threat Landscape Report as a practical guide to understanding the security risks that can affect every industry and to provide actionable steps to implement security measures at your organization. 

These days, the conversation around AI often revolves around its safety and ethical uses. However, what’s often overlooked is the security and safety of AI systems themselves. Just like any other technology, attackers can abuse AI-based solutions, leading to disruption, financial loss, reputational harm, or even endangering human health and life.

Three Major Types of Attacks on AI:

1. Adversarial Machine Learning Attacks:

These attacks target AI algorithms, aiming to alter their behavior, evade detection, or steal the underlying technology.

2. Generative AI System Attacks:

These attacks focus on bypassing filters and restrictions of AI systems to generate harmful or illegal content.

3. Supply Chain Attacks:

These attacks occur when a trusted third-party vendor is compromised, leading to the compromise of the product sourced from them.

Adversarial Machine Learning Attacks:

To understand adversarial machine learning attacks, let’s first go over some basic terminology:

Artificial Intelligence: Any system that mimics human intelligence.

Machine Learning: Technology enabling AI to learn and improve its predictions.

Machine Learning Models: Decision-making systems at the core of most modern AI.

Model Training: Process of feeding data into a machine learning algorithm to produce a trained model.

Adversarial attacks against machine learning usually aim to alter the model’s behavior, bypass or evade the model, or replicate the model or its data. These attacks include techniques like data poisoning, where the model’s behavior is manipulated during training.

Data Poisoning:
Data poisoning attacks aim to modify the model’s behavior. The goal is to make the predictions biased, inaccurate, or otherwise manipulated to serve the attacker’s purpose. Attackers can perform data poisoning in two ways: by modifying entries in the existing dataset or injecting the dataset with a new, specially doctored portion of data.

Model Evasion:

Model evasion, or model bypass, aims to manipulate model inputs to produce misclassifications. Adversaries repetitively query the model with crafted requests to understand its decision boundaries. These attacks have been observed in various systems, from spam filters to intrusion detection systems.

Model Theft:

Intellectual property theft, or model theft, is another motivation for attacks on AI systems. Adversaries may aim to steal the model itself, reconstruct training data, or create near-identical replicas. These attacks pose risks to both intellectual property and data privacy.

20% of IT leaders say their companies are planning and testing for model theft

Attacks Specific to Generative AI:

Generative AI systems face unique challenges, including prompt injection techniques that trick AI bots into performing unintended actions or code injection that allows arbitrary code execution.

Supply Chain Attacks:

Supply chain attacks exploit trust and reach, affecting downstream customers of compromised products. In the AI realm, vulnerabilities in model repositories, third-party contractors, and ML tooling introduce significant risks.

 75% of IT leaders say that third-party AI integrations are riskier than existing threats

Wrapping Up:

Attacks on AI systems are already occurring, but the scale and scope remain difficult to assess due to limited awareness and monitoring. Understanding these threats is crucial for developing comprehensive security measures to safeguard AI systems and mitigate potential harms. As AI advances, proactive efforts to address security risks must evolve in parallel to ensure responsible AI development and deployment.

View the full Threat Landscape Report here.