We developed HiddenLayer’s 2024 AI Threat Landscape Report to help make sense of the evolving cybersecurity environment: a practical guide to the security risks that can affect every industry, with actionable steps for implementing security measures at your organization.
Understanding Advancements in Security for AI
Understanding a new technology’s vulnerabilities is crucial before implementing security measures. Offensive security research plays a significant role in planning defenses, as initial security measures are often built on the insights it produces.
Security for AI is no exception. Early research and tools in this field focused on offensive strategies: initially, AI attacks were explored mainly in academic papers and in exercises by security professionals. In the last few years, however, the field has shifted markedly toward practical tooling and formal defensive frameworks.
Offensive Security Tooling for AI
Just as in traditional IT security, offensive security tools for AI have emerged to identify and mitigate vulnerabilities. While these tools are valuable for enhancing AI system security, malicious actors can also exploit them.
Automated Attack Frameworks
Pioneering tools like CleverHans (2016) and IBM’s Adversarial Robustness Toolbox (ART, 2018) paved the way for comprehensive adversarial testing of AI models. Subsequent tools such as MLSploit (2019), TextAttack (2019), Armory (2020), and Counterfit (2021) have further advanced the field, offering a variety of attack techniques for evaluating AI defenses.
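To make this concrete, below is a minimal sketch of the kind of evasion test these frameworks automate, using ART’s Fast Gradient Method against a toy PyTorch classifier. The model, data, and epsilon value are illustrative stand-ins chosen for this sketch, not taken from the report.

```python
# Minimal sketch (assumes ART and PyTorch are installed; the toy model
# and synthetic data below are illustrative, not from the report).
import numpy as np
import torch.nn as nn
import torch.optim as optim
from art.estimators.classification import PyTorchClassifier
from art.attacks.evasion import FastGradientMethod

# Toy stand-in classifier: 4 input features -> 2 classes.
model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 2))
classifier = PyTorchClassifier(
    model=model,
    loss=nn.CrossEntropyLoss(),
    optimizer=optim.Adam(model.parameters(), lr=1e-2),
    input_shape=(4,),
    nb_classes=2,
    clip_values=(0.0, 1.0),
)

# Synthetic data: label is 1 when the feature sum exceeds 2.
rng = np.random.default_rng(0)
x = rng.random((256, 4)).astype(np.float32)
y = (x.sum(axis=1) > 2.0).astype(np.int64)
classifier.fit(x, y, batch_size=32, nb_epochs=20)

# Craft adversarial inputs with the Fast Gradient Method and count
# how many predictions the small perturbation flips.
attack = FastGradientMethod(estimator=classifier, eps=0.1)
x_adv = attack.generate(x=x)
clean = classifier.predict(x).argmax(axis=1)
adv = classifier.predict(x_adv).argmax(axis=1)
print(f"predictions flipped by eps=0.1 perturbation: {(clean != adv).sum()}/{len(x)}")
```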
Anti-Malware Evasion Tooling
Specialized tools like MalwareGym (2017) and its successor MalwareRL (2021) focus on evading AI-based anti-malware systems. These tools highlight the need to continuously improve and re-evaluate security measures for AI.
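In broad strokes, such tools automate a black-box evasion loop: apply functionality-preserving mutations to a sample and query the detector until its score drops below the detection threshold. The sketch below illustrates only the shape of that loop; score_sample, the mutation list, and the threshold are hypothetical stand-ins, not the actual MalwareGym or MalwareRL APIs.

```python
# Hypothetical sketch of a black-box evasion loop; not the real
# MalwareGym/MalwareRL API. Requires Python 3.10+ (randbytes, X | None).
import random

DETECTION_THRESHOLD = 0.5  # assumed score above which a sample is flagged


def score_sample(sample: bytes) -> float:
    """Stand-in for a black-box query to an AI anti-malware model.
    A real evaluation would call the detector under test here."""
    return 1.0 / (1.0 + len(sample) / 4096)  # toy score for illustration


# Functionality-preserving mutations (real tools apply richer PE-file
# transformations such as section padding, import edits, or packing).
MUTATIONS = [
    lambda b: b + b"\x00" * 64,          # append null padding
    lambda b: b + random.randbytes(64),  # append random overlay bytes
]


def evade(sample: bytes, max_steps: int = 200) -> bytes | None:
    """Mutate until the detector scores the sample as benign, or give up."""
    for _ in range(max_steps):
        if score_sample(sample) < DETECTION_THRESHOLD:
            return sample
        sample = random.choice(MUTATIONS)(sample)
    return None


evaded = evade(b"MZ...toy sample...")
print("evaded" if evaded else "still detected")
```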
Model Theft Tooling
KnockOffNets (2021) demonstrates the feasibility of AI model theft, emphasizing the importance of securing AI intellectual property.
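The underlying idea is model extraction: an attacker with nothing but query access labels inputs of their choosing with the victim model, then trains a “knockoff” surrogate on those query/response pairs. The sketch below shows this with toy scikit-learn models as an assumed stand-in; KnockOffNets itself targets image classifiers and uses a more sophisticated, policy-driven query strategy.

```python
# Minimal model-extraction sketch with toy stand-in models and data.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

# The victim trains on private data the attacker never sees.
rng = np.random.default_rng(0)
X_private = rng.random((500, 8))
y_private = (X_private.sum(axis=1) > 4).astype(int)
victim = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000).fit(
    X_private, y_private
)

# The attacker only needs query access: label attacker-chosen inputs.
X_queries = rng.random((500, 8))
y_stolen = victim.predict(X_queries)

# Train the knockoff surrogate on the stolen labels and measure how
# often it agrees with the victim on fresh inputs.
knockoff = DecisionTreeClassifier().fit(X_queries, y_stolen)
X_test = rng.random((200, 8))
agreement = (knockoff.predict(X_test) == victim.predict(X_test)).mean()
print(f"surrogate agrees with victim on {agreement:.0%} of test inputs")
```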
Model Deserialization Exploitation
Fickling (2021) and Charcuterie (2022) demonstrate how AI model serialization formats can be exploited at deserialization time, underscoring the need for secure model-handling practices.
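The root issue with pickle-based model formats, which Fickling targets, is that deserialization can execute code chosen by whoever produced the file. Here is a minimal, harmless illustration using only the standard library; the payload simply calls print, where a real attack would run arbitrary code:

```python
# Why loading untrusted pickle files is dangerous: unpickling can run
# attacker-chosen code. The payload here is a harmless print call.
import pickle


class MaliciousPayload:
    def __reduce__(self):
        # pickle records this callable and its arguments; loads() will
        # invoke print(...) the moment the object is deserialized.
        return (print, ("code executed during model load!",))


blob = pickle.dumps(MaliciousPayload())
pickle.loads(blob)  # a victim "loading a model" runs the payload
```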
Defensive Frameworks for AI
Leading cybersecurity organizations have developed comprehensive defensive frameworks to address the rising threats to AI.
MITRE ATLAS
Launched in 2021, MITRE ATLAS (Adversarial Threat Landscape for Artificial-Intelligence Systems) provides a knowledge base of adversarial tactics and techniques. Modeled after the MITRE ATT&CK framework, ATLAS helps professionals stay updated on AI threats and defenses.
“This survey demonstrates the prominence of real-world threats on AI-enabled systems, with 77% of participating companies reporting breaches to their AI applications this year. The MITRE ATLAS community is dedicated to characterizing and mitigating these threats in a global alliance. We applaud our community collaborators who enhance our collective ability to anticipate, prevent, and mitigate risks to AI systems, including HiddenLayer and their latest threat report.”
– Dr. Christina Liaghati, MITRE ATLAS Lead
NIST AI Risk Management Framework
Released in January 2023, the NIST AI Risk Management Framework (AI RMF) offers guidance for the responsible design, deployment, and use of AI systems, promoting trust and security in AI.
Google Secure AI Framework (SAIF)
Introduced in June 2023, SAIF outlines best practices for securing AI systems, emphasizing strong security foundations, automated defenses, and contextualized risk management.
Policies and Regulations
Global policies and regulations are being established to ensure AI’s safe and ethical use. The EU’s GDPR and AI Act, the OECD AI Principles, and national frameworks like Singapore’s Model AI Governance Framework and the US Blueprint for an AI Bill of Rights highlight the growing emphasis on security and governance for AI.
Concluding Thoughts
As AI technology evolves, so must the measures that secure it. By combining offensive and defensive strategies, leveraging comprehensive frameworks, and adhering to evolving regulations, the industry can better safeguard AI systems against emerging threats. Collaboration among academia, industry, and policymakers is essential to anticipate and mitigate risks effectively.
Continuous innovation and vigilance in security for AI will be crucial in maintaining trust and reliability in AI applications, ensuring they can be safely integrated into various sectors.
View the full Threat Landscape Report here.