Predictions for 2025
It’s time to dust off the crystal ball once again! Over the past year, AI has truly been at the forefront of cyber security, with increased scrutiny from attackers, defenders, developers, and academia. As various forms of generative AI drive mass AI adoption, we find that the threats are not lagging far behind, with LLMs, RAGs, Agentic AI, integrations, and plugins being a hot topic for researchers and miscreants alike.
Looking ahead, we expect the AI security landscape will face even more sophisticated challenges in 2025:
- Agentic AI as a Target:
Integrating agentic AI will blur the lines between adversarial AI and traditional cyberattacks, leading to a new wave of targeted threats. Expect phishing and data leakage via agentic systems to be a hot topic. - Erosion of Trust in Digital Content:
As deepfake technologies become more accessible, audio, visual, and text-based digital content trust will face near-total erosion. Expect to see advances in AI watermarking to help combat such attacks. - Adversarial AI: Organizations will integrate adversarial machine learning (ML) into standard red team exercises, testing for AI vulnerabilities proactively before deployment.
- AI-Specific Incident Response:
For the first time, formal incident response guidelines tailored to AI systems will be developed, providing a structured approach to AI-related security breaches. Expect to see playbooks developed for AI risks. - Advanced Threat Evolution:
Fraud, misinformation, and network attacks will escalate as AI evolves across domains such as computer vision (CV), audio, and natural language processing (NLP). Expect to see attackers leveraging AI to increase both the speed and scale of attack, as well as semi-autonomous offensive models designed to aid in penetration testing and security research. - Emergence of AIPC (AI-Powered Cyberattacks):
As hardware vendors capitalize on AI with advances in bespoke chipsets and tooling to power AI technology, expect to see attacks targeting AI-capable endpoints intensify, including:- Local model tampering. Hijacking models to abuse predictions, bypass refusals, and perform harmful actions.
- Data poisoning.
- Abuse of agentic systems. For example, prompt injections in emails and documents to exploit local models.
- Exploitation of vulnerabilities in 3rd party AI libraries and models.
Recommendations for the Security Practitioner
In the 2024 threat report, we made several recommendations for organizations to consider that were similar in concept to existing security-related control practices but built specifically for AI, such as:
- Discovery and Asset Management: Identifying and cataloging AI systems and related assets.
- Risk Assessment and Threat Modeling: Evaluating potential vulnerabilities and attack vectors specific to AI.
- Data Security and Privacy: Ensuring robust protection for sensitive datasets.
- Model Robustness and Validation: Strengthening models to withstand adversarial attacks and verifying their integrity.
- Secure Development Practices: Embedding security throughout the AI development lifecycle.
- Continuous Monitoring and Incident Response: Establishing proactive detection and response mechanisms for AI-related threats.
These practices remain foundational as organizations navigate the continuously unfolding AI threat landscape.
Building on these recommendations, 2024 marked a turning point in the AI landscape. The rapid AI ‘electrification’ of industries saw nearly every IT vendor integrate or expand AI capabilities, while service providers across sectors—from HR to law firms and accountants—widely adopted AI to enhance offerings and optimize operations. This made 2024 the year that AI-related third—and fourth-party risk issues became acutely apparent.
During the Security for AI Council meeting at Black Hat this year, the subject of AI third-party risk arose. Everyone in the council acknowledged it was generally a struggle, with at least one member noting that a “requirement to notify before AI is used/embedded into a solution” clause was added in all vendor contracts. The council members who had already been asking vendors about their use of AI said those vendors didn’t have good answers. They “don’t really know,” which is not only surprising but also a noted disappointment. The group acknowledged traditional security vendors were only slightly better than others, but overall, most vendors cannot respond adequately to AI risk questions. The council then collaborated to create a detailed set of AI 3rd party risk questions. We recommend you consider adding these key questions to your existing vendor evaluation processes going forward.
- Where did your model come from?
- Do you scan your models for malicious code? How do you determine if the model is poisoned?
- Do you log and monitor model interactions?
- Do you detect, alert, and respond to mitigate risks that are identified in the OWASP LLM Top 10?
- What is your threat model for AI-related attacks? Are your threat model and mitigations mapped or aligned to the MITRE Atlas?
- What AI incident response policies does your organization have in place in the event of security incidents that impact the safety, privacy, or security of individuals or the function of the model?
- Do you validate the integrity of the data presented by your AI system and/or model?
Remember that the security landscape—and AI technology—is dynamic and rapidly changing. It’s crucial to stay informed about emerging threats and best practices. Regularly update and refine your AI-specific security program to address new challenges and vulnerabilities.
And a note of caution. In many cases, responsible and ethical AI frameworks fall short of ensuring models are secure before they go into production and after an AI system is in use. They focus on things such as biases, appropriate use, and privacy. While these are also required, don’t confuse these practices for security.