AI is powering the next generation of innovation. Whether driving automation, enhancing customer experiences, or enabling real-time decision-making, it has become inseparable from core business operations. However, as the value of AI systems grows, so does the incentive to exploit them.

Our 2025 Threat Report surveyed 250 IT leaders responsible for securing or developing AI initiatives. The findings confirm what many security teams already feel: AI is critical to business success, but defending it remains a work in progress.

In this blog, we highlight the top 5 threat vectors organizations are facing in 2025. These findings are grounded in firsthand insights from the field and represent a clear call to action for organizations aiming to secure their AI assets without slowing innovation.

1. Compromised Models from Public Repositories

The promise of speed and efficiency drives organizations to adopt pre-trained models from platforms like Hugging Face, AWS, and Azure. Adoption is now near-universal: 97% of respondents report using models from public repositories, up 12% from the previous year.

However, this convenience comes at a cost. Only 49% scan these models for safety prior to deployment. Threat actors know this and are embedding malware or injecting malicious logic into models hosted on these repositories to gain access to production environments.

📊 45% of breaches involved malware introduced through public model repositories, the most common attack type reported.
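
To make the scanning recommendation concrete, here is a minimal sketch (ours, not a description of any specific product) of one basic pre-deployment check for pickle-based model files, such as many PyTorch checkpoints: walk the pickle opcode stream with Python's standard library and flag opcodes that can import or invoke arbitrary code, without ever deserializing the file. The file path and opcode list are illustrative assumptions; production scanners go much further.

```python
import pickletools
import sys

# Opcodes that can import or invoke arbitrary Python callables during unpickling.
SUSPICIOUS_OPCODES = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ", "NEWOBJ"}

def scan_pickle(path: str) -> list[str]:
    """Return suspicious opcodes found in a pickle file, without loading it."""
    findings = []
    with open(path, "rb") as f:
        data = f.read()
    for opcode, arg, pos in pickletools.genops(data):
        if opcode.name in SUSPICIOUS_OPCODES:
            findings.append(f"{opcode.name} at byte {pos}: {arg!r}")
    return findings

if __name__ == "__main__":
    # Hypothetical artifact path; a real pipeline would scan every file in the model repo.
    issues = scan_pickle(sys.argv[1] if len(sys.argv) > 1 else "model.pkl")
    if issues:
        print("Refusing to deploy; suspicious pickle opcodes found:")
        print("\n".join(issues))
    else:
        print("No suspicious opcodes found (a heuristic, not a guarantee).")
```

A check like this only catches the crudest payloads; formats such as safetensors avoid the pickle problem entirely, which is one reason many teams prefer them for distribution.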

2. Third-Party GenAI Integrations: Expanding the Attack Surface

The growing reliance on external generative AI tools, from ChatGPT to Microsoft Copilot, has introduced new risks into enterprise environments. These tools often integrate deeply with internal systems and data pipelines, yet few offer transparency into how they process or secure sensitive data.

Unsurprisingly, 88% of IT leaders cited third-party GenAI and agentic AI integrations as a top concern. Factor in the rise of Shadow AI (unapproved tools used outside of IT governance), reported by 72% of respondents, and organizations are losing visibility and control over their AI ecosystem.

3. Exploiting AI-Powered Chatbots

As AI chatbots become embedded in both customer-facing and internal workflows, attackers are finding creative ways to manipulate them. Prompt injection, unauthorized data extraction, and behavior manipulation are just the beginning.

In 2024, 33% of reported breaches involved attacks on internal or external chatbots. These systems often lack the observability and resilience of traditional software, leaving security teams without the tooling to detect or respond effectively.

This threat vector is growing fast and is not limited to mature deployments. Even low-code chatbot integrations can introduce meaningful security and compliance risk if left unmonitored.
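
As a small illustration of what basic chatbot observability can look like (a sketch under our own assumptions, with a hypothetical call_model placeholder standing in for whatever LLM backend is in use), the snippet below logs every prompt and response as structured JSON and flags inputs that match a few well-known injection phrases. A pattern denylist is a naive heuristic rather than a real defense, but even this level of logging gives a security team something to detect and investigate.

```python
import json
import logging
import re
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("chatbot-audit")

# Naive, illustrative patterns; real prompt-injection detection needs far more than a denylist.
INJECTION_PATTERNS = [
    re.compile(p, re.IGNORECASE)
    for p in (
        r"ignore (all|previous) instructions",
        r"reveal your system prompt",
        r"you are now in developer mode",
    )
]

def call_model(prompt: str) -> str:
    # Hypothetical placeholder for the actual chatbot/LLM call.
    return "..."

def audited_chat(user_id: str, prompt: str) -> str:
    flagged = [p.pattern for p in INJECTION_PATTERNS if p.search(prompt)]
    response = call_model(prompt)
    # Structured record that a SIEM or monitoring pipeline can ingest.
    log.info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "user": user_id,
        "prompt": prompt,
        "response": response,
        "injection_flags": flagged,
    }))
    return response
```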

4. Vulnerabilities in the AI Supply Chain

AI systems are rarely built in isolation. From model training to deployment, they depend on a complex ecosystem of datasets, labeling tools, APIs, and cloud environments. Each connection introduces risk.

Third-party service providers were named the second most common source of AI attacks, behind only criminal hacking groups. As attackers look for the weakest entry point, the AI supply chain offers ample opportunity for compromise.

Without clear provenance tracking, version control, and validation of third-party components, organizations may deploy AI assets with unknown origins and risks.
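
One lightweight form provenance tracking can take (again, a sketch built on our own assumptions, including a made-up model_manifest.json format) is to pin the cryptographic hash of every third-party artifact at the time it is reviewed, then refuse to deploy anything whose current hash no longer matches.

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a file and return its SHA-256 digest."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_manifest(manifest_path: Path) -> bool:
    """Check every artifact in a JSON manifest of {relative_path: expected_sha256}."""
    manifest = json.loads(manifest_path.read_text())
    ok = True
    for rel_path, expected in manifest.items():
        actual = sha256_of(manifest_path.parent / rel_path)
        if actual != expected:
            print(f"MISMATCH {rel_path}: expected {expected}, got {actual}")
            ok = False
    return ok

if __name__ == "__main__":
    # Hypothetical manifest recorded when the artifacts were first reviewed and approved.
    print("provenance check passed" if verify_manifest(Path("model_manifest.json")) else "provenance check failed")
```

Signed manifests, SBOM-style inventories, and registry-level controls extend the same idea across the rest of the supply chain.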

5. Targeted Model Theft and Business Disruption

AI models embody years of training, proprietary data, and strategic differentiation. And threat actors know it.

In 2024, the top five motivations behind AI attacks were:

  • Data theft
  • Financial gain
  • Business disruption
  • Model theft
  • Competitive advantage

Whether it’s a competitor looking for insight, a nation-state actor exploiting weaknesses, or a financially motivated group aiming to ransom proprietary models, these attacks are increasing in frequency and sophistication.

📊 51% of reported AI attacks originated in North America, followed by Europe (34%) and Asia (32%).

The AI Landscape Is Shifting Fast

The data shows a clear trend: AI breaches are not hypothetical. They’re happening now, and at scale:

  • 74% of IT leaders say they definitely experienced an AI breach
  • 98% believe they’ve likely experienced one
  • Yet only 32% are using a technology solution to monitor or defend their AI systems
  • And just 16% have red-teamed their models, manually or otherwise

Despite these gaps, there’s good news. 99% of organizations surveyed are prioritizing AI security in 2025, and 95% have increased their AI security budgets.

The Path Forward: Securing AI Without Slowing Innovation

Securing AI systems requires more than repurposing traditional security tools. It demands purpose-built defenses that understand machine learning models’ unique behaviors, lifecycles, and attack surfaces.

The most forward-leaning organizations are already taking action:

  • Scanning all incoming models before use
  • Creating centralized inventories and governance frameworks
  • Red teaming models to proactively identify risks
  • Collaborating across AI, security, and legal teams
  • Deploying continuous monitoring and protection tools for AI assets

At HiddenLayer, we’re helping organizations shift from reactive to proactive AI defense, protecting innovation without slowing it down.

Want the Full Picture?

Download the 2025 Threat Report to access deeper insights, benchmarks, and recommendations from 250+ IT leaders securing AI across industries.