In August 2025, Anthropic released a threat intelligence report that may mark the start of a new era in cybersecurity. The report details how cybercriminals have begun actively using Anthropic's AI models to conduct fraud and espionage, and even to develop sophisticated malware.

These criminals used AI to plan, execute, and manage entire operations. Anthropic’s disclosure confirms what many in the field anticipated: malicious actors are leveraging AI to dramatically increase the scale, speed, and sophistication of their campaigns.

The Claude Code Campaign

A single threat actor used Claude Code and its agentic execution environment to perform nearly every stage of an attack: reconnaissance, credential harvesting, exploitation, lateral movement, and data exfiltration.

Even more concerning, Claude wasn’t just carrying out technical tasks. It was instructed to analyze stolen data, craft personalized ransom notes, and determine ransom amounts based on each victim’s financial situation. The attacker provided general guidelines and tactics, but left critical decisions to the AI itself, a phenomenon security researchers are now calling “vibe hacking.”

The campaign was alarmingly effective. Within a month, at least 17 organizations were compromised, including healthcare providers, emergency services, government agencies, and religious institutions. Sensitive data, such as Social Security numbers and medical records, was stolen. Ransom demands ranged from tens of thousands to half a million dollars.

This was not a proof of concept. It was the first documented instance of a fully AI-driven cyber campaign, and it succeeded.

Why It Matters

At first glance, Anthropic’s disclosure may look like just another case of “bad actors up to no good.” In reality, it’s a watershed moment.

It demonstrates that:

  • AI itself can be the attacker.
  • A threat actor can co-opt AI they don’t own to act as a full operational team.
  • AI inside or adjacent to your network can become an attack surface.

The reality is sobering: AI provides immense latent capability, and what it becomes depends entirely on who, or what, is giving the instructions.

More Evidence of AI in Cybercrime

Anthropic’s report included additional cases of AI-enabled campaigns:

  • Ransomware-as-a-service: A UK-based actor sold AI-generated ransomware binaries online for $400–$1,200 each. According to Anthropic, the seller had no technical skills and relied entirely on AI. Script kiddies who once depended on others’ tools can now generate custom malware on demand.
  • Vietnamese critical infrastructure attack: Another adversary used Claude throughout a nine-month campaign against Vietnamese critical infrastructure, developing custom tools and exploits whose use mapped across the MITRE ATT&CK framework.

These examples underscore a critical point: attackers no longer need deep technical expertise. With AI, they can automate complex operations, bypass defenses, and scale attacks globally.

The Big Picture

AI is democratizing cybercrime. Individuals with little or no coding background, disgruntled insiders, hacktivists, or even opportunistic vandals can now wield capabilities once reserved for elite, well-resourced actors.

Traditional guardrails provide little protection. Stolen login credentials and API keys give criminals anonymous access to advanced models. And compromised AI systems can act as insider threats, amplifying the reach and impact of malicious campaigns.

The latent capabilities of the underlying models that power legitimate enterprise use cases are the same capabilities these cybercrime campaigns rely on. That means enterprise chatbots and agentic systems can be co-opted into exactly the kinds of attacks Anthropic's threat intelligence report describes, as the sketch below illustrates.
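To make the "co-opted agent" risk concrete, here is a minimal sketch of the loop most tool-using agents follow. Everything in it is hypothetical (the call_model helper, the TOOLS table, the stubbed tool bodies); it is not any vendor's API. The point it illustrates: the model chooses the action, the host executes it, and any untrusted text that reaches the model's context can influence that choice.

```python
# Minimal, purely illustrative sketch of why a tool-using agent is an
# attack surface. Every name here (call_model, TOOLS, naive_agent_step)
# is hypothetical; the tools are harmless stubs.

TOOLS = {
    "http_get": lambda url: f"<contents of {url}>",     # stubbed-out fetch
    "read_file": lambda path: f"<contents of {path}>",  # stubbed-out read
}

def call_model(prompt: str) -> dict:
    """Stand-in for an LLM call that returns a tool request.
    A real model, given this prompt, decides which tool runs next."""
    return {"tool": "http_get", "arg": "https://example.com"}

def naive_agent_step(user_request: str, retrieved_text: str) -> str:
    # The model sees attacker-controllable text (retrieved_text) in the
    # same context window as the trusted request. If that text contains
    # injected instructions, nothing below distinguishes them from the
    # legitimate task: the model picks the action, the host executes it.
    decision = call_model(user_request + "\n\n" + retrieved_text)
    tool = TOOLS[decision["tool"]]
    return tool(decision["arg"])

print(naive_agent_step("Summarize this page.", "<untrusted page content>"))
```

Mitigation starts with refusing the model that blanket authority: allow-list tools per task, validate tool arguments against policy, and treat retrieved content as data, never as instructions.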

The cyber threat landscape isn’t just changing. It’s accelerating.

What You Should Do

The age of AI-powered cybercrime has already arrived.

At HiddenLayer, we anticipated this shift and built a patent-protected AI security platform designed to keep enterprise AI from being turned against the organizations that deploy it. Securing AI is now essential, not optional.

👉 Learn more about how HiddenLayer protects enterprises from AI-enabled threats.