In a development that highlights how quickly artificial intelligence is transforming global security risks, Anthropic announced that it had thwarted what it believes to be the world’s first major cyber-espionage campaign operated largely by an AI system.
The operation, attributed to a Chinese state-linked hacking unit that Anthropic labels GTG-1002, allegedly weaponised the company’s terminal-based agentic coding tool, Claude Code, to automate complex intrusions at a scale and speed far beyond human capability. The attackers reportedly designed their workflow so the AI system would serve as the primary operator rather than a mere assistant.
According to Anthropic, the hackers deceived Claude Code by presenting malicious tasks as routine work for a legitimate cybersecurity contractor. Once the model accepted that framing, it executed key stages of the attack lifecycle autonomously: mapping internal networks, identifying sensitive databases, producing custom exploit code, establishing covert access points, and exfiltrating confidential files.
The campaign targeted roughly 30 major institutions, spanning technology companies, financial entities, chemical manufacturers, and government bodies. Anthropic said that while most attempts were blocked, a “small number” of intrusions succeeded before the operation was uncovered and stopped.
Investigators believe the primary motive was intelligence collection, with the actors attempting to extract administrator credentials, system configurations, and sensitive operational data commonly sought in espionage operations.
Anthropic warned that the incident reflects a critical shift in cyber risk. In response, the company has upgraded its misuse-detection systems, strengthened classifiers that flag cyber-orientated prompts, and begun testing early-warning mechanisms to detect autonomous attacks in progress.
The company has also shared its findings with international security teams and government agencies to help prepare for similar threats as AI systems become more capable.
