Ivan Lapshin

Google representatives recorded the first cyberattack using neural networks / Photo: Unsplash / Pawel Czerwinski

Cybercriminals have for the first time used artificial intelligence to create a hacking tool that exploited a zero-day vulnerability and to attempt a large-scale attack on corporate networks, Google said. The company warned that AI is playing an increasingly independent role in cyberattacks, transforming from an auxiliary tool into an active participant.

Details

The Google Threat Intelligence Group (GTIG) published a report on Ma 11 in which it described for the first time the use of AI to create an exploit (a program or script that takes advantage of a security flaw) for a zero-day vulnerability. Such vulnerabilities are newly discovered flaws that are unknown to the software's developers and have no ready fix: developers literally have "zero days" to prepare a defense.

GTIG said it has a "high degree of confidence" that it detected and thwarted an attempt by hackers to plan a "mass vulnerability exploitation operation" using an AI model, which the attackers used to find a flaw in a remote server administration tool and then to bypass multi-factor authentication.

Google notified the affected company before the report was published. The developer managed to release a patch before the attack took place. The tech giant did not disclose the name of the cybercrime group, the affected software or the neural network used. "It is highly likely" the exploit was not created using the Anthropic Mythos or Google Gemini model, the company said.

What's the scale?

The report says that hackers are beginning to outsource some operations to artificial intelligence: AI tools like OpenClaw are being used to find vulnerabilities, analyze targets, generate malicious code and make decisions with minimal human involvement. Google researchers see this as an early stage in the transition to more autonomous cyberattacks, where AI models are not an auxiliary tool but a full-fledged participant.

Google also revealed that state-affiliated hacker groups from China, Russia and North Korea are already experimenting with integrating AI directly into attack processes. In particular, Russian groups have tried to use AI models against Ukrainian networks, according to GTIG, and the North Korean group APT45 has used AI to scale its operations, the report said.

The race to use AI to identify network vulnerabilities "has already begun," according to GTIG principal analyst John Hultquist, quoted by Politico. "Attackers are using AI to increase the speed, scale and sophistication of their attacks," he said. The identified breaches are only "the tip of the iceberg," Reuters quoted the analyst as saying.

The emergence of increasingly capable AI models has heightened fears that the technology could soon be used by criminals to conduct cyberattacks on an unprecedented scale. So far, Anthropic and OpenAI have allowed only a limited number of researchers, technology companies and government agencies to test their latest models, Politico recalls. Anthropic, for example, declined to publicly release the Mythos model, citing concerns that hostile entities could use it to identify and exploit software vulnerabilities.

"A phased release [of powerful AI models] is needed to create what we call a defender advantage, and we believe that window is measured in months, not years," Politico quotes Anthropic's head of cyber policy Rob Baer as saying.

This article was AI-translated and verified by a human editor
