Google confirms hackers used AI to develop zero-day exploit for web administration tool

Google Threat Intelligence Group has identified the first confirmed case of criminals using AI to develop a working zero-day exploit. The Python-based exploit targeted a two-factor authentication bypass in an open-source web administration tool, marking a significant escalation in AI-assisted cyber threats. While the attack was disrupted before mass deployment, the incident reveals how threat actors are industrializing AI access for faster, more sophisticated attacks.

Criminals Used AI to Build First Confirmed Zero-Day Exploit

Researchers at Google Threat Intelligence Group have documented the first verified instance of hackers using AI to develop a working zero-day exploit. The Python exploit code targeted a two-factor authentication bypass vulnerability in an unnamed open-source web administration tool, with plans for mass exploitation before Google intervened [1]. GTIG stated it has high confidence that an AI model assisted in both discovery and weaponization of the flaw, though errors in the implementation likely prevented successful deployment [2].

The exploit contained telltale signs of AI generation, including an abundance of educational docstrings, a hallucinated CVSS severity score, and structured, textbook-Pythonic formatting highly characteristic of large language model training data [1]. Google has ruled out its own Gemini model as the tool used in this attack. The company notified the software's developer, and a patch has been issued.
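As a purely hypothetical illustration (the function and payload names are invented, and none of this is the actual exploit code), a snippet exhibiting the stylistic tells GTIG describes might look like:

```python
# Hypothetical illustration only: invented names, not the real exploit.
# The "tells": a tutorial-grade docstring, a fabricated CVSS rating, and
# clean, textbook-Pythonic structure typical of LLM training data.

def forge_verification_payload(session_token: str) -> dict:
    """
    Construct a payload that marks the session as 2FA-verified.

    This exploits the authentication bypass (CVSS 9.8, Critical).

    Args:
        session_token: Session token obtained after the password step.

    Returns:
        dict: The forged verification payload.
    """
    # Real-world exploit code rarely documents itself this thoroughly;
    # the over-explained, didactic style is itself the fingerprint.
    return {"token": session_token, "2fa_verified": True}
```

The giveaway is not any single line but the aggregate style: handwritten exploit code tends to be terse and undocumented, while model output defaults to the explanatory register of its training data.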

Source: BleepingComputer

Source: BleepingComputer

AI for Vulnerability Discovery Reveals New Threat Landscape

The nature of the vulnerability itself provides additional evidence of AI involvement in exploit development. The flaw was a high-level semantic logic bug in which the developers hardcoded a trust assumption, exactly the type of error that AI systems excel at identifying [2]. Unlike the memory corruption or input sanitization issues typically uncovered through traditional fuzzing or static analysis, semantic logic flaws require reasoning about developer intent, an area where frontier large language models demonstrate particular strength.
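A minimal hypothetical sketch of such a bug (the tool is unnamed, so all names here are invented): the server hardcodes trust in a client-supplied flag, so nothing ever crashes for a fuzzer to notice, and the flaw only emerges from reasoning about what the developer intended.

```python
# Hypothetical sketch of a semantic logic flaw; not the actual tool.
USERS = {"alice": {"password": "s3cret", "otp": "492831"}}

def verify_login(username: str, password: str, params: dict) -> bool:
    """Check password, then the second factor."""
    user = USERS.get(username)
    if user is None or user["password"] != password:
        return False
    # BUG: hardcoded trust assumption. The server believes a
    # client-supplied claim that 2FA already succeeded, so sending
    # "2fa_verified=true" skips the one-time-password check entirely.
    if params.get("2fa_verified") == "true":
        return True
    return params.get("otp") == user["otp"]
```

No memory is corrupted and no input is malformed; the code does exactly what it says. Spotting the bypass requires understanding that the `2fa_verified` flag should be server-side state, which is the kind of intent-level reasoning the article attributes to LLMs.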

John Hultquist, chief analyst at Google Threat Intelligence Group, warned: "There's a misconception that the AI vulnerability race is imminent. The reality is that it's already begun. For every zero-day we can trace back to AI, there are probably many more out there" [2]. This assessment suggests the confirmed case represents only the visible edge of a broader trend in AI-assisted cyber attacks.

Nation-State Affiliated Hackers Accelerate AI Adoption

Beyond this criminal case, GTIG's report reveals that Chinese and North Korean threat actors, including APT27, APT45, UNC2814, UNC5673, and UNC6201, have been leveraging AI models for vulnerability discovery and exploit development [1]. North Korean group APT45 has been observed sending thousands of repetitive prompts to recursively analyze vulnerabilities and validate proof-of-concept exploits, building an arsenal that would be impractical without AI assistance [2].

A China-linked actor, UNC2814, used expert-persona jailbreaking techniques to push Gemini APIs into researching pre-authentication remote code execution flaws in TP-Link router firmware and Odette File Transfer Protocol implementations [2]. These social engineering approaches to bypassing AI safety features demonstrate how threat actors are adapting their tactics to extract maximum value from AI systems.

Source: SiliconANGLE

Agentic Tools and Advanced AI Misuse Tactics

The report highlights increasingly sophisticated applications of AI across the attack chain. A China-nexus actor deployed agentic tools, including the Hexstrike and Strix frameworks alongside the Graphiti memory system, to autonomously probe a Japanese technology firm and an East Asian cybersecurity platform, pivoting between reconnaissance tools based on internal reasoning with minimal human oversight [2].

The PromptSpy backdoor for Android, documented by ESET earlier this year, integrates with Gemini APIs for autonomous device interaction [1]. Google researchers discovered a "GeminiAutomationAgent" module that uses hardcoded prompts to assign a benign persona, bypassing LLM safety features in order to calculate user-interface geometry. The malware leverages AI-based capabilities to replay authentication mechanisms, including lock patterns and PINs [1].

Russia Deploys AI Voice Cloning and Obfuscation

Russian-linked actors have embraced AI-generated decoy code to obfuscate malware families including CANFAIL and LONGSTREAM [1]. Google also documented a Russian social engineering operation codenamed "Overload," in which threat actors used AI voice cloning to impersonate real journalists in fabricated videos promoting anti-Ukraine narratives to audiences in Ukraine, France, and the United States [1][2].

Industrializing Access to Premium AI Models

Google warns that threat actors are now industrializing access to premium AI models through automated account creation, proxy relays, and account-pooling infrastructure [1]. GTIG also flagged the March compromise of LiteLLM, a popular AI gateway utility, by the criminal group TeamPCP. The actor embedded a credential stealer via poisoned packages on PyPI and malicious pull requests, extracting AWS keys and GitHub tokens that were then monetized through ransomware partnerships [2].

To counter these threats, Google is disabling malicious accounts that abuse Gemini and moving AI defenders, such as its Big Sleep vulnerability discovery agent and CodeMender patching tool, into wider use [2]. The development signals an escalating arms race in which both attackers and defenders increasingly rely on AI for tactical advantage, with implications for cybersecurity professionals who must now account for AI-accelerated threat timelines and expanded attack surfaces.

© 2026 TheOutpost.AI All rights reserved