Google Gemini exploited by state hackers for cyberattacks as cloning attempts surge

Reviewed by Nidhi Govil


Google revealed that state-backed hackers, including China's APT31, used Gemini to automate cyberattacks against US targets while commercially motivated actors launched distillation attacks with over 100,000 prompts to clone the AI chatbot. The company has disabled accounts linked to abuse and implemented new security measures to combat both threats.

State-Backed Hackers Turn Google Gemini Into Attack Tool

Google disclosed that state-backed hackers from China, Iran, North Korea, and Russia are abusing Gemini AI to support all stages of cyberattacks, from reconnaissance to post-compromise actions.

Source: NBC

The most alarming case involves APT31, a Chinese government hacking group that used Google Gemini to automate vulnerability analysis and plan cyberattacks against US organizations [1]. According to the Threat Intelligence Group's latest AI Threat Tracker report, released Thursday, APT31 employed a highly structured approach, prompting the AI with an expert cybersecurity persona to automate vulnerability analysis and generate targeted testing plans [1].

Source: The Register

The China-based group used Hexstrike, an open-source red-teaming tool built on the Model Context Protocol, to analyze exploits including remote code execution, Web Application Firewall (WAF) bypass techniques, and SQL injection against specific US-based targets [1]. The activity, observed late last year, integrated Hexstrike with Gemini to automate intelligence gathering and identify technological vulnerabilities and organizational defense weaknesses [1]. Google has since disabled accounts linked to this campaign.

Multiple Threat Actors Leverage AI Across Attack Lifecycle

State-backed hackers are using Google's AI model to support their campaigns, from reconnaissance and phishing lure creation to command-and-control development and data exfiltration [2].

Source: BleepingComputer

Iranian adversary APT42 leveraged the large language model for social engineering campaigns and as a development platform, speeding up the creation of tailored malicious tools through debugging, code generation, and research into exploitation techniques [2].

Cybercriminals also integrated AI capabilities into existing malware families, including the CoinBait phishing kit and the HonestCue malware downloader [2]. HonestCue, a proof-of-concept malware framework observed in late 2025, uses the Gemini API to generate C# code for second-stage malware, then compiles and executes the payloads in memory, demonstrating advanced malicious code generation capabilities [2].

Massive Campaign to Clone AI Chatbot Through Distillation Attacks

Google revealed that its flagship AI chatbot has been targeted by distillation attacks, with over 100,000 prompts in a single campaign aimed at model extraction [3]. These commercially motivated actors attempt to clone the chatbot's functionality by repeatedly prompting Gemini with thousands of different queries to reveal its inner workings [3]. The company considers this intellectual property theft: attackers probe the system for patterns and logic they can use to build or bolster their own AI models [3].
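To see what "distillation" means here at toy scale, the sketch below simulates the attack pattern the report describes: a black-box "teacher" stands in for a proprietary chatbot reachable only through its query interface, the attacker harvests a large batch of prompt/response pairs, then fits a cheap "student" that imitates the observed behavior. Every name and the word-vote "training" step are illustrative assumptions, not Gemini's internals or any real attacker's tooling.

```python
def teacher(prompt: str) -> str:
    """Stand-in for a black-box model: the attacker only sees input/output."""
    positive = {"great", "good", "love", "excellent"}
    return "positive" if set(prompt.lower().split()) & positive else "negative"

def harvest(prompts):
    """Step 1: flood the query API and record (prompt, response) pairs."""
    return [(p, teacher(p)) for p in prompts]

def train_student(pairs):
    """Step 2: fit a crude imitator on harvested behavior.
    Per-word vote counts stand in for real fine-tuning on the responses."""
    votes = {}  # word -> (positive_count, total_count)
    for prompt, label in pairs:
        for w in prompt.lower().split():
            pos, total = votes.get(w, (0, 0))
            votes[w] = (pos + (label == "positive"), total + 1)

    def student(prompt: str) -> str:
        score = 0.0
        for w in prompt.lower().split():
            if w in votes:
                pos, total = votes[w]
                score += pos / total - 0.5  # lean toward observed labels
        return "positive" if score > 0 else "negative"

    return student

queries = ["great product", "bad service", "love this phone",
           "terrible and slow", "excellent support", "awful design"]
student = train_student(harvest(queries))
```

The student never sees the teacher's rules, only its answers, which is why defenders treat high-volume, systematically varied prompting as an extraction signal rather than ordinary usage.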

John Hultquist, chief analyst of Google's Threat Intelligence Group, warned that "the adversaries' adoption of this capability is so significant - it's the next shoe to drop" [1]. He identified two critical concerns: the ability to operate across the intrusion lifecycle and the ability to automate vulnerability exploitation, both of which let adversaries move faster than defenders and hit multiple targets [1].

Widening Patch Gap and Defense Implications

Using AI agents to find vulnerabilities and test exploits widens the patch gap - the time between a bug becoming known and a working fix being deployed [1]. In some organizations it takes weeks to put defenses in place, creating significant exposure windows [1]. This requires security professionals to think differently about defense, using AI to respond to and fix security weaknesses more quickly than humans can alone. "We are going to have to leverage the advantages of AI, and increasingly remove humans from the loop, so that we can respond at machine speed," Hultquist noted [1].

Google has implemented targeted defenses in Gemini's classifiers and security guardrails to make abuse harder, while disabling accounts and infrastructure tied to documented abuse [2]. The company says it designs AI systems with robust security measures and regularly tests its models to improve AI safety. However, Hultquist warned that as more companies build custom large language models trained on sensitive data, they become vulnerable to similar extraction attacks, making Google "the canary in the coal mine for far more incidents" [3].

TheOutpost.ai

© 2026 Triveous Technologies Private Limited