Google reports state-sponsored hackers exploit Gemini AI across all stages of cyberattacks

Reviewed by Nidhi Govil


Google's Threat Intelligence Group reveals that state-backed hackers from China, Russia, Iran, and North Korea are systematically exploiting Gemini AI for cyberattacks. One adversarial campaign bombarded the model with over 100,000 prompts attempting to clone its capabilities. The attacks span reconnaissance, phishing, malware development, and vulnerability testing, marking a shift in how AI tools enable offensive cyber operations at scale.

State-Sponsored Hackers Deploy Gemini AI in Comprehensive Cyberattacks

Google has disclosed that state-sponsored hackers from China, Russia, Iran, and North Korea are exploiting its Gemini AI model throughout every phase of cyberattacks, according to a new report from the Google Threat Intelligence Group published Thursday [1]. The revelation marks a significant escalation in how adversaries leverage AI tools for offensive operations, with threat actors using Gemini for reconnaissance, phishing-lure creation, command-and-control development, vulnerability analysis, and data exfiltration [3]. "The adversaries' adoption of this capability is so significant - it's the next shoe to drop," said John Hultquist, chief analyst for the Google Threat Intelligence Group [4].

Source: Tom's Hardware


APT31 Uses Gemini AI for Automated Vulnerability Analysis Against US Targets

Chinese government-backed hacking group APT31, also known as Violet Typhoon, employed a highly structured approach, prompting Gemini with an expert cybersecurity persona to automate vulnerability analysis and generate targeted testing plans against specific US-based targets [4]. In one documented case, APT31 used Hexstrike, an open-source red-teaming tool built on the Model Context Protocol, directing the model to analyze Remote Code Execution (RCE), WAF bypass techniques, and SQL injection test results [3]. This integration with Gemini "automated intelligence gathering to identify technological vulnerabilities and organizational defense weaknesses," according to Google's report [4]. While there's no indication these attacks succeeded, the activity "explicitly blurs the line between a routine security assessment query and a targeted malicious reconnaissance operation." Hultquist emphasized the concern: "The ability to operate across the intrusion and automating the development of vulnerability exploitation allows adversaries to move faster than defenders and hit a lot of targets" [4].

Source: SiliconANGLE


Massive Model Extraction Campaign Hits Gemini with 100,000 Prompts

Beyond direct operational use, Google identified large-scale model extraction attempts targeting Gemini AI through what the industry calls distillation attacks [1]. One commercially motivated adversarial session prompted the model more than 100,000 times across various non-English languages, collecting responses to train a cheaper copycat model [1]. Google considers this intellectual property theft, though the position carries some irony given that its own LLM was built from materials scraped from the Internet without permission [1]. The technique involves feeding an existing AI model thousands of carefully chosen prompts, collecting the responses, and using those input-output pairs to train a smaller model that mimics the parent's behavior. "Model extraction and subsequent knowledge distillation enable an attacker to accelerate AI model development quickly and at a significantly lower cost," Google researchers noted [5]. The attacks came from around the world, though Google declined to name suspects [1].
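The distillation pipeline described above can be sketched in a few lines. This is a toy illustration only: `query_teacher` is a stand-in stub for calls to a commercial model's API, and the "student" here is a trivial lookup table rather than the smaller neural model a real copycat would fine-tune. All names and logic are illustrative assumptions, not details from Google's report.

```python
# Toy sketch of a distillation attack: (1) send many prompts to a large
# "teacher" model, (2) harvest the (input, output) pairs, (3) fit a cheap
# "student" on those pairs so it mimics the teacher's behavior.

def query_teacher(prompt: str) -> str:
    """Stub standing in for an API call to the large 'teacher' model."""
    canned = {
        "capital of France?": "Paris",
        "2 + 2?": "4",
    }
    return canned.get(prompt, "I don't know.")

def harvest_pairs(prompts):
    """Steps 1-2: prompt the teacher at scale, collect input-output pairs."""
    return [(p, query_teacher(p)) for p in prompts]

class LookupStudent:
    """Step 3: a toy 'student' built purely from harvested pairs.
    A real attacker would fine-tune a smaller neural model instead."""
    def __init__(self, pairs):
        self.memory = dict(pairs)

    def answer(self, prompt: str) -> str:
        return self.memory.get(prompt, "I don't know.")

pairs = harvest_pairs(["capital of France?", "2 + 2?"])
student = LookupStudent(pairs)
print(student.answer("capital of France?"))  # mimics the teacher: Paris
```

At real scale the "prompts" list is the 100,000-plus queries the report describes, which is why unusually high query volume from a single account is itself a detection signal.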

Source: BleepingComputer


Iran and North Korea Leverage Gemini for Phishing and Social Engineering

Iranian threat actor APT42 leveraged Gemini for social engineering campaigns, using it to search for official emails of specific targets and to research business partners of potential victims [3]. They fed Gemini biographical information to generate personas that might have credible reasons to engage with targets [3]. North Korea primarily used Gemini for phishing attacks, profiling high-value targets within security and defense companies and attempting to find vulnerable individuals within their networks [3]. Threat actors from China, Iran, Russia, and Saudi Arabia also used Gemini to produce political satire and propaganda, generating articles, memes, and images designed to influence Western audiences [3]. While Google confirmed it hadn't seen these assets deployed in the wild, the company took the threat seriously enough to disable accounts associated with these activities [3].

Black Market for API Keys Emerges as Hackers Target AI Access

The report highlights a growing appetite among hackers for bespoke AI hacking tools and stolen API keys that grant access to commercial models [3]. Google cited an underground toolkit called "Xanthorox," advertised as a custom AI for offensive cyber campaigns, capable of generating malicious code and constructing custom phishing campaigns; under the hood, however, Xanthorox is simply an API layer that leverages existing general-purpose AI models like Gemini [3]. Because using these tools requires making numerous API calls, organizations with large allocations of API tokens have become prime targets for account hijacking, creating a black market for API keys and placing greater emphasis on securing employee access to AI tools [3]. Hultquist warned that organizations must adapt: "We are going to have to leverage the advantages of AI, and increasingly remove humans from the loop, so that we can respond at machine speed" [4]. Google has since disabled accounts and infrastructure tied to documented abuse and implemented targeted defenses in Gemini's classifiers to make future exploitation harder [5].
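One defensive consequence of the key-hijacking trend described above is monitoring per-key API call volume, since a stolen key driving bulk prompting stands out sharply against its normal baseline. The sketch below is a minimal illustration of that idea; the key names, baseline, and threshold multiplier are assumptions for the example, not figures from the report.

```python
# Hedged sketch: flag API keys whose call volume far exceeds a baseline,
# a simple signal that a key may have been hijacked for bulk AI prompting.

from collections import Counter

def flag_suspicious_keys(call_log, baseline_per_key=1_000, multiplier=5):
    """Return key IDs whose observed call count exceeds
    multiplier x baseline_per_key. `call_log` is one entry per API call,
    identified by key ID."""
    counts = Counter(call_log)
    limit = baseline_per_key * multiplier
    return sorted(key for key, n in counts.items() if n > limit)

# Simulated log: "svc-batch" behaves normally; "ci-runner" is being abused.
log = ["svc-batch"] * 800 + ["ci-runner"] * 120_000
print(flag_suspicious_keys(log))  # ['ci-runner']
```

A production system would use rolling time windows and per-key historical baselines rather than a single global threshold, but the core comparison is the same.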
