3 Sources
[1]
Google: China's APT31 used Gemini to plan US cyberattacks
A Chinese government hacking group that has been sanctioned for targeting America's critical infrastructure used Google's AI chatbot, Gemini, to auto-analyze vulnerabilities and plan cyberattacks against US organizations, the company says.

While there's no indication that any of these attacks were successful, "APT groups like this continue to experiment with adopting AI to support semi-autonomous offensive operations," Google Threat Intelligence Group chief analyst John Hultquist told The Register. "We anticipate that China-based actors in particular will continue to build agentic approaches for cyber offensive scale."

In the threat-intel group's most recent AI Threat Tracker report, released on Thursday and shared with The Register in advance, Google attributes this activity to APT31, a Beijing-backed crew also known as Violet Typhoon, Zirconium, and Judgment Panda. This goon squad was one of many exploiting a series of Microsoft SharePoint bugs over the summer. In March 2024, the US issued sanctions against and criminally charged seven APT31 members accused of breaking into computer networks, email accounts, and cloud storage belonging to numerous high-value targets.

The most recent attempts by APT31 to use Google's Gemini AI tool happened late last year, we're told. "APT31 employed a highly structured approach by prompting Gemini with an expert cybersecurity persona to automate the analysis of vulnerabilities and generate targeted testing plans," according to the report.

In one case, the China-based gang used Hexstrike, an open-source red-teaming tool built on the Model Context Protocol (MCP), to analyze various exploits - including remote code execution, web application firewall (WAF) bypass techniques, and SQL injection - "against specific US-based targets," the Googlers wrote.

Hexstrike enables models, including Gemini, to execute more than 150 security tools with a slew of capabilities, including network and vulnerability scanning, reconnaissance, and penetration testing. Its intended use is to help ethical hackers and bug hunters find security weaknesses and collect bug bounties - but shortly after its release in mid-August, criminals began using the AI platform for more nefarious purposes.

Integrating Hexstrike with Gemini "automated intelligence gathering to identify technological vulnerabilities and organizational defense weaknesses," the AI threat tracker says, noting that Google has since disabled accounts linked to this campaign. "This activity explicitly blurs the line between a routine security assessment query and a targeted malicious reconnaissance operation."

Google's report, which picks up where its November 2025 analysis left off, details how government-backed groups and cybercriminals alike are abusing Google's AI tools, along with the steps the Chocolate Factory has taken to stop them. And it finds that attackers - just like everybody else on the planet - have a keen interest in agentic AI's capabilities to make their lives and jobs easier.

"The adversaries' adoption of this capability is so significant - it's the next shoe to drop," Hultquist said. He explained there are two areas that Google is most concerned about.
"One is the ability to operate across the intrusion," he said, noting the earlier Anthropic report about Chinese cyberspies abusing its Claude Code AI tool to automate most elements of attacks directed at high-profile companies and government organizations. In "a small number of cases," they even succeeded. "The other is automating the development of vulnerability exploitation," Hultquist said. "These are two ways where adversaries can get major advantages and move through the intrusion cycle with minimal human interference. That allows them to move faster than defenders and hit a lot of targets." In addition, using AI agents to find vulnerabilities and test exploits widens the patch gap - the time between the bug becoming known and a full working fix being deployed and implemented. "It's a really significant space currently," Hultquist said. "In some organizations, it takes weeks to put defenses in place." This requires security professionals to think differently about defense, using AI to respond and fix security weaknesses more quickly than humans can on their own. "We are going to have to leverage the advantages of AI, and increasingly remove humans from the loop, so that we can respond at machine speed," Hultquist noted. The latest report also found an increase in model extraction attempts - what it calls "distillation attacks" - and says both GTIG and Google DeepMind identified miscreants attempting to perform model extraction on Google's AI products. This is a type of intellectual property theft used to gain insights into a model's underlying reasoning and chain-of-thought processes. "This is coming from threat actors throughout the globe," Hultquist said. "Your model is really valuable IP, and if you can distill the logic behind it, there's very real potential that you can replicate that technology - which is not inexpensive." This essentially gives criminals and shady companies the ability to accelerate AI model development at a much lower cost, and Google's report cites "model stealing and capability extraction emanating from researchers and private sector companies globally." ®
[2]
Google says hackers are abusing Gemini AI for all attack stages
State-backed hackers are using Google's Gemini AI model to support all stages of an attack, from reconnaissance to post-compromise actions. Bad actors from China (APT31, Temp.HEX), Iran (APT42), North Korea (UNC2970), and Russia used Gemini for target profiling and open-source intelligence, generating phishing lures, translating text, coding, vulnerability testing, and troubleshooting. Cybercriminals are also showing increased interest in AI tools and services that could help in illegal activities, such as social-engineering ClickFix campaigns.

The Google Threat Intelligence Group (GTIG) notes in a report today that APT adversaries use Gemini to support their campaigns "from reconnaissance and phishing lure creation to command and control (C2) development and data exfiltration."

Chinese threat actors employed an expert cybersecurity persona to request that Gemini automate vulnerability analysis and provide targeted testing plans in the context of a fabricated scenario. "The PRC-based threat actor fabricated a scenario, in one case trialing Hexstrike MCP tooling, and directing the model to analyze Remote Code Execution (RCE), WAF bypass techniques, and SQL injection test results against specific US-based targets," Google says. Another China-based actor frequently used Gemini to fix their code, carry out research, and get advice on technical capabilities for intrusions.

The Iranian adversary APT42 leveraged Google's LLM for social engineering campaigns and as a development platform to speed up the creation of tailored malicious tools (debugging, code generation, and researching exploitation techniques).

Threat actors were also observed using Gemini to implement new capabilities in existing malware families, including the CoinBait phishing kit and the HonestCue malware downloader and launcher. GTIG notes that no major breakthroughs have occurred in that respect, though the tech giant expects malware operators to continue to integrate AI capabilities into their toolsets.

HonestCue is a proof-of-concept malware framework observed in late 2025 that uses the Gemini API to generate C# code for second-stage malware, then compiles and executes the payloads in memory. CoinBait is a React SPA-wrapped phishing kit masquerading as a cryptocurrency exchange for credential harvesting. It contains artifacts indicating that its development was advanced using AI code generation tools. One indicator of LLM use is logging messages in the malware source code prefixed with "Analytics:", which could help defenders track data exfiltration processes. Based on the malware samples, GTIG researchers believe the malware was created using the Lovable AI platform, as the developer used the Lovable Supabase client and lovable.app.

Cybercriminals also used generative AI services in ClickFix campaigns delivering the AMOS info-stealing malware for macOS. Users were lured into executing malicious commands via ads placed in search results for queries about troubleshooting specific issues.

The report further notes that Gemini has faced AI model extraction and distillation attempts, with organizations leveraging authorized API access to methodically query the system and reproduce its decision-making processes in order to replicate its functionality. Although this is not a direct threat to users of these models or their data, it is a significant commercial, competitive, and intellectual property concern for the models' creators.
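As a rough, defender-side illustration of how the "Analytics:"-prefixed logging artifact mentioned above could be swept for across a pile of suspect source files, here is a minimal sketch. The directory name, file extensions, and regular expression are assumptions made for the example, not indicators published by GTIG.

```python
# Minimal sketch: scan a directory of suspect source files for string literals
# beginning with "Analytics:", the LLM-generation logging artifact noted by GTIG.
# The "suspect_samples" path and the extension list are illustrative placeholders.
import re
from pathlib import Path

PATTERN = re.compile(r'["\']Analytics:')  # a quote character followed by "Analytics:"

for path in Path("suspect_samples").rglob("*"):
    if not path.is_file() or path.suffix not in {".js", ".jsx", ".ts", ".tsx", ".cs"}:
        continue
    for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
        if PATTERN.search(line):
            print(f"{path}:{lineno}: {line.strip()}")
```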
As for the model extraction attempts: actors essentially take information obtained from one model and transfer it to another using a machine learning technique called "knowledge distillation," which is used to train new models from more advanced ones. "Model extraction and subsequent knowledge distillation enable an attacker to accelerate AI model development quickly and at a significantly lower cost," GTIG researchers say.

Google flags these attacks as a threat because they constitute intellectual property theft, they are scalable, and they severely undermine the business model of AI-as-a-service, which has the potential to impact end users soon. In one large-scale attack of this kind, Gemini was targeted with more than 100,000 prompts posing a series of questions aimed at replicating the model's reasoning across a range of tasks in non-English languages.

Google has disabled accounts and infrastructure tied to documented abuse, and has implemented targeted defenses in Gemini's classifiers to make abuse harder. The company assures that it "designs AI systems with robust security measures and strong safety guardrails" and regularly tests the models to improve their security and safety.
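For readers unfamiliar with the underlying technique, the sketch below shows knowledge distillation in its textbook form, assuming a toy PyTorch setup: a small "student" network is trained to match a larger "teacher" network's softened output distribution. In an extraction attack, the teacher's outputs would be harvested through API queries rather than computed locally; the model sizes, temperature, and random data here are illustrative placeholders, not details from Google's report.

```python
# Toy sketch of knowledge distillation: a small student learns to mimic a larger teacher.
import torch
import torch.nn as nn
import torch.nn.functional as F

teacher = nn.Sequential(nn.Linear(128, 512), nn.ReLU(), nn.Linear(512, 10)).eval()
student = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
temperature = 2.0  # softens the teacher's probabilities so more of its "reasoning" signal transfers

for step in range(100):
    x = torch.randn(32, 128)  # stand-in for the queries an attacker would send
    with torch.no_grad():
        teacher_logits = teacher(x)  # stand-in for responses harvested via API access
    student_logits = student(x)
    # Classic distillation loss: KL divergence between temperature-softened distributions
    loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Scaled up to hundreds of thousands of harvested prompt-and-response pairs, the same idea lets a would-be copycat approximate a commercial model's behavior without paying its training cost, which is why Google treats the activity as intellectual property theft.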
[3]
Google says attackers used 100,000+ prompts to try to clone AI chatbot Gemini
Google says its flagship artificial intelligence chatbot, Gemini, has been inundated by "commercially motivated" actors who are trying to clone it by repeatedly prompting it, sometimes with thousands of different queries - including one campaign that prompted Gemini more than 100,000 times.

In a report published Thursday, Google said it has increasingly come under "distillation attacks," or repeated questions designed to get a chatbot to reveal its inner workings. Google described the activity as "model extraction," in which would-be copycats probe the system for the patterns and logic that make it work. The attackers appear to want to use the information to build or bolster their own AI, it said.

The company believes the culprits are mostly private companies or researchers looking to gain a competitive advantage. A spokesperson told NBC News that Google believes the attacks have come from around the world but declined to share additional details about what was known about the suspects.

The scope of attacks on Gemini indicates that they most likely are, or soon will be, common against smaller companies' custom AI tools as well, said John Hultquist, the chief analyst of Google's Threat Intelligence Group. "We're going to be the canary in the coal mine for far more incidents," Hultquist said. He declined to name suspects.

The company considers distillation to be intellectual property theft, it said. Tech companies have spent billions of dollars racing to develop their AI chatbots, or large language models, and consider the inner workings of their top models to be extremely valuable proprietary information. Even though they have mechanisms to try to identify distillation attacks and block the people behind them, major LLMs are inherently vulnerable to distillation because they are open to anyone on the internet. OpenAI, the company behind ChatGPT, accused its Chinese rival DeepSeek last year of conducting distillation attacks to improve its models.

Many of the attacks were crafted to tease out the algorithms that help Gemini "reason," or decide how to process information, Google said. Hultquist said that as more companies design their own custom LLMs trained on potentially sensitive data, they become vulnerable to similar attacks. "Let's say your LLM has been trained on 100 years of secret thinking of the way you trade. Theoretically, you could distill some of that," he said.
Google revealed that state-backed hackers, including China's APT31, used Gemini to automate cyberattacks against US targets, while commercially motivated actors launched distillation attacks with more than 100,000 prompts in an attempt to clone the AI chatbot. The company has disabled accounts linked to the abuse and implemented new security measures to combat both threats.
Google disclosed that state-backed hackers from China, Iran, North Korea, and Russia are abusing Gemini AI to support all stages of cyberattacks, from reconnaissance to post-compromise actions.

The most alarming case involves APT31, a Chinese government hacking group that used Google Gemini to auto-analyze vulnerabilities and plan cyberattacks against US organizations [1]. According to the Threat Intelligence Group's latest AI Threat Tracker report, released Thursday, APT31 employed a highly structured approach, prompting the AI with an expert cybersecurity persona to automate vulnerability analysis and generate targeted testing plans [1].

The China-based gang used Hexstrike, an open-source red-teaming tool built on the Model Context Protocol, to analyze various exploits, including remote code execution, Web Application Firewall (WAF) bypass techniques, and SQL injection, against specific US-based targets [1]. The activity took place late last year, with Hexstrike integrated into Gemini to automate intelligence gathering and identify technological vulnerabilities and organizational defense weaknesses [1]. Google has since disabled accounts linked to this campaign. More broadly, state-backed hackers are using Google's AI model to support their campaigns from reconnaissance and phishing lure creation to command and control development and data exfiltration [2].

The Iranian adversary APT42 leveraged the large language model for social engineering campaigns and as a development platform to speed up the creation of tailored malicious tools through debugging, code generation, and research into exploitation techniques [2]. Cybercriminals also integrated AI capabilities into existing malware families, including the CoinBait phishing kit and the HonestCue malware downloader [2]. HonestCue, a proof-of-concept malware framework observed in late 2025, uses the Gemini API to generate C# code for second-stage malware, then compiles and executes the payloads in memory, demonstrating advanced malicious code generation capabilities [2].

Google also revealed that its flagship AI chatbot has been targeted by distillation attacks, including one campaign involving more than 100,000 prompts aimed at model extraction [3]. These commercially motivated actors are attempting to clone the chatbot's functionality by repeatedly prompting Gemini with thousands of different queries to reveal its inner workings [3]. The company considers this intellectual property theft, as attackers probe the system for the patterns and logic they need to build or bolster their own AI models [3].

John Hultquist, chief analyst of Google's Threat Intelligence Group, warned that "the adversaries' adoption of this capability is so significant - it's the next shoe to drop" [1]. He identified two critical concerns: the ability to operate across the intrusion and the automation of vulnerability exploitation, both of which let adversaries move faster than defenders and hit multiple targets [1].

Using AI agents to find vulnerabilities and test exploits also widens the patch gap, the time between a bug becoming known and a full working fix being deployed [1]. In some organizations, it takes weeks to put defenses in place, creating significant exposure windows [1]. This requires security professionals to think differently about defense, using AI to respond to and fix security weaknesses more quickly than humans can alone. "We are going to have to leverage the advantages of AI, and increasingly remove humans from the loop, so that we can respond at machine speed," Hultquist noted [1].

Google has implemented targeted defenses in Gemini's classifiers and security guardrails to make abuse harder, while disabling accounts and infrastructure tied to documented abuse [2]. The company says it designs AI systems with robust security measures and regularly tests its models to improve AI safety. However, Hultquist warned that as more companies build custom large language models trained on sensitive data, they become vulnerable to similar attacks, making Google "the canary in the coal mine for far more incidents" [3].
Summarized by Navi