Google Discovers AI-Powered Malware That Rewrites Itself Using Large Language Models

Reviewed by Nidhi Govil


Google's Threat Intelligence Group has identified new malware families that leverage AI models like Gemini to dynamically modify their code during execution, marking a significant shift in cybercrime tactics toward self-evolving threats.

Revolutionary AI-Powered Malware Emerges

Google's Threat Intelligence Group (GTIG) has uncovered a paradigm shift in cybercrime: new malware families that integrate large language models directly into their execution processes. These are the first known instances of malware using AI for "just-in-time" self-modification, dynamically altering their own code mid-execution to achieve far greater operational versatility than traditional malware [1]

Source: SiliconANGLE


The most notable discovery is PromptFlux, an experimental VBScript dropper that uses Google's Gemini LLM to generate obfuscated variants of its own code. Its "Thinking Robot" module periodically queries Gemini for fresh code to evade antivirus software, and one version instructs the AI to rewrite the malware's entire source code every hour [2]. The malware attempts persistence through Startup folder entries and spreads laterally across removable drives and network shares.

Active Deployment in Cyber Operations

Beyond experimental malware, Google has documented several AI-powered tools already deployed in active operations. PromptSteal, also known as LameHug, has been used by Russian military hackers in cyberattacks against Ukrainian entities since July [3]. Unlike conventional malware, PromptSteal is built around an open-source model hosted on Hugging Face, letting its operators interact with it through natural language prompts.

Other identified malware includes FruitShell, a PowerShell reverse shell with hard-coded prompts designed to bypass LLM-powered security analysis, and QuietVault, a JavaScript credential stealer that targets GitHub and NPM tokens while leveraging on-host AI CLI tools to search for additional secrets [1].

State-Sponsored AI Abuse

Google's investigation revealed extensive abuse of AI models by state-sponsored groups across multiple nations. Chinese threat actors posed as capture-the-flag participants to bypass Gemini's safety filters, using the model to find vulnerabilities and craft phishing lures. Iranian groups MuddyCoast and APT42 pretended to be students to use Gemini for malware development, with MuddyCoast accidentally exposing command-and-control domains during debugging sessions [2].

North Korean groups Masan and Pukchong used Gemini for cryptocurrency theft campaigns and for developing code targeting edge devices, while China's APT41 enhanced its OSSTUN C2 framework using AI-assisted code obfuscation [4].

Maturation of Underground AI Markets

The cybercrime marketplace for AI-powered tools has matured rapidly, with advertisements appearing on both English- and Russian-language underground forums. Offerings range from deepfake generation utilities to full malware development services, marketed much like legitimate AI tools with an emphasis on workflow efficiency [1].

The subscription-based model of these services significantly lowers the technical barrier to launching sophisticated attacks, enabling even unskilled cybercriminals to deploy capabilities well beyond their native expertise [3].

Security Response and Future Implications

Google has taken immediate action by disabling associated accounts and assets, while reinforcing model safeguards based on observed attack tactics. The company has also introduced the Secure AI Framework (SAIF) as a foundational blueprint for organizations to design, build, and deploy AI systems responsibly [4].

Billy Leonard of Google's Threat Intelligence Group expressed particular concern about the shift toward open-source models, noting that while commercial AI services like Gemini can enforce guardrails and safety features, attackers who download open-source models can disable those protections entirely [3].

TheOutpost.ai


© 2025 Triveous Technologies Private Limited