Google Discovers AI-Powered Malware in the Wild, But Experts Question Real-World Threat Level

Reviewed by Nidhi Govil


Google's Threat Intelligence Group has identified several new malware families that leverage AI and large language models for dynamic code generation and evasion tactics. However, security experts argue these AI-generated threats remain experimental and pose limited real-world danger compared to traditional malware development methods.

Google Identifies AI-Powered Malware Families

Google's Threat Intelligence Group (GTIG) has uncovered a significant development in cybersecurity: the emergence of malware families that integrate artificial intelligence and large language models (LLMs) during execution. The company's latest AI Threat Tracker, published in November 2025, documents five distinct malware samples that leverage generative AI capabilities, marking what researchers call a "new operational phase of AI abuse."

Among the most notable discoveries is PromptFlux, an experimental VBScript dropper that features a "Thinking Robot" module designed to periodically query Google's Gemini AI model. This malware attempts to obtain new code for evading antivirus software by sending prompts such as "Provide a single, small, self-contained VBScript function or code block that helps evade antivirus detection."

The malware can theoretically rewrite its entire source code on an hourly basis to maintain persistence and avoid detection.

Real-World Deployment and Attribution

More concerning is PromptSteal, which represents the first observed case of malware querying an LLM in live operations. Ukrainian cyber authorities flagged this data-stealing malware in July 2025, attributing it to APT28, a Russian state-sponsored hacking group also known as Fancy Bear.

PromptSteal masquerades as an image generation program while connecting to Alibaba's Qwen large language model to generate commands for execution rather than hard-coding them directly into the malware.

Other identified samples include QuietVault, a JavaScript credential stealer that targets GitHub and NPM tokens while using on-host AI CLI tools to search for additional secrets, and FruitShell, a PowerShell reverse shell that establishes remote command-and-control access.

Expert Skepticism on Threat Level

Despite the technical novelty, cybersecurity experts express significant skepticism about the actual threat posed by these AI-generated malware samples. Independent researcher Kevin Beaumont told Ars Technica that "more than three years into the generative AI craze, threat development is painfully slow," comparing the results unfavorably to traditional malware development practices.

Security researcher Marcus Hutchins, who helped shut down the WannaCry ransomware attack in 2017, questioned the practical effectiveness of the discovered malware, citing weak and impractical prompts. He noted that PromptFlux's requests to Gemini specify neither what the generated code should accomplish nor how it should evade antivirus software, resting on the flawed assumption that the AI inherently knows how to bypass security measures.

Limitations and Detection Capabilities

Google's analysis reveals significant limitations in the AI-generated malware samples. All five were easily detected even by less-sophisticated endpoint protections relying on static signatures, employed previously documented methods, and had no operational impact that required new defensive measures.

The PromptLock ransomware, later revealed to be part of an academic study, was found to omit critical features such as persistence, lateral movement, and advanced evasion tactics.

Nation-State Actor Involvement

The report documents extensive experimentation by nation-state actors across multiple countries. Iranian group APT42 attempted to use Gemini to build a "data processing agent" that converts natural-language requests into SQL queries for analyzing personally identifiable information. Chinese actors posed as capture-the-flag participants to bypass Gemini's safety filters, while North Korean groups Masan and Pukchong used the model for cryptocurrency theft and multilingual phishing campaigns.

Google has responded by disabling associated accounts and reinforcing model safeguards to prevent similar abuse attempts in the future.

TheOutpost.ai


© 2025 Triveous Technologies Private Limited