4 Sources
[1]
Google warns of new AI-powered malware families deployed in the wild
Google's Threat Intelligence Group (GTIG) has identified a major shift this year, with adversaries leveraging artificial intelligence to deploy new malware families that integrate large language models (LLMs) during execution. This approach enables dynamic alteration of the malware mid-execution, reaching levels of operational versatility that are virtually impossible to achieve with traditional malware.

Google calls the technique "just-in-time" self-modification and highlights the experimental PromptFlux malware dropper and the PromptSteal (a.k.a. LameHug) data miner deployed in Ukraine as examples of dynamic script generation, code obfuscation, and on-demand function creation.

PromptFlux is an experimental VBScript dropper that, in its latest version, leverages Google's Gemini LLM to generate obfuscated VBScript variants. It attempts persistence via Startup folder entries and spreads laterally via removable drives and mapped network shares.

"The most novel component of PROMPTFLUX is its 'Thinking Robot' module, designed to periodically query Gemini to obtain new code for evading antivirus software," explains Google.

The prompt is very specific and machine-parsable, according to the researchers, who see indications that the malware's creators aim to build an ever-evolving "metamorphic script."

Google could not attribute PromptFlux to a specific threat actor, but noted that the tactics, techniques, and procedures indicate it is being used by a financially motivated group. Although PromptFlux was in an early development stage and not capable of inflicting any real damage on targets, Google took action to disable its access to the Gemini API and delete all assets associated with it.

Another AI-powered malware family Google discovered this year, one already used in operations, is FruitShell, a PowerShell reverse shell that establishes remote command-and-control (C2) access and executes arbitrary commands on compromised hosts. The malware is publicly available, and the researchers say it includes hard-coded prompts intended to bypass LLM-powered security analysis.

Google also highlights QuietVault, a JavaScript credential stealer that targets GitHub/NPM tokens and exfiltrates captured credentials to dynamically created public GitHub repositories. QuietVault leverages on-host AI CLI tools and prompts to search for additional secrets and exfiltrate them too.

On the same list of AI-enabled malware is PromptLock, an experimental ransomware that relies on Lua scripts to steal and encrypt data on Windows, macOS, and Linux machines.

Apart from AI-powered malware, Google's report also documents multiple cases where threat actors abused Gemini across the entire attack lifecycle.

A China-nexus actor posed as a capture-the-flag (CTF) participant to bypass Gemini's safety filters and obtain exploit details, using the model to find vulnerabilities, craft phishing lures, and build exfiltration tools.

The Iranian threat group MuddyCoast (UNC3313) pretended to be a student to use Gemini for malware development and debugging, accidentally exposing C2 domains and keys. Iranian group APT42 abused Gemini for phishing and data analysis, creating lures, translating content, and developing a "Data Processing Agent" that converted natural language into SQL for personal-data mining.

China's APT41 leveraged Gemini for code assistance, enhancing its OSSTUN C2 framework and utilizing obfuscation libraries to increase malware sophistication.
Finally, the North Korean threat group Masan (UNC1069) utilized Gemini for crypto theft, multilingual phishing, and creating deepfake lures, while Pukchong (UNC4899) employed it for developing code targeting edge devices and browsers.

In all cases Google identified, it disabled the associated accounts and reinforced model safeguards based on the observed tactics, making them harder to bypass for abuse.

Google researchers also found that interest in malicious AI-based tools and services is growing on underground marketplaces, both English- and Russian-speaking, as these offerings lower the technical bar for deploying more complex attacks.

"Many underground forum advertisements mirrored language comparable to traditional marketing of legitimate AI models, citing the need to improve the efficiency of workflows and effort while simultaneously offering guidance for prospective customers interested in their offerings," Google says in a report published today.

The offers range from utilities that generate deepfakes and images to malware development, phishing, research and reconnaissance, and vulnerability exploitation. As the cybercrime market for AI-powered tools matures, the trend points toward these offerings replacing the conventional tools used in malicious operations.

The Google Threat Intelligence Group (GTIG) has identified multiple actors advertising multifunctional tools that can cover the stages of an attack. The push toward AI-based services appears aggressive, with many developers promoting new features in the free versions of their offerings and reserving API and Discord access for higher-priced tiers.

Google underlines that the approach to AI from any developer "must be both bold and responsible" and that AI systems should be designed with "strong safety guardrails" to prevent abuse and to discourage and disrupt misuse and adversary operations.

The company says it investigates any signs of abuse of its services and products, including activities linked to government-backed threat actors. Apart from collaborating with law enforcement when appropriate, the company also uses the experience from fighting adversaries "to improve safety and security for our AI models."
[2]
Google Uncovers PROMPTFLUX Malware That Uses Gemini AI to Rewrite Its Code Hourly
Google on Wednesday said it discovered an unknown threat actor using an experimental Visual Basic Script (VBScript) malware dubbed PROMPTFLUX that interacts with its Gemini artificial intelligence (AI) model API to write its own source code for improved obfuscation and evasion.

"PROMPTFLUX is written in VBScript and interacts with Gemini's API to request specific VBScript obfuscation and evasion techniques to facilitate 'just-in-time' self-modification, likely to evade static signature-based detection," Google Threat Intelligence Group (GTIG) said in a report shared with The Hacker News.

The novel feature is part of its "Thinking Robot" component, which periodically queries the large language model (LLM), Gemini 1.5 Flash or later in this case, to obtain new code so as to sidestep detection. This, in turn, is accomplished by using a hard-coded API key to send the query to the Gemini API endpoint. The prompt sent to the model is both highly specific and machine-parsable, requesting VBScript code changes for antivirus evasion and instructing the model to output only the code itself.

The regeneration capability aside, the malware saves the new, obfuscated version to the Windows Startup folder to establish persistence and attempts to propagate by copying itself to removable drives and mapped network shares.

"Although the self-modification function (AttemptToUpdateSelf) is commented out, its presence, combined with the active logging of AI responses to '%TEMP%\thinking_robot_log.txt,' clearly indicates the author's goal of creating a metamorphic script that can evolve over time," Google added.

The tech giant also said it discovered multiple variations of PROMPTFLUX incorporating LLM-driven code regeneration, with one version using a prompt to rewrite the malware's entire source code every hour by instructing the LLM to act as an "expert VB Script obfuscator."

PROMPTFLUX is assessed to be in a development or testing phase, with the malware currently lacking any means to compromise a victim network or device. It's currently not known who is behind the malware, but signs point to a financially motivated threat actor that has adopted a broad, geography- and industry-agnostic approach to target a wide range of users.

Google also noted that adversaries are going beyond utilizing AI for simple productivity gains to create tools that are capable of adjusting their behavior in the midst of execution, not to mention developing purpose-built tools that are then sold on underground forums for financial gain. Other instances of LLM-powered malware observed by the company include the FruitShell reverse shell, the QuietVault credential stealer, and the experimental PromptLock ransomware.

On the Gemini side, the company said it observed a China-nexus threat actor abusing its AI tool to craft convincing lure content, build technical infrastructure, and design tooling for data exfiltration. In at least one instance, the threat actor is said to have reframed their prompts by identifying themselves as a participant in a capture-the-flag (CTF) exercise to bypass guardrails and trick the AI system into returning useful information that can be leveraged to exploit a compromised endpoint.

"The actor appeared to learn from this interaction and used the CTF pretext in support of phishing, exploitation, and web shell development," Google said. "The actor prefaced many of their prompts about exploitation of specific software and email services with comments such as 'I am working on a CTF problem' or 'I am currently in a CTF, and I saw someone from another team say ...'
This approach provided advice on the next exploitation steps in a 'CTF scenario.'"

GTIG also documented other instances of Gemini abuse by state-sponsored actors from China, Iran, and North Korea to streamline their operations, including reconnaissance, phishing lure creation, command-and-control (C2) development, and data exfiltration.

Furthermore, GTIG said it recently observed UNC1069 employing deepfake images and video lures impersonating individuals in the cryptocurrency industry in its social engineering campaigns to distribute a backdoor called BIGMACHO to victim systems under the guise of a Zoom software development kit (SDK). It's worth noting that some aspects of the activity share similarities with the GhostCall campaign recently disclosed by Kaspersky.

The development comes as Google said it expects threat actors to "move decisively from using AI as an exception to using it as the norm" in order to boost the speed, scope, and effectiveness of their operations, thereby allowing them to mount attacks at scale.

"The increasing accessibility of powerful AI models and the growing number of businesses integrating them into daily operations create perfect conditions for prompt injection attacks," it said. "Threat actors are rapidly refining their techniques, and the low-cost, high-reward nature of these attacks makes them an attractive option."
[3]
Hackers are already using AI-enabled malware, Google says
Why it matters: The discovery suggests adversarial hackers are moving closer to operationalizing generative AI to supercharge their attacks.

Driving the news: Researchers in Google's Threat Intelligence Group have discovered two new malware strains -- PromptFlux and PromptSteal -- that use large language models to change their behavior mid-attack.

* Both malware strains can "dynamically generate malicious scripts, obfuscate their own code to evade detection and leverage AI models to create malicious functions on demand," according to the report.

Zoom in: Google's team found PromptFlux while scanning uploads to VirusTotal, a popular malware-scanning tool, for any code that called back to Gemini.

* The malware appears to be in active development: Researchers observed the author uploading updated versions to VirusTotal, likely to test how good it is at evading detection. It uses Gemini to rewrite its own source code, disguise activity and attempt to move laterally to other connected systems.
* Meanwhile, Russian military hackers have used PromptSteal, another new AI-powered malware, in cyberattacks on Ukrainian entities, according to Google. The Ukrainian government first discovered the malware in July.
* Unlike conventional malware, PromptSteal lets hackers interact with it using prompts, much like querying an LLM. It's built around an open-source model hosted on Hugging Face and designed to move around a system and exfiltrate data as it goes.

Reality check: Both malware strains are pretty nascent, Google says. But they mark a major step toward the future that many security executives have feared.

Between the lines: PromptSteal's reliance on an open-source model is something Google's team is watching closely, Billy Leonard, tech lead at Google Threat Intelligence Group, told Axios.

* "What we're concerned about there is that with Gemini, we're able to add guardrails and safety features and security features to those to mitigate this activity," Leonard said. "But as (hackers) download these open-source models, are they able to turn down the guardrails?"

The big picture: The underground cybercrime market for AI tools has matured significantly in the past year, the report says.

* Researchers have seen advertisements for AI tools that could write convincing phishing emails, create deepfakes and identify software vulnerabilities.
* That makes it easier for even unskilled cybercriminals to launch attacks well beyond their own capabilities.

Yes, but: Most attackers don't need AI to do damage and are still overwhelmingly relying on common tactics, like phishing emails and stolen credentials, incident responders have told Axios.

* "This isn't 'the sky is falling, end of the world,'" Leonard said. "They're adopting technologies and capabilities that we're also adopting."

Go deeper: AI is about to supercharge cyberattacks
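As a concrete illustration of the hunting approach described in the Axios piece above, where GTIG scanned VirusTotal uploads for code that calls back to Gemini, here is a minimal Python sketch of that kind of heuristic. It is not Google's actual pipeline: the endpoint hostname, file extensions, and function name are assumptions chosen for illustration, and a real hunt would use far richer signatures than a single string match.

```python
# Minimal hunting sketch (illustrative, not Google's pipeline): flag script
# files that reference the public Gemini / Generative Language API host,
# the kind of LLM call-back GTIG describes looking for in uploaded samples.
import pathlib
import re

# Hostname used by the public Gemini API (assumption for this sketch).
GEMINI_API_HOST = "generativelanguage.googleapis.com"
# Script types worth checking; PromptFlux itself is VBScript.
SCRIPT_EXTENSIONS = {".vbs", ".js", ".ps1", ".py", ".lua"}

def find_gemini_callbacks(root: str) -> list[pathlib.Path]:
    """Return script files under `root` whose contents mention the Gemini API host."""
    pattern = re.compile(re.escape(GEMINI_API_HOST), re.IGNORECASE)
    hits = []
    for path in pathlib.Path(root).rglob("*"):
        if not path.is_file() or path.suffix.lower() not in SCRIPT_EXTENSIONS:
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue  # unreadable file; skip and keep scanning
        if pattern.search(text):
            hits.append(path)
    return hits

if __name__ == "__main__":
    # "./samples" is a hypothetical directory of collected script files.
    for match in find_gemini_callbacks("./samples"):
        print(f"possible LLM call-back: {match}")
```

A production workflow would combine this kind of string indicator with behavioral signals (for example, hard-coded API keys or prompt text requesting obfuscated code), but the basic idea of flagging scripts that embed an LLM API endpoint is the same.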
[4]
Google warns that a new era of self-evolving, AI-driven malware has begun - SiliconANGLE
A new report out today from Google LLC's Threat Intelligence Group warns that there has been a major shift in cybercrime as attackers are no longer using artificial intelligence solely for productivity but are now deploying AI-enabled malware directly in active operations.

The GTIG AI Threat Tracker: Advances in Threat Actor Usage of AI Tools report highlights how state-sponsored and criminal groups are leveraging large language models such as Gemini and other publicly available systems to automate, adapt and scale up attacks across the entire lifecycle.

In a notable first, Google's researchers have identified malware families, including PROMPTFLUX, PROMPTSTEAL and PROMPTLOCK, that integrate AI during execution to dynamically generate malicious code and obfuscate their behavior. PROMPTFLUX, for example, interacts with the Gemini application programming interface to rewrite its own VBScript every hour, creating an evolving "thinking robot" that continually mutates to avoid antivirus detection. PROMPTSTEAL, used by the Russia-linked APT28 threat group, queries open-source language models on Hugging Face to generate Windows commands that harvest files and system data before exfiltration.

The report states that the rise of "just-in-time" AI attacks is a new milestone in adversarial use of generative models and represents a move toward autonomous, self-modifying malware. The researchers note that while many examples remain experimental, the trend signals how attackers will soon combine AI reasoning and automation to outpace traditional defenses.

Another area of concern raised in the report is social engineering aimed at bypassing AI safety guardrails. Threat actors from Iran and allegedly from China were observed posing as students, researchers or participants in "capture-the-flag" cybersecurity contests to trick Gemini into providing restricted vulnerability or exploitation data. In one case, Iran-backed MUDDYCOAST accidentally revealed its own command-and-control infrastructure while using Gemini to debug a malware script, a mistake that allowed Google to dismantle its operations.

Not surprisingly, the underground economy for AI-driven hacking tools has also matured rapidly. The researchers found dozens of multifunctional offerings advertised in English- and Russian-language forums, selling capabilities such as phishing-email generation, deepfake creation and automated malware development. Similar to software-as-a-service offerings, the tools are sold via subscription models, lowering the cost of entry.

State-sponsored groups were found to be the most prolific adopters. North Korea's MASAN and PUKCHONG have used Gemini for cryptocurrency theft campaigns and exploit development, while Iran's APT42 experimented with a "Data Processing Agent" that turned natural-language requests into SQL queries to extract personal information.

Google says it has disabled accounts and assets associated with these activities and used the intelligence to harden its models and classifiers against further misuse.

"The potential of AI, especially generative AI, is immense," the report concludes. "As innovation moves forward, the industry needs security standards for building and deploying AI responsibly."

To address the increasing risk, Google offers the Secure AI Framework, a foundational blueprint aimed at helping organizations design, build and deploy AI systems responsibly.
SAIF serves as both a technical and ethical guide to establish security principles that span the entire AI lifecycle, from data collection and model training to deployment and monitoring.
Google's Threat Intelligence Group has identified new malware families that leverage AI models like Gemini to dynamically modify their code during execution, marking a significant shift in cybercrime tactics toward self-evolving threats.
Google's Threat Intelligence Group (GTIG) has identified a paradigm shift in cybercrime, discovering new malware families that integrate large language models directly into their execution processes. This represents the first known instances of malware using AI for "just-in-time" self-modification, enabling dynamic alterations mid-execution that achieve unprecedented operational versatility compared to traditional malware [1].

The most notable discovery is PromptFlux, an experimental VBScript dropper that leverages Google's Gemini LLM to generate obfuscated code variants. Its "Thinking Robot" module periodically queries Gemini to obtain new code for evading antivirus software, with one version instructing the AI to rewrite the malware's entire source code every hour [2]. The malware attempts persistence through Startup folder entries and spreads laterally across removable drives and network shares.

Beyond experimental malware, Google has documented several AI-powered tools already deployed in active operations. PromptSteal, also known as LameHug, has been used by Russian military hackers in cyberattacks against Ukrainian entities; the Ukrainian government first discovered the malware in July [3]. Unlike conventional malware, PromptSteal allows hackers to interact with it using natural language prompts, and it is built around an open-source model hosted on Hugging Face.

Other identified malware includes FruitShell, a PowerShell reverse shell with hard-coded prompts designed to bypass LLM-powered security analysis, and QuietVault, a JavaScript credential stealer that targets GitHub and NPM tokens while leveraging on-host AI CLI tools to search for additional secrets [1].

Google's investigation revealed extensive abuse of AI models by state-sponsored groups across multiple nations. Chinese threat actors posed as capture-the-flag participants to bypass Gemini's safety filters, using the model to find vulnerabilities and craft phishing lures. Iranian group MuddyCoast pretended to be a student to use Gemini for malware development, accidentally exposing command-and-control domains during debugging sessions, while Iran's APT42 abused Gemini for phishing lures and data analysis [2].

North Korean groups Masan and Pukchong utilized Gemini for cryptocurrency theft campaigns and for developing code targeting edge devices and browsers, while China's APT41 enhanced its OSSTUN C2 framework using AI-assisted code obfuscation [4].

The cybercrime marketplace for AI-powered tools has rapidly matured, with advertisements appearing in both English and Russian-speaking underground forums. These offerings range from deepfake generation utilities to comprehensive malware development services, marketed similarly to legitimate AI tools with emphasis on workflow efficiency [1]. The subscription-based model of these services significantly lowers the technical barrier for launching sophisticated attacks, enabling even unskilled cybercriminals to deploy capabilities well beyond their native expertise [3].

Google has taken immediate action by disabling associated accounts and assets, while reinforcing model safeguards based on observed attack tactics. The company has also introduced the Secure AI Framework (SAIF) as a foundational blueprint for organizations to design, build, and deploy AI systems responsibly [4].

Billy Leonard from Google's Threat Intelligence Group expressed particular concern about the shift toward open-source models, noting that while commercial AI services like Gemini can implement guardrails and safety features, downloaded open-source models may allow attackers to disable these protections entirely [3].

Summarized by Navi