7 Sources
[1]
The Era of AI-Generated Ransomware Has Arrived
As cybercrime surges around the world, new research increasingly shows that ransomware is evolving as a result of widely available generative AI tools. In some cases, attackers are using AI to draft more intimidating and coercive ransom notes and conduct more effective extortion attacks. But cybercriminals' use of generative AI is rapidly becoming more sophisticated. Researchers from the generative AI company Anthropic today revealed that attackers are leaning on generative AI more heavily -- sometimes entirely -- to develop actual malware and offer ransomware services to other cybercriminals.

Ransomware criminals have recently been identified using Anthropic's large language model Claude and its coding-specific model, Claude Code, in the ransomware development process, according to the company's newly released threat intelligence report. Anthropic's findings add to separate research this week from the security firm ESET that highlights an apparent proof of concept for a type of ransomware attack executed entirely by local LLMs running on a malicious server. Taken together, the two sets of findings highlight how generative AI is pushing cybercrime forward and making it easier for attackers -- even those who don't have technical skills or ransomware experience -- to execute such attacks.

"Our investigation revealed not merely another ransomware variant, but a transformation enabled by artificial intelligence that removes traditional technical barriers to novel malware development," researchers from Anthropic's threat intelligence team wrote.

Over the last decade, ransomware has proven an intractable problem. Attackers have become increasingly ruthless and innovative to keep victims paying out. By some estimates, the number of ransomware attacks hit record highs at the start of 2025, and criminals continue to make hundreds of millions of dollars per year. As former US National Security Agency and Cyber Command chief Paul Nakasone put it at the Defcon security conference in Las Vegas earlier this month: "We are not making progress against ransomware." Adding AI into the already hazardous ransomware cocktail only increases what hackers may be able to do.

According to Anthropic's research, a cybercriminal threat actor based in the United Kingdom, tracked as GTG-5004 and active since the start of this year, used Claude to "develop, market, and distribute ransomware with advanced evasion capabilities." On cybercrime forums, GTG-5004 has been selling ransomware services ranging from $400 to $1,200, with different tools provided at different package levels, according to Anthropic's research. The company says that while GTG-5004's products include a range of encryption capabilities, software reliability tools, and methods designed to help hackers avoid detection, the developer does not appear to be technically skilled. "This operator does not appear capable of implementing encryption algorithms, anti-analysis techniques, or Windows internals manipulation without Claude's assistance," the researchers write.

Anthropic says it banned the account linked to the ransomware operation and introduced "new methods" for detecting and preventing malware generation on its platforms. These include using the pattern-matching rules known as YARA rules, along with known malware hashes, to screen files uploaded to its platforms.
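Anthropic hasn't published its actual detection rules, but the two techniques named above -- YARA pattern matching and known-hash lookups -- are standard and easy to illustrate. Below is a minimal sketch in Python, assuming the third-party yara-python bindings; the rule text and the hash list are hypothetical placeholders invented for illustration, not real indicators.

```python
# Minimal sketch of the two screening techniques described above:
# YARA pattern matching plus known-bad hash lookups. The rule and the
# hash below are hypothetical placeholders, not real indicators.
import hashlib

import yara  # third-party: pip install yara-python

# A toy YARA rule: flag files that embed ransom-note phrasing
# alongside a Windows crypto API name.
RULE_SOURCE = r"""
rule toy_ransomware_indicator
{
    strings:
        $note = "your files have been encrypted" nocase
        $api  = "CryptEncrypt"
    condition:
        $note and $api
}
"""

KNOWN_BAD_SHA256 = {
    # Hypothetical hash of a previously seen sample.
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def screen_upload(data: bytes) -> list[str]:
    """Return a list of reasons an uploaded file looks suspicious."""
    findings = []
    if hashlib.sha256(data).hexdigest() in KNOWN_BAD_SHA256:
        findings.append("matches known malware hash")
    rules = yara.compile(source=RULE_SOURCE)
    for match in rules.match(data=data):
        findings.append(f"YARA rule hit: {match.rule}")
    return findings

if __name__ == "__main__":
    sample = b"... Your files have been encrypted ... CryptEncrypt ..."
    print(screen_upload(sample))
```

Hash lookups only catch byte-identical samples, which is why they're paired with YARA's looser pattern matching -- a distinction that matters later in this story, when the malware regenerates its own code on every run.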
[2]
Mysterious 'PromptLock' Ransomware Is Harnessing OpenAI's Model
Whether for malicious purposes or simply research, someone appears to be using OpenAI's open-source model for ransomware attacks, according to antivirus company ESET. On Tuesday, ESET said it had discovered "the first known AI-powered ransomware," which the company has named PromptLock. It uses OpenAI's gpt-oss:20b model, which the company released earlier this month as one of two open-source models, meaning a user can freely use and modify the code. The model can also run on high-end desktop PCs or laptops with a 16GB GPU.

ESET says PromptLock runs gpt-oss:20b "locally" on an infected device to help it generate malicious code, using "hardcoded" text prompts. As evidence, the cybersecurity company posted an image of PromptLock's code that appears to show the text prompts and mentions the gpt-oss:20b model name. The ransomware then executes the malicious code, written in the Lua programming language, to search through an infected computer, steal files, and perform encryption. "These Lua scripts are cross-platform compatible, functioning on Windows, Linux, and macOS," ESET warned. "Based on the detected user files, the malware may exfiltrate data, encrypt it, or potentially destroy it."

ESET appears to have discovered PromptLock through malware samples uploaded to VirusTotal, a Google-owned service that catalogs malware and checks files for malicious threats. However, the current findings suggest PromptLock might simply be a "proof-of-concept" or "work-in-progress" rather than an operational attack. ESET noted that the file-destruction feature in the ransomware hasn't been implemented yet. One security researcher also tweeted that PromptLock actually belongs to them.

At 13GB, the gpt-oss:20b model's size raises questions about viability, and running it could hog the GPU's video memory. However, ESET tells PCMag: "The attack is highly viable. The attacker does not need to download the entire gpt-oss model, which can be several gigabytes in size. Instead, they can establish a proxy or tunnel from the compromised network to a server running the model and accessible via the Ollama API. This technique, known as Internal Proxy (MITRE ATT&CK T1090.001), is commonly used in modern cyberattacks."

In its research, ESET also argues that it's "our responsibility to inform the cybersecurity community about such developments." In its own statement, OpenAI said: "We thank the researchers for sharing their findings. It's very important to us that we develop our models safely. We take steps to reduce the risk of malicious use, and we're continually improving safeguards to make our models more robust against exploits. For example, you can read about our research and approach in the model card." OpenAI previously tested its more powerful open-weight model, gpt-oss-120b, and concluded that despite fine-tuning, it "did not reach High capability in Biological and Chemical Risk or Cyber risk."

Disclosure: Ziff Davis, PCMag's parent company, filed a lawsuit against OpenAI in April 2025, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.
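The mechanics ESET describes rest on one detail worth making concrete: Ollama serves models over a plain HTTP API on the local machine, so nothing in the round trip ever reaches OpenAI's servers. The sketch below is not PromptLock's code -- the prompt is deliberately benign -- it just shows Ollama's documented /api/generate pattern, and the same URL could point at a tunneled internal-proxy host, per ESET's note above.

```python
# Rough sketch of the documented Ollama HTTP API that ESET says
# PromptLock drives. Not the malware's code: the prompt is benign,
# and the point is only that the request stays on localhost (or, per
# ESET, on a tunneled internal proxy), never touching OpenAI's servers.
import json
from urllib.request import Request, urlopen

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default port

def generate(prompt: str, model: str = "gpt-oss:20b") -> str:
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,   # one JSON reply instead of a token stream
    }).encode("utf-8")
    req = Request(OLLAMA_URL, data=payload,
                  headers={"Content-Type": "application/json"})
    with urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(generate("In one sentence, what is the Lua programming language?"))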
[3]
The first AI-powered ransomware has been discovered -- "PromptLock" uses local AI to foil heuristic detection and evade API tracking
Hackers finally discover a practical use for local AI models.

ESET today announced the discovery of "the first known AI-powered ransomware." The ransomware in question has been dubbed PromptLock, presumably because seemingly everything related to generative AI has to be prefixed with "prompt." ESET said that this malware uses an open-weight large language model developed by OpenAI to generate scripts that can perform a variety of functions on Windows, macOS, and Linux systems while confounding defensive tools by exhibiting slightly different behavior each time.

"PromptLock leverages Lua scripts generated from hard-coded prompts to enumerate the local filesystem, inspect target files, exfiltrate selected data, and perform encryption," ESET said in a Mastodon post about the malware. "Based on the detected user files, the malware may exfiltrate data, encrypt it, or potentially destroy it. Although the destruction functionality appears to be not yet implemented."

Lua might seem like an odd choice of programming language for ransomware; it's mostly known for being used to develop games within Roblox or plugins for the Neovim text editor. But it's actually a general-purpose language that offers a variety of advantages to ransomware operators -- including good performance, cross-platform support, and a focus on simplicity that makes it well-suited to "vibe coding."

It's important to remember that LLM output is typically non-deterministic: sampling means the output can change even if you provide the same prompt to the same model on the same device. That's maddening if you expect an LLM to exhibit the exact same behavior over time, but ransomware operators don't necessarily want that, because consistent output makes it easier for defensive tooling to associate patterns of behavior with known malware (see the sketch below). PromptLock "uses the gpt-oss:20b model from OpenAI locally via the Ollama API to generate malicious Lua scripts on the fly," which helps it to evade detection.

The fact that the model runs locally also means OpenAI can't snitch on the ransomware operators -- if they had to call an API on OpenAI's servers every time they generated one of these scripts, the jig would be up. The pitfalls of vibe coding don't really apply, either, since the scripts are running on someone else's system.

Maybe this will make for a decent consolation prize for AI companies. Yeah, they're facing massive lawsuits. Sure, basically nobody has seen any benefits from adopting their services. Okay, so even Meta's cutting back on its AI-related spending spree. But nobody can say that AI is useless -- it's convinced at least some ransomware operators to use local models in their warez! That counts for something, right?
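To make the signature problem concrete: two scripts that do exactly the same thing but differ in a single identifier already hash to completely different values, and an LLM re-rolls that surface form on every generation. A toy sketch -- the Lua-ish strings are invented for illustration, nothing here is PromptLock's code:

```python
# Toy demonstration of why byte-level signatures fail against
# regenerated code: two functionally identical scripts that differ in
# one identifier already produce unrelated hashes.
import hashlib

script_a = 'for f in list_files(root) do process(f) end'
script_b = 'for file in list_files(root) do process(file) end'

print(hashlib.sha256(script_a.encode()).hexdigest())
print(hashlib.sha256(script_b.encode()).hexdigest())  # entirely different
```

Hash- and byte-pattern-based detection keys on exact sequences, so even trivial surface variation forces defenders toward behavioral analysis instead.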
[4]
First AI-powered ransomware PoC spotted
ESET malware researchers Anton Cherepanov and Peter Strycek have discovered what they describe as the "first known AI-powered ransomware," which they named PromptLock. The good news, according to the duo, who detailed PromptLock in a series of social media posts and screenshots on Tuesday, is that the malware doesn't appear to be fully functional -- yet.

"Although multiple indicators suggest the sample is a proof-of-concept (PoC) or work-in-progress rather than fully operational malware deployed in the wild, we believe it is our responsibility to inform the cybersecurity community about such developments," Cherepanov and Strycek wrote.

However, despite the lack of in-the-wild PromptLock infections, the discovery does show that AI has made cybercriminals' attack chains that much easier to build, and it should serve as a warning to defenders.

The PromptLock malware uses OpenAI's gpt-oss:20b model, one of the two free open-weight models the company released earlier this month. It runs locally on an infected device through the Ollama API, and it generates malicious Lua scripts on the fly, likely to make detection more difficult.

"PromptLock leverages Lua scripts generated from hard-coded prompts to enumerate the local filesystem, inspect target files, exfiltrate selected data, and perform encryption," the researchers explained, adding that the Lua scripts work on Windows, Linux, and macOS machines. The malware then decides which files to search, copy, encrypt, or even destroy, based on the file type and contents. But according to the researchers, "the destruction functionality appears to be not yet implemented."

PromptLock uses the SPECK 128-bit encryption algorithm to encrypt files, and the ransomware itself is written in Go. The ESET team said they've identified both Windows and Linux variants uploaded to VirusTotal. ®
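For context on that cipher choice: SPECK is a lightweight block cipher published by NSA researchers in 2013, built from nothing but 64-bit addition, rotations, and XOR -- which is exactly what makes it easy for a low-skill (or machine) author to implement. The educational sketch below assumes the 128-bit-block, 128-bit-key variant, since ESET's note doesn't pin down the exact parameterization, and is checked against the test vector from the SPECK authors' paper.

```python
# Educational sketch of SPECK-128/128, the published lightweight block
# cipher ESET says PromptLock uses (the exact key size is an assumption).
# The whole cipher is just 64-bit add, rotate, and XOR, over 32 rounds.
MASK64 = (1 << 64) - 1

def ror(x: int, r: int) -> int:
    """Rotate a 64-bit word right by r bits."""
    return ((x >> r) | (x << (64 - r))) & MASK64

def rol(x: int, r: int) -> int:
    """Rotate a 64-bit word left by r bits."""
    return ((x << r) | (x >> (64 - r))) & MASK64

def speck128_128_encrypt(x: int, y: int, key_hi: int, key_lo: int):
    """Encrypt one 128-bit block (x, y) under a 128-bit key."""
    l, k = key_hi, key_lo          # key-schedule state; k is round key 0
    for i in range(32):
        # Round function on the block, using round key k.
        x = ((ror(x, 8) + y) & MASK64) ^ k
        y = rol(y, 3) ^ x
        # Derive the next round key in lockstep.
        l = ((ror(l, 8) + k) & MASK64) ^ i
        k = rol(k, 3) ^ l
    return x, y

if __name__ == "__main__":
    # Test vector from the SPECK authors' paper (Speck128/128).
    ct = speck128_128_encrypt(0x6C61766975716520, 0x7469206564616D20,
                              0x0F0E0D0C0B0A0908, 0x0706050403020100)
    assert ct == (0xA65D985179783265, 0x7860FEDF5C570D18)
    print("SPECK-128/128 test vector OK")
```

SPECK was designed for constrained hardware rather than file encryption, so its appearance here says more about implementation convenience than cryptographic sophistication.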
[5]
A hacker used AI to create ransomware that evades antivirus detection
Vibe coding is all the rage among enthusiasts who are using large language models (or "AI") to replace conventional software development, so it's not shocking that vibe coding has been used to power ransomware, too. Security firm ESET says it has spotted the first example of ransomware powered and enabled by an LLM -- specifically, a model from ChatGPT maker OpenAI.

According to a blog post from ESET Research featuring researcher Anton Cherepanov, the company has detected a piece of malware "created by the OpenAI gpt-oss:20b model." PromptLock, a fairly standard ransomware package, includes embedded prompts sent to the locally stored LLM. Because LLM outputs are unique and non-repeating with each prompt, the malware can evade detection by standardized antivirus setups, which are designed to search for specific flags. ESET elaborates in a Mastodon post, spotted by Tom's Hardware.

PromptLock uses Lua scripts to inspect files on a local system, encrypt them, and send sensitive data to a remote computer. It appears to be searching for Bitcoin information specifically, and thanks to the wide-open nature of the OpenAI model and the Ollama API, it can work on Windows, Mac, and Linux. Because gpt-oss:20b is a lightweight, open-source AI model that can run on local PC hardware, it doesn't need to call back to more elaborate systems like ChatGPT -- and as a result, it can't be outright blocked by OpenAI itself.

The ransomware is written in Go and generates Lua scripts -- the latter a language familiar to anyone who's made games in, say, Roblox. The point being that it's possible PromptLock was created by someone with little-to-no experience in conventional programming. Though the output is variable, the prompts themselves are static, so Cherepanov says that "the current implementation does not pose a serious threat" despite its novelty. "Script kiddies are now prompt kiddies," said one Mastodon user in reply.
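That evasion story cuts both ways: when byte-level signatures stop working, defenders lean on behavioral heuristics instead. One classic, sketched below using only the standard library, is entropy monitoring -- well-encrypted file contents are close to uniformly random, so a sudden jump in Shannon entropy across many user files is a common ransomware tell. The 7.9 threshold is illustrative, not a real product setting.

```python
# Sketch of an entropy heuristic: ciphertext is close to uniformly
# random (~8 bits/byte), while ordinary documents score much lower.
# Real endpoint products combine this with many other signals; the
# threshold here is purely illustrative.
import math
import os
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy of a byte string, in bits per byte (0.0-8.0)."""
    if not data:
        return 0.0
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in Counter(data).values())

def looks_encrypted(data: bytes, threshold: float = 7.9) -> bool:
    return shannon_entropy(data) > threshold

if __name__ == "__main__":
    text = b"quarterly report: revenue up, costs down " * 200
    print(f"text:   {shannon_entropy(text):.2f} bits/byte")             # low
    print(f"random: {shannon_entropy(os.urandom(1 << 16)):.2f} bits/byte")  # ~8.0
    print(looks_encrypted(os.urandom(1 << 16)))                         # True
```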
[6]
Warning: AI-powered ransomware is real and in the wild
As if there weren't enough privacy concerns in the world, AI ransomware is now reportedly a thing. Cybersecurity firm ESET said that it discovered the first-ever AI-powered ransomware, which it has dubbed PromptLock. "The PromptLock malware uses the gpt-oss:20b model from OpenAI locally via the Ollama API to generate malicious Lua scripts on the fly, which it then executes," the company wrote.

The ransomware, according to ESET, runs the model locally on infected devices via an API, meaning OpenAI cannot detect the activity or alert anyone that ransomware is operating. The AI-powered ransomware can generate scripts that perform functions on the device while evading defensive tools, because the AI-generated results are different each time. "Based on the detected user files, the malware may exfiltrate data, encrypt it, or potentially destroy it," ESET wrote.
[7]
AI Meets Ransomware, The New Cyber Threat
Ransomware has long been one of the most feared cyber threats on the internet, and for good reason. It's fast, disruptive, and increasingly effective at locking up your most important files and demanding payment in exchange for their return. It's not just businesses that get hit, either. Everyday people have lost family photos, tax records, financial files, and entire digital histories to these attacks. But now, a new and unsettling twist is emerging: ransomware powered by artificial intelligence.

In a recent case discussed by Avast researchers in the latest Gen Threat Report, a ransomware gang known as FunkSec admitted to using AI to streamline parts of their criminal operation. While the ransomware itself wasn't fully built by AI, the attackers used generative tools to assist with tasks like coding, phishing templates, and internal tooling. It's one of the first known cases of AI playing a direct role in ransomware development - and likely not the last.

While AI helped FunkSec move faster, their malware wasn't perfect. In fact, a small flaw in their encryption logic became their undoing. Behind the scenes, Avast's security experts quietly discovered the flaw - a cryptographic weakness that made it possible to decrypt the locked files without paying the ransom (the general idea is illustrated in the sketch below). Working in close coordination with international law enforcement, the team developed a custom decryption tool and discreetly helped dozens of victims recover their data. Now that the FunkSec gang has gone quiet, that tool is being made available to the public for free.

This marks the latest in a long line of free ransomware decryptors Avast has released - more than 40 over the past decade under the Avast and AVG brands. It's a reminder that while ransomware continues to evolve, so does our ability to fight back.

Most ransomware doesn't just appear out of nowhere - it needs a way into your system, and it usually arrives through a handful of common infection routes. It often strikes without warning, too, but there are red flags that can tip you off early - or help you respond quickly if you've been infected. And while no defence is 100% foolproof, there are several ways to reduce your risk of falling victim.

AI is already changing the cybersecurity landscape. It's making attacks faster to build and easier to launch -- even for criminals with limited technical skills. But that same technology, combined with the expertise of global threat researchers, is also being used to create smarter, faster defences. At Avast, we believe no one should have to pay to get their digital life back. That's why we continue to invest in free tools and public resources to help ransomware victims recover safely -- and why we'll keep innovating as the threat evolves.
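Avast hasn't published the exact cryptographic weakness it found in FunkSec's code, but the general class of flaw is well documented: if malware derives its encryption keystream from a guessable seed, defenders can brute-force the seed space and rebuild the key without anyone paying. The sketch below is a deliberately toy example of that pattern -- the seed, the PRNG-as-keystream design, and the plaintext check are all invented for illustration and are not FunkSec's actual scheme.

```python
# Toy illustration of a classic ransomware crypto flaw: deriving the
# keystream from a guessable seed. NOT FunkSec's actual scheme (Avast
# hasn't published it); it just shows why weak seeding lets defenders
# build a free decryptor.
import random

def keystream(seed: int, n: int) -> bytes:
    rng = random.Random(seed)          # non-cryptographic PRNG
    return bytes(rng.randrange(256) for _ in range(n))

def xor(data: bytes, ks: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, ks))

if __name__ == "__main__":
    plaintext = b"family photos, tax records, financial files"
    # Flawed "malware": seeds its PRNG with a low-resolution timestamp.
    secret_seed = 1_724_800_000        # pretend infection time, unknown to us
    ciphertext = xor(plaintext, keystream(secret_seed, len(plaintext)))

    # "Decryptor": brute-force the plausible seed window and look for
    # recognizable plaintext (real tools check file headers instead).
    for guess in range(1_724_799_000, 1_724_801_000):
        candidate = xor(ciphertext, keystream(guess, len(ciphertext)))
        if b"photos" in candidate:
            print(f"recovered with seed {guess}: {candidate.decode()}")
            break
```

A properly seeded, authenticated scheme would leave nothing to brute-force, which is why most modern ransomware cannot be decrypted this way -- free decryptors exist only when the authors make a mistake like this one.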
Researchers have discovered PromptLock, the first known AI-powered ransomware, which uses OpenAI's open-source model to generate malicious code and evade detection. This development marks a significant shift in cybercrime tactics and raises concerns about the potential misuse of AI in malware creation.
In a significant development in the cybersecurity landscape, researchers have identified what appears to be the first instance of AI-powered ransomware, dubbed "PromptLock". This discovery, made by antivirus company ESET, marks a concerning evolution in the capabilities of cybercriminals [1].

PromptLock utilizes OpenAI's open-source model gpt-oss:20b, which can run locally on high-end desktop PCs or laptops with a 16GB GPU. The ransomware employs this model to generate malicious Lua scripts on the fly, enabling it to perform various functions such as enumerating the local filesystem, inspecting target files, exfiltrating data, and encrypting files [2].
PromptLock's architecture is designed to be cross-platform compatible, functioning on Windows, Linux, and macOS. It uses the SPECK 128-bit encryption algorithm and is written in Go [4]. The use of locally-run AI models allows the ransomware to evade detection by traditional antivirus software and avoid API tracking that could alert OpenAI to its malicious use [3].
The emergence of AI-powered ransomware like PromptLock represents a significant shift in the cybercrime landscape. It demonstrates how generative AI is pushing cybercrime forward and lowering the barrier to entry for attackers, even those without extensive technical skills or ransomware experience [1].

Anthropic, another AI company, has reported that cybercriminals are increasingly using AI tools like its large language model Claude to develop, market, and distribute ransomware with advanced evasion capabilities. In some cases, these tools are being sold as services on cybercrime forums for prices ranging from $400 to $1,200 [1].
While PromptLock's discovery is concerning, researchers note that it appears to be a proof-of-concept or work-in-progress rather than fully operational malware. Some functionalities, such as file destruction, have not yet been implemented [2].

However, security experts warn that the attack is highly viable. Even though the AI model used is large (13GB), attackers can establish a proxy or tunnel from the compromised network to a server running the model, making it a practical threat [1].
In response to these developments, AI companies are implementing new safeguards. Anthropic, for instance, has banned accounts linked to ransomware operations and introduced new methods for detecting and preventing malware generation on its platforms [1].
OpenAI, the creator of the model used in PromptLock, has stated that they take steps to reduce the risk of malicious use and are continually improving safeguards to make their models more robust against exploits [1].

The discovery of PromptLock serves as a wake-up call for the cybersecurity community. As AI technologies become more accessible, there is an increasing need for robust defense mechanisms and responsible AI development practices to mitigate the risks posed by AI-powered malware.
Summarized by Navi