6 Sources
[1]
AI-powered PromptLocker ransomware is just an NYU research project -- the code worked as a typical ransomware, selecting targets, exfiltrating selected data and encrypting volumes
ESET said on Aug. 26 that it had discovered the first AI-powered ransomware, which it dubbed PromptLocker, in the wild. But it seems that wasn't the case: New York University (NYU) researchers have claimed responsibility for the malware ESET found. It turns out PromptLocker is actually an experiment called "Ransomware 3.0" conducted by researchers at NYU's Tandon School of Engineering. A spokesperson for the school told Tom's Hardware a Ransomware 3.0 sample was uploaded to VirusTotal, a malware analysis platform, and then picked up by the ESET researchers by mistake.

ESET said that the malware "leverages Lua scripts generated from hard-coded prompts to enumerate the local filesystem, inspect target files, exfiltrate selected data, and perform encryption." The company noted that the sample hadn't implemented destructive capabilities, however, which makes sense for a controlled experiment. But the malware does work: NYU said "a simulation malicious AI system developed by the Tandon team carried out all four phases of ransomware attacks -- mapping systems, identifying valuable files, stealing or encrypting data, and generating ransom notes -- across personal computers, enterprise servers, and industrial control systems."

Is that worrisome? Absolutely. But there's a significant difference between academic researchers demonstrating a proof-of-concept and real attackers using that same technique in real-world attacks. Now the study will likely inspire the ne'er-do-wells to adopt similar approaches, especially since it seems to be remarkably affordable. "The economic implications reveal how AI could reshape ransomware operations," the NYU researchers said. "Traditional campaigns require skilled development teams, custom malware creation, and substantial infrastructure investments. The prototype consumed approximately 23,000 AI tokens per complete attack execution, equivalent to roughly $0.70 using commercial API services running flagship models." As if that weren't enough, the researchers said that "open-source AI models eliminate these costs entirely," so ransomware operators won't even have to shell out the 70 cents needed to work with commercial LLM service providers. They'll receive a far better return on investment than anyone pumping money into the AI sector, at least.

But for now that's all still conjecture. This is compelling research, sure, but it seems we're going to have to wait a while longer for the cybersecurity industry's promise that AI will be the future of hacking to come to fruition. (Or be exposed as the same AI boosterism taking place throughout the rest of the tech industry; whichever.) NYU's paper on this study, "Ransomware 3.0: Self-Composing and LLM-Orchestrated," is available here.
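As a quick sanity check on the figures in that quote, the implied per-token pricing can be worked out directly; the blended rate below is derived from the reported ~23,000 tokens and ~$0.70, not a number stated by the NYU team.

```python
# Back-of-the-envelope check of the cost quoted above (~23,000 tokens,
# ~$0.70 per complete attack). The blended per-million-token rate is
# derived here for illustration only; it is not reported in the paper.
tokens_per_attack = 23_000
cost_per_attack_usd = 0.70

implied_rate = cost_per_attack_usd / tokens_per_attack * 1_000_000
print(f"implied blended rate: ~${implied_rate:.0f} per million tokens")
# -> roughly $30 per million tokens, in the same range as flagship
#    commercial API pricing, consistent with the quoted figure.
```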
[2]
Here's how ransomware crims are abusing AI tools
AI-powered ransomware, extortion chatbots, vibe hacking ... just wait until agents replace affiliates

It's no secret that AI tools make it easier for cybercriminals to steal sensitive data and then extort victim organizations. But two recent developments illustrate exactly how much LLMs lower the bar for ransomware and other financially motivated cybercrime -- and provide a glimpse to defenders about what's on the horizon.

ESET malware researchers Anton Cherepanov and Peter Strýček recently sounded the alarm on what they called the "first known AI-powered ransomware," which they named PromptLock. While the malware doesn't appear to be fully functional -- yet -- "in theory, it could be used against organizations," Cherepanov told The Register. "But for now, it looks like proof-of-concept." The researchers found both Windows and Linux variants uploaded to VirusTotal. "And we know that sometimes cyber criminals or malware authors upload their malware to check if it is detected by VirusTotal, to see if it gets flagged by anti-viruses or not," Cherepanov added.

To be fair, the ransomware isn't up to par with Qilin or INC. PromptLock is limited in how many files it can encrypt and it's still rather slow to lock them up, we're told. But its emergence should put defenders on notice that ransomware development via AI is no longer just a theoretical threat.

Around the same time as ESET's malware hunters spotted PromptLock, Anthropic warned that a cybercrime crew used its Claude Code AI tool in a data extortion operation that hit 17 organizations, with the crims demanding ransoms ranging from $75,000 to $500,000 for the stolen data. The model maker said the extortionists used Claude Code in all phases of the operation, from conducting automated reconnaissance and target discovery to exploitation and malware creation. Anthropic responded by banning some of the offending accounts, adding a new classifier to its safety pipeline, and sharing information about the crims with partners. It's not hard to imagine how attackers could get around these protections.

As both network defenders and cybercriminals race to incorporate AI into their arsenals, these types of threats are only going to get worse, especially as a growing number of agents enter the mix. "We also should expect that malicious actors will soon leverage agentic AI to orchestrate and scale their criminal activities," Cisco Talos' head of outreach Nick Biasini told The Register. "If it is cheaper, easier, and more effective for them to spin up virtual agents that identify and contact prospective victims, they will likely do that."

During a Congressional hearing earlier this summer on "Artificial Intelligence and Criminal Exploitation: A New Era of Risk," Ari Redbord, global head of policy at blockchain intelligence firm TRM Labs, testified before lawmakers that his company has documented a 456 percent jump in GenAI-enabled scams within the last year. Right now, this includes using deepfake tech to create extortion videos and, of course, generative AI to craft more realistic phishing emails. He fears that using AI agents to auto-infect machines is next. "What we see AI doing today is supercharging criminal activity that we've seen exist for some time," Redbord told the House Judiciary Subcommittee.
But in the future, he anticipates cybercrime operations that "don't need ransomware affiliates because you can have AI agents that are automatically deploying malware." The Register caught up with Redbord, a former assistant US attorney, after the congressional hearing, and he warned that criminals, along with the rest of the world, are rapidly increasing the pace of their AI development. "We're already seeing ransomware crews experiment with AI across various parts of their operations -- maybe not full autonomous agents just yet, but definitely elements of automation and synthetic content being deployed at scale," Redbord said.

"Right now, AI is being used for phishing, social engineering, voice cloning, scripting extortion messages -- tools that lower the barrier to entry and increase reach," he continued. "While the affiliate model still dominates, the gap between traditional human-run operations and AI-augmented ones is closing fast. It's not hard to imagine AI agents being used for reconnaissance, target selection, or even automated negotiation."

When asked how quickly he expects to see this happen, with ransomware crews axing the middlemen (aka affiliates) and using AI agents to maximize their profits, Redbord said, "I hesitate to put a number on it." But, he added, "this shift feels less like a distant possibility and more like an inevitable progression."

Ransomware operators and extortionists are already employing some of these nefarious AI use cases, according to Michelle Cantos, Google Threat Intelligence Group senior analyst. "Agentic AI is not advanced enough yet to completely replace ransomware affiliates, but can enhance their ability to find information, craft commands, and interpret data," Cantos told The Register. "Instead, we are seeing financially motivated actors utilizing LLMs and deepfake tools to develop malware, create phishing lure content, research and reconnaissance, and vulnerability exploitation."

Global Group, a new ransomware-as-a-service operation -- and possible Black Lock rebrand -- that emerged in June sends its victims a ransom note directing them to access a separate Tor-based negotiation portal where an AI chatbot interacts with victims. "Once accessed, the victim is greeted by an AI-powered chatbot designed to automate communication and apply psychological pressure," Picus Security noted in a July report. "Chat transcripts reviewed by analysts show demands reaching seven-figure sums, such as 9.5 BTC ($1 million at the time), with escalating threats of data publication." The AI integration in this case reduces the ransomware affiliates' workload and moves the negotiation process forward even without human operators, thus allowing Global to scale its business model more rapidly.

Large language models can also help developers write and debug code faster, and that applies to malware development as well, LevelBlue Labs Director Fernando Martinez told The Register. The threat intelligence firm's lead researcher said common AI usages include "rewriting known malware samples into different programming languages, incorporating encryption mechanisms, and requesting explanations of how specific pieces of malicious code work." He pointed to FunkSec ransomware as an example.
"Their tools, including Rust-based ransomware, show signs of having been written or refined with LLM agents, evident from unusually well-documented code in perfect English," Martinez said. "FunkSec operators have reportedly provided source code to AI agents and published the generated output, enabling rapid development with minimal technical effort." In a similar vein: Jamie Levy, director of adversary tactics at Huntress, told The Register how she and her team recently spotted criminals using Make.com, which has a number of AI tools and features built into its no-code platform, to connect apps and APIs for financially motivated scams. "They were heavily leveraging that to build out all of these different bots for business email compromise campaigns and other things," Levy said. Plus, she added, AI makes it easier to find bugs and working exploits, which makes it less time-consuming for ransomware attackers to infect vulnerable systems. "There's definitely this trend of using AI to find these things much quicker," Levy said. "It's kind of like a fuzzer on steroids." As with any type of emerging technology, if it can make their scams more scalable and believable, and thus more likely to end in a financial payout, criminals are going to find creative ways to add it to their tool chests. While AI is the next shiny, new object, it certainly won't be the last. ®
[3]
The crazy, true story behind the first AI-powered ransomware
Within a week, however, it nearly set the security industry on fire over what was believed to be the first-ever AI-powered ransomware.

A group of New York University engineers who had been studying the newest, most sophisticated ransomware strains along with advances in large language models and AI decided to look at the intersection between the two, develop a proof-of-concept for a full-scale, AI-driven ransomware attack - and hopefully have their research selected for presentation at an upcoming security conference. "There's this gap between these two technologies," NYU engineering student and doctoral candidate Md Raz told The Register. "And we think there's a viable threat here. How feasible is an attack that uses AI to do the entire ransomware life cycle? That's how we came up with Ransomware 3.0."

So Raz, along with his fellow researchers, developed an AI system to perform four phases of a ransomware attack. The engineers tested the malware against two models: OpenAI's gpt-oss-20b and its heavier counterpart, gpt-oss-120b. It generates Lua scripts customized for each victim's specific computer setup, maps IT systems, and identifies environments, determining which files are most valuable, and thus most likely to demand a steep extortion payment from a victim organization. "It's more targeted than a regular ransomware campaign that affects the entire system," he said. "It specifically targets a couple of files, so it's a lot harder to detect. And then the attack is super personalized. It's polymorphic, so every time you run it on different systems, or even multiple times on the same system, the generated code is never going to be the same." In addition to stealing and encrypting data, the AI also wrote a personalized ransom note based on user info and bios found on the infected computer.

During testing, the researchers uploaded the malware to VirusTotal to see if any anti-virus software would flag it as malicious. Then the news stories about a new, AI-powered ransomware named PromptLock - and the messages - started coming in. "This is literally, exactly the code that I wrote, and it's the same functions and the same prompts," Raz said. That's when he and the rest of the researchers realized that ESET malware analysts found their Ransomware 3.0 binary on VirusTotal. "And they think it's a real attack."

Another one of Raz's co-authors got a call from a chief information security officer who wanted to discuss defending against this new threat. "My colleague said, 'yeah, we made that. There's a paper on it. You don't need to reverse engineer the binary to come up with the defenses because we already outlined the exact behavior.'"

It all seemed very surreal. "At first I couldn't believe it," Raz said. "I had to sift through all the coverage, make sure it is our project, make sure I'm not misinterpreting it. We had no idea that anyone had found it and started writing about it." The NYU team contacted the ESET researchers, who updated the social media post about PromptLock.

According to Raz, the binary won't function outside of a lab environment, so the good news for defenders (for now, at least) is that the malware isn't going to encrypt any systems or steal any data in the wild. "If attackers wanted to use our specific binary, it would require a lot of modification," he said.
"But this attack was not too complicated to do, and I'm guessing there's a high chance that real attackers are already working on something like this." The lighter model, gpt-oss-20b, complied more readily with the team's queries, Raz added, while the heavier version denied the researchers the code on a more frequent basis, citing OpenAI's policies designed to protect sensitive data. However, it's worth noting that the engineering students didn't jailbreak the model, or inject any malicious prompts. "We just told it directly: generate some code that scans these files, generate what a ransom note might look like," Raz said. "We didn't beat around the bush at all." It's likely that the AI complied because it wasn't asked to generate a full-scale attack, but rather the individual tasks required to pull off a ransomware infection. Still, "once you put these pieces together, it becomes this whole malicious attack, and that is really hard to defend against," Raz said. Around the same time that ESET spotted Raz's malware, and dubbed it the first AI ransomware, Anthropic warned that a cybercrime crew used its Claude Code AI tool in a data extortion operation Between both of these - systems developing malware that even security researchers believe to be a real ransomware PoC, and extortionists using AI in their attacks - it's a good indication that defenders should take note, and start preparing for the inevitable future right now. ®
[4]
Large language models can execute complete ransomware attacks autonomously, research shows
Criminals can use artificial intelligence, specifically large language models, to autonomously carry out ransomware attacks that steal personal files and demand payment, handling every step from breaking into computer systems to writing threatening messages to victims, according to new research from NYU Tandon School of Engineering posted to the arXiv preprint server. The study serves as an early warning to help defenders prepare countermeasures before bad actors adopt these AI-powered techniques.

A simulated malicious AI system developed by the Tandon team carried out all four phases of ransomware attacks -- mapping systems, identifying valuable files, stealing or encrypting data, and generating ransom notes -- across personal computers, enterprise servers, and industrial control systems. This system, which the researchers call "Ransomware 3.0," became widely known recently as "PromptLock," a name chosen by cybersecurity firm ESET when experts there discovered it on VirusTotal, an online platform where security researchers test whether files can be detected as malicious. The Tandon researchers had uploaded their prototype to VirusTotal during testing procedures, and the files there appeared as functional ransomware code with no indication of their academic origin. ESET initially believed they had found the first AI-powered ransomware being developed by malicious actors. While it is the first to be AI-powered, the ransomware prototype is a proof-of-concept that is non-functional outside of the contained lab environment.

"The cybersecurity community's immediate concern when our prototype was discovered shows how seriously we must take AI-enabled threats," said Md Raz, a doctoral candidate in the Electrical and Computer Engineering Department who is the lead author on the Ransomware 3.0 paper the team published publicly. "While the initial alarm was based on an erroneous belief that our prototype was in-the-wild ransomware and not laboratory proof-of-concept research, it demonstrates that these systems are sophisticated enough to deceive security experts into thinking they're real malware from attack groups."

The research methodology involved embedding written instructions within computer programs rather than traditional pre-written attack code. When activated, the malware contacts AI language models to generate Lua scripts customized for each victim's specific computer setup, using open-source models that lack the safety restrictions of commercial AI services. Each execution produces unique attack code despite identical starting prompts, creating a major challenge for cybersecurity defenses. Traditional security software relies on detecting known malware signatures or behavioral patterns, but AI-generated attacks produce variable code and execution behaviors that could evade these detection systems entirely.

Testing across three representative environments showed both AI models were highly effective at system mapping and correctly flagged 63%-96% of sensitive files depending on environment type. The AI-generated scripts proved cross-platform compatible, operating on (desktop/server) Windows, Linux, and (embedded) Raspberry Pi systems without modification.

The economic implications reveal how AI could reshape ransomware operations. Traditional campaigns require skilled development teams, custom malware creation, and substantial infrastructure investments. The prototype consumed approximately 23,000 AI tokens per complete attack execution, equivalent to roughly $0.70 using commercial API services running flagship models. Open-source AI models eliminate these costs entirely. This cost reduction could enable less sophisticated actors to conduct advanced campaigns previously requiring specialized technical skills. The system's ability to generate personalized extortion messages referencing discovered files could increase psychological pressure on victims compared to generic ransom demands.

The researchers conducted their work under institutional ethical guidelines within controlled laboratory environments. The published paper provides critical technical details that can help the broader cybersecurity community understand this emerging threat model and develop stronger defenses. The researchers recommend monitoring sensitive file access patterns, controlling outbound AI service connections, and developing detection capabilities specifically designed for AI-generated attack behaviors.
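Of the mitigations the researchers list, controlling outbound AI service connections is the most mechanical to prototype. Below is a minimal sketch of that idea, assuming a plain-text proxy or DNS log with one destination hostname per line; the hostname set and default log path are illustrative placeholders rather than a vetted detection rule.

```python
# Minimal sketch: flag log entries that show egress to well-known public LLM
# API endpoints. Assumes a newline-delimited proxy/DNS log; the hostname list
# below is illustrative and far from exhaustive.
import sys

LLM_API_HOSTS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
    "api.mistral.ai",
}

def flag_llm_egress(log_path: str) -> list[str]:
    """Return log lines that mention a known LLM API hostname."""
    hits = []
    with open(log_path, encoding="utf-8", errors="replace") as log:
        for line in log:
            if any(host in line for host in LLM_API_HOSTS):
                hits.append(line.rstrip())
    return hits

if __name__ == "__main__":
    log_file = sys.argv[1] if len(sys.argv) > 1 else "proxy.log"
    for hit in flag_llm_egress(log_file):
        print("possible LLM egress:", hit)
```

On hosts that have no legitimate reason to call hosted models, any hit is worth investigating; allow-listing approved AI endpoints at the egress proxy achieves the same effect more robustly.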
[5]
Large Language Models Can Execute Complete Ransomware Attacks Autonomously, NYU Tandon Research Shows | Newswise
Newswise -- Criminals can use artificial intelligence, specifically large language models, to autonomously carry out ransomware attacks that steal personal files and demand payment, handling every step from breaking into computer systems to writing threatening messages to victims, according to new research from NYU Tandon School of Engineering. The study serves as an early warning to help defenders prepare countermeasures before bad actors adopt these AI-powered techniques.

A simulated malicious AI system developed by the Tandon team carried out all four phases of ransomware attacks -- mapping systems, identifying valuable files, stealing or encrypting data, and generating ransom notes -- across personal computers, enterprise servers, and industrial control systems. This system, which the researchers call "Ransomware 3.0," became widely known recently as "PromptLock," a name chosen by cybersecurity firm ESET when experts there discovered it on VirusTotal, an online platform where security researchers test whether files can be detected as malicious. The Tandon researchers had uploaded their prototype to VirusTotal during testing procedures, and the files there appeared as functional ransomware code with no indication of their academic origin. ESET initially believed they had found the first AI-powered ransomware being developed by malicious actors. While it is the first to be AI-powered, the ransomware prototype is a proof-of-concept that is non-functional outside of the contained lab environment.

"The cybersecurity community's immediate concern when our prototype was discovered shows how seriously we must take AI-enabled threats," said Md Raz, a doctoral candidate in the Electrical and Computer Engineering Department who is the lead author on the Ransomware 3.0 paper the team published publicly. "While the initial alarm was based on an erroneous belief that our prototype was in-the-wild ransomware and not laboratory proof-of-concept research, it demonstrates that these systems are sophisticated enough to deceive security experts into thinking they're real malware from attack groups."

The research methodology involved embedding written instructions within computer programs rather than traditional pre-written attack code. When activated, the malware contacts AI language models to generate Lua scripts customized for each victim's specific computer setup, using open-source models that lack the safety restrictions of commercial AI services. Each execution produces unique attack code despite identical starting prompts, creating a major challenge for cybersecurity defenses. Traditional security software relies on detecting known malware signatures or behavioral patterns, but AI-generated attacks produce variable code and execution behaviors that could evade these detection systems entirely.

Testing across three representative environments showed both AI models were highly effective at system mapping and correctly flagged 63-96% of sensitive files depending on environment type. The AI-generated scripts proved cross-platform compatible, operating on (desktop/server) Windows, Linux, and (embedded) Raspberry Pi systems without modification.

The economic implications reveal how AI could reshape ransomware operations. Traditional campaigns require skilled development teams, custom malware creation, and substantial infrastructure investments. The prototype consumed approximately 23,000 AI tokens per complete attack execution, equivalent to roughly $0.70 using commercial API services running flagship models. Open-source AI models eliminate these costs entirely. This cost reduction could enable less sophisticated actors to conduct advanced campaigns previously requiring specialized technical skills. The system's ability to generate personalized extortion messages referencing discovered files could increase psychological pressure on victims compared to generic ransom demands.

The researchers conducted their work under institutional ethical guidelines within controlled laboratory environments. The published paper provides critical technical details that can help the broader cybersecurity community understand this emerging threat model and develop stronger defenses. The researchers recommend monitoring sensitive file access patterns, controlling outbound AI service connections, and developing detection capabilities specifically designed for AI-generated attack behaviors.

The paper's senior authors are Ramesh Karri -- ECE Professor and department chair, and faculty member of the Center for Advanced Technology in Telecommunications (CATT) and NYU Center for Cybersecurity -- and Farshad Khorrami -- ECE Professor and CATT faculty member. In addition to lead author Raz, the other authors include ECE Ph.D. candidate Meet Udeshi, ECE Postdoctoral Scholar Venkata Sai Charan Putrevu, and ECE Senior Research Scientist Prashanth Krishnamurthy. The work was supported by grants from the Department of Energy, National Science Foundation, and from the State of New York via Empire State Development's Division of Science, Technology and Innovation.
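The other recommendation above, monitoring sensitive file access patterns, can be approximated with a simple behavioural heuristic that flags bursts of file modifications, a classic bulk-encryption signal. The sketch below is a minimal illustration using the third-party watchdog package; the path, window, and threshold are illustrative defaults, not values taken from the paper.

```python
# Minimal sketch of the "monitor sensitive file access patterns" suggestion:
# alert when an unusually large number of distinct files under a watched
# directory are modified within a short window. Thresholds are illustrative.
import time
from collections import deque

from watchdog.events import FileSystemEventHandler
from watchdog.observers import Observer

WATCH_PATH = "/home"      # illustrative
WINDOW_SECONDS = 10
MAX_DISTINCT_FILES = 100  # illustrative threshold

class BulkWriteDetector(FileSystemEventHandler):
    def __init__(self):
        super().__init__()
        self.events = deque()  # (timestamp, path) pairs within the window

    def on_modified(self, event):
        if event.is_directory:
            return
        now = time.time()
        self.events.append((now, event.src_path))
        # Drop events that have aged out of the sliding window.
        while self.events and now - self.events[0][0] > WINDOW_SECONDS:
            self.events.popleft()
        distinct = {path for _, path in self.events}
        if len(distinct) > MAX_DISTINCT_FILES:
            print(f"ALERT: {len(distinct)} files modified in "
                  f"{WINDOW_SECONDS}s under {WATCH_PATH}")

if __name__ == "__main__":
    observer = Observer()
    observer.schedule(BulkWriteDetector(), WATCH_PATH, recursive=True)
    observer.start()
    try:
        while True:
            time.sleep(1)
    finally:
        observer.stop()
        observer.join()
```

A real deployment would exempt backup and indexing processes and tune the threshold per host to keep false positives manageable.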
[6]
What Does AI-Assisted Hacking Mean For Cybersecurity?
A recent threat intelligence report from Anthropic revealed the emergence of "vibe-hacking", where cybercriminals used AI coding agents like Claude Code to launch cyber attacks against at least 17 international targets. Vibe-hacking is a play on "vibe-coding", which refers to the use of generative AI programmes to write code for applications, instead of human programmers writing each line of code themselves. "The operation demonstrates a concerning evolution in AI-assisted cybercrime, where AI serves as both a technical consultant and active operator, enabling attacks that would be more difficult and time-consuming for individual actors to execute manually," said the report.

The hacks involved the theft of sensitive personal data, with the hackers threatening to release it unless a ransom is paid. According to the report, the threat actor provided Claude Code with their preferred operational TTPs (Tactics, Techniques, and Procedures) in their CLAUDE.md file, which the programme uses as a guide to respond to prompts. The hackers used Claude Code to make both tactical and strategic decisions like determining how best to penetrate networks, which data to exfiltrate and how to craft psychologically targeted extortion demands. The attack compromised numerous personal records like healthcare data, financial information, government credentials, and other sensitive information, with some ransom demands of over $500,000.

The first step was to use Claude Code for automated reconnaissance. The attackers used the AI agent to create scanning frameworks that could detect vulnerable systems across thousands of VPN endpoints. The AI then provided real-time assistance during network penetration operations, guiding them in identifying critical systems, including domain controllers and SQL (Structured Query Language) servers. The AI-powered tools extracted multiple credential sets, facilitating unauthorised access in addition to comprehensive network enumeration.

The attackers also employed AI to create custom malware with evasion capabilities, designed to bypass Windows Defender detection. The AI-generated malware included obfuscated versions of the Chisel tunneling tool and novel TCP proxy code. When initial evasion attempts failed, the AI provided alternative techniques, such as string encryption and anti-debugging code, to disguise malicious executables as legitimate Microsoft tools.

In the next phase, AI facilitated the systematic extraction and analysis of sensitive data from various organisations, including defence contractors, healthcare providers, and financial institutions. The stolen data encompassed social security numbers, bank account details, patient information, and documents controlled by the US State Department. AI organised the data for monetisation purposes, extracting thousands of individual records.

Finally, the attackers leveraged AI to craft customised ransom notes based on the exfiltrated data's analysis. These notes included victim-specific details, such as financial figures and employee counts, along with tailored threats referencing industry-specific regulations. The attackers demanded payments ranging from $75,000 to $500,000 in Bitcoin, with AI-generated "profit plans" outlining multiple monetisation options, including organisational blackmail and targeted extortion of individuals.

According to Anthropic, the case represented a cybersecurity scenario where: "These operations suggest a need for new frameworks for evaluating cyber threats that account for AI enablement.
Traditional assumptions about the relationship between actor sophistication and attack complexity no longer hold when AI can provide instant expertise," said the report. Anthropic responded by banning the accounts associated with this operation and began developing a tailored classifier specifically for this type of activity and a new detection method to catch similar behaviour earlier. The company also incorporated this case into its broader set of controls, improving its detection methods and safeguards.

AI is making it easier for cybercriminals to carry out attacks by compensating for lacking technical ability, said Saikat Datta, the CEO of DeepStrat, a consultancy firm working in the cybersecurity and risk management sectors. "Earlier, you needed a decent set of tools and a decent set of skills to carry out exploits," said Datta. "Now, even if you don't have a decent set of skills, AI can help you plug in that gap," he added. However, Datta pointed out that just like with vibe-coding, you still need a large amount of human guidance for vibe-hacking to work.

Speaking about the potential impact of AI-assisted cybercrime on businesses, Datta stated that most Indian enterprises still lag in their cybersecurity practices, doing only the bare minimum to comply with regulations. "Oftentimes, we see that official policies are either outdated or not followed in practice," he said. "Additionally, businesses often don't do their due diligence when working with third party vendors, who may not have adequate security practices," Datta added.

Notably, black hat hackers - people who perform hacking activities to commit crimes - aren't the only ones using AI to break into secure networks. Sai Krishna Kothapalli, Founder of AI-assisted cybersecurity startup NoHackLabs, explained how his team has built AI agents that are able to detect vulnerabilities at a rate much faster than humans. "If you want to secure anything, you need to know what the loopholes are. Until now, in my old company, we used to manually find bugs. People like me are security researchers. We do the hacking, we understand the logic of hackers, and then we try to break it. We do that manually. There are restrictions to that. It's not scalable to tens of thousands of companies and all. But with AI agents, we can scale these things," he said. Restrictions on AI agents could prevent security researchers like Kothapalli from building helpful tools, but are needed to prevent genuine cyberattacks. He believes that while some things, like creating malware, should always be restricted, there is a need for models to support research and learning within the security field.

Anthropic's investigation also uncovered a UK-based ransomware development commercial operation, tracked as GTG-5004, which leveraged AI to create and sell advanced ransomware through a Ransomware-as-a-Service (RaaS) model. Active since at least January 2025, the threat actor operated across dark web forums like Dread, CryptBB, and Nulled. This operator, who lacked technical expertise, relied entirely on AI -- specifically Claude -- to develop malware that could evade security systems. They marketed multiple ransomware variants with sophisticated capabilities, including ChaCha20 encryption, anti-EDR techniques, and Windows internals exploitation. Despite limited technical knowledge, the actor successfully sold ransomware packages priced between $400 and $1,200.
The actor maintained anonymity through a .onion site and a ProtonMail contact, marketing their tools while falsely claiming they were for educational use. Analysis of the malware revealed a robust system of encryption, anti-analysis, and anti-recovery mechanisms. The operation's reliance on AI eliminated traditional barriers to ransomware development, allowing a non-technical criminal to launch sophisticated cybercrime ventures. According to Anthropic, this shift is a "democratisation" of cybercrime, where actors with minimal technical expertise can launch wide-ranging ransomware attacks.

Anthropic also uncovered a number of other AI-assisted cybercriminals, like a sophisticated Chinese threat actor who exploited Claude to enhance cyber operations targeting critical Vietnamese infrastructure. Over a nine-month campaign, the actor integrated Claude into nearly all phases of their attack lifecycle, using it to develop custom reconnaissance tools, file upload fuzzing frameworks, and optimise credential harvesting operations. With Claude, the actor was able to compromise major telecommunications providers, government databases, and agricultural management systems in Vietnam, likely for intelligence gathering with implications for Vietnamese national security and economic interests. Their methods align with Chinese APT operations, and their targeting patterns suggest a strategic interest in Southeast Asia. Elsewhere, Anthropic thwarted a North Korean malware distribution campaign by automatically detecting and banning accounts linked to it. It also identified a Russian-speaking developer using Claude to create malware with advanced evasion techniques.

India has faced a number of cyberattacks in recent years, with Indian companies often lagging behind in adequate cybersecurity protections. India faced over 10 million intrusion attempts within days of the April 22 Pahalgam terrorist attack, according to a report from the Computer Emergency Response Team-Maharashtra (MH-CERT). These attacks consisted of a mix of DDoS (Distributed Denial of Service) floods, website defacements, phishing campaigns, and exploit attempts targeting the public, critical infrastructure, and defence portals.

Notably, the Ministry of Electronics and Information Technology (MeitY) released India's first Digital Threat Report on April 7 this year, which highlighted that criminals were increasingly using AI-powered tools to exploit vulnerabilities in the cybersecurity systems of financial firms. The report mentioned large language models (LLMs) like FraudGPT and WormGPT, which allowed even less skilled operators to craft convincing phishing emails, generate malware, and exploit other vulnerabilities.
NYU researchers create an AI-powered ransomware prototype, initially mistaken for a real threat, highlighting potential risks and challenges in cybersecurity.
Researchers at New York University's Tandon School of Engineering have developed a proof-of-concept for AI-powered ransomware, dubbed "Ransomware 3.0," which has sent ripples through the cybersecurity community [1][2][3]. This experimental malware, initially mistaken for a real-world threat, demonstrates the potential for artificial intelligence to autonomously execute complete ransomware attacks.

Source: Tom's Hardware
The research project gained unexpected attention when cybersecurity firm ESET discovered the malware on VirusTotal, a platform used for testing malicious files. ESET initially reported it as "PromptLock," believing it to be the first AI-powered ransomware found in the wild [1][3]. This misunderstanding highlights the sophistication of the NYU team's work, as it was convincing enough to be mistaken for a genuine threat by security experts.

The AI system developed by the NYU team can perform all four phases of a ransomware attack: mapping systems, identifying valuable files, stealing or encrypting data, and generating ransom notes. These operations were successfully tested across personal computers, enterprise servers, and industrial control systems [4][5]. The malware uses large language models to generate customized Lua scripts for each target, making it highly adaptable and potentially more difficult to detect than traditional ransomware [2].

Source: MediaNama
One of the most concerning aspects of this research is its potential economic impact on cybercrime. Traditional ransomware campaigns require significant resources, including skilled developers and custom malware creation. However, the NYU prototype demonstrated that a complete attack execution could cost as little as $0.70 using commercial API services, with open-source AI models potentially eliminating costs entirely [4][5].

The AI-generated nature of this ransomware poses significant challenges for cybersecurity defenses. Traditional security software relies on detecting known malware signatures or behavioral patterns. However, AI-generated attacks produce variable code and execution behaviors that could evade these detection systems [4]. The researchers found that their AI models were highly effective at system mapping, correctly identifying 63-96% of sensitive files depending on the environment [5].

Source: The Register
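To make the signature-evasion point above concrete: hash-based signatures key on exact bytes, so two functionally equivalent AI-generated scripts that differ by as little as whitespace already produce unrelated digests. The snippet below is purely illustrative; the Lua-like strings are invented stand-ins, not code from the NYU prototype.

```python
# Why byte-level signatures struggle with code that is regenerated on every
# run: trivially different but equivalent scripts hash to unrelated values.
import hashlib

script_a = b'for f in list_files("/home") do inspect(f) end'
script_b = b'for f in list_files("/home") do\n  inspect(f)\nend'

print(hashlib.sha256(script_a).hexdigest())
print(hashlib.sha256(script_b).hexdigest())  # completely different digest
```

This is why the defensive advice in the sources leans on behavioural signals, such as file-access patterns and outbound AI traffic, rather than static signatures.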
The NYU team conducted their research under strict ethical guidelines within controlled laboratory environments. Their prototype is non-functional outside of the lab setting [3][4]. By publishing their findings, the researchers aim to provide critical technical details to help the cybersecurity community understand and prepare for this emerging threat model [5].

To counter potential AI-powered ransomware threats, the researchers recommend monitoring sensitive file access patterns, controlling outbound AI service connections, and developing detection capabilities designed specifically for AI-generated attack behaviors [4][5].
As AI continues to evolve, it's clear that both cybersecurity professionals and policymakers will need to stay ahead of potential misuse by malicious actors. The NYU research serves as a crucial early warning, allowing the security community to develop countermeasures before these AI-powered techniques fall into the wrong hands.
Summarized by Navi