Sources
[1]
Cybercrime Might Be the One Job AI Isn't Taking, Study Suggests - Decrypt
The biggest measurable AI-driven crime is not hacking. It is mass-produced SEO spam, romance scams, and AI-generated nudes sold for a dollar each. For three years, cybersecurity firms, governments, and AI labs have warned that generative AI would unleash a new generation of supercharged hackers. According to a new academic paper that actually went and looked, the supercharged hackers are mostly using ChatGPT to write spam and generate nudes for fun.

The study, titled Stand-Alone Complex or Vibercrime?, was published on arXiv by researchers from Cambridge and other universities and aims to understand how the cybercrime underground is actually adopting AI, not how cybersecurity vendors say it is. "We present here one of the first attempts at a mixed-methods empirical study of early patterns of GenAI adoption in the cybercrime underground," the researchers wrote.

The team analyzed 97,895 forum threads posted after ChatGPT launched in November 2022, drawn from the Cambridge Cybercrime Centre's CrimeBB dataset of underground and dark web forums. They ran topic models, manually read more than 3,200 threads, and ethnographically immersed themselves in the scene. The conclusion is unflattering for the AI doom community: 97.3% of threads in the sample were classified as "other," meaning not actually about using AI for crime at all. Only 1.9% involved someone using vibe coding tools.

Remember WormGPT, FraudGPT, and the wave of supposedly malicious chatbots that flooded headlines in 2023? The forum data tells a different story. Most posts about "Dark AI" products, the researchers found, were people begging for free access, idle speculation, and complaints that the tools didn't actually work. One developer of a popular Dark AI service eventually admitted to forum members that the product was a marketing exercise. "At the end of the day, [CybercrimeAI] is nothing more than an unrestricted ChatGPT," the developer wrote, before the project shut down. "Anyone on the Internet can use a well-known jailbreak technique and achieve the same, if not better, results."

By late 2024, the researchers say, jailbreaks for mainstream models had become disposable; most stop working in a week or less. Open-source models can be jailbroken indefinitely, but they are slow, resource-heavy, and frozen in time. "Guardrails for AI systems are proving both useful and effective," the authors conclude, in what they themselves call a counterintuitive finding for a critical paper.

The paper directly addresses Anthropic's widely covered August 2025 report claiming Claude Code had been used to run a "vibe hacking" extortion campaign against 17 organizations. The Cambridge team's data simply does not show that pattern in the wider underground. In the forums they studied, AI coding assistants are being used the same way mainstream developers use them: as autocomplete and Stack Overflow replacements for already-skilled coders. Low-skill actors stick with pre-made scripts, because pre-made scripts work.

The researchers found that even hackers don't trust their vibe-coded hacking tools. "AI-assisted coding is a double-edged sword. It will speed up development but also amplifies risks such as insecure code and supply chain vulnerabilities," one user said in a forum monitored by the researchers.

Another warned about long-term skill loss: "It's clear now that using AI for code causes a very fast negative degradation of your skills," a hacker wrote. "If your goal is just to turn out SaaS scams and you don't care about code quality/security/performance it can be viable to vibe code. (Also seems viable for phishing)."

This stands in stark contrast to alarmist forecasts from Europol, which warned in 2025 that fully autonomous AI could one day control criminal networks. The disruption, when it shows up, is at the bottom of the food chain. SEO scammers are using LLMs to mass-produce blog spam to chase declining ad revenue. Romance fraudsters and eWhoring operators are bolting on voice cloning and image generation. Get-rich-quick hustlers are churning out AI-written eBooks to sell for $20 a pop.

The most disturbing market the researchers found involved nude image generation services. One operator advertised: "I'm able to make any girl nude with an AI... 1 Picture = $1, 10 Pictures = $8, 50 Pictures = $40, 90 Pictures $75." None of this is sophisticated cybercrime. It is the same low-margin, high-volume hustle that powered the spam industry for two decades, now running on slightly better tools.

The researchers' closing observation is the most pointed one. The biggest way AI ends up disrupting the cybercrime ecosystem, they suggest, may not be by making criminals more capable. It may be by pushing laid-off developers from legitimate tech into the underground looking for work. "In recent months anxiety over labour market disruption from these tools is increasing precipitously," the paper reads. "This may end up being the most important way in which generative AI tools disrupt the cybercrime ecosystem -- mass layoffs, economic downturn and a cool job market pushing legitimate, more skilled developers into the underground communities of get rich quick schemes, fraud, and cybercrime."
[2]
Cybercriminals gave AI a go -- and came away disappointed, study finds
New research from the University of Edinburgh found that hackers have had little success using AI tools in their work, either directly in their scams or in developing more effective tools.

Cybercriminals are having a hard time incorporating artificial intelligence (AI) into their work, a new analysis found. A new pre-print study from the University of Edinburgh analysed over 100 million forum posts from cybercriminals using the CrimeBB database, which scrapes data from underground forums. The data was analysed both manually and with a large language model (LLM).

While cybercriminals have expressed interest in learning how to use AI tools, the technology has not significantly changed their way of "working," the study found. "Many of the reviews and discussions describe [AI] tools as not particularly useful," the study reads. Researchers found "no significant evidence" that hackers had any success using AI to improve their hacking activity, either as a learning aid or in developing more effective tools.

AI coding assistants are mostly useful for those who are already skilled at coding, so AI models that offer coding help fail to give hackers any significant "bump" when trying to break into devices or find security workarounds, the study added. "You've gotta first learn the ropes of programming by yourself before you can use AI and ACTUALLY benefit from it," one post quoted by the study reads.

The main impact AI has had so far on less-than-legal online activity is in easy-to-automate areas, such as social media bot creation, some romance scams, and search engine optimisation (SEO) fraud, or the creation of fake websites that get pushed up in search result rankings to make money from advertising. Reviews suggest that even the most experienced hackers use chatbots to answer coding questions or generate "cheatsheets" to help them code.

The AI that has actually been adopted falls under "mainstream and legitimate products," such as Anthropic's Claude or OpenAI's Codex, rather than cybercrime-specific AI models such as WormGPT, which hackers designed to produce malware code or phishing emails. Many of the posts analysed by the study are from cybercriminals asking for techniques to bypass the safety restrictions on those mainstream models, but they seem to have a hard time getting the AI systems to override their safety settings. Instead, cybercriminals are forced to pivot to older, lower-quality open-source AI models that are easier to jailbreak. These tend to be less useful and "require significant resources," the researchers found. Their study suggests that the guardrails put in place by AI companies are working -- so far.
A comprehensive study analyzing 97,895 underground forum threads found that cybercriminals are largely failing to weaponize AI for sophisticated hacking. Instead, AI and cybercrime intersect mainly in low-skill activities like SEO spam and romance scams. The research suggests AI guardrails are working better than expected.
For three years, warnings about generative AI in the cybercrime underground have dominated headlines, with predictions of supercharged hackers exploiting tools like ChatGPT. A new academic study analyzing actual underground forum activity tells a strikingly different story. Researchers from Cambridge and the University of Edinburgh examined 97,895 forum threads from the Cambridge Cybercrime Centre's CrimeBB dataset, posted after ChatGPT launched in November 2022, to understand how cybercriminals are actually using AI tools [1][2].
The findings challenge widespread fears about AI and cybercrime. Only 1.9% of analyzed threads involved someone using AI coding tools, while 97.3% were classified as unrelated to AI adoption in cybercrime at all [1]. The research team manually reviewed more than 3,200 threads and found that most discussions about AI misuse by cybercriminals centered on complaints that the tools didn't work as promised.

Remember WormGPT and FraudGPT, the supposedly malicious chatbots that generated alarm in 2023? The study found these tools were largely ineffective. Most forum posts about these products consisted of people requesting free access, speculation, and frustration that the tools failed to deliver. One developer of a popular Dark AI service eventually confessed to forum members that "at the end of the day, [CybercrimeAI] is nothing more than an unrestricted ChatGPT," before shutting down the project [1].
Cybercriminals attempting to bypass AI safety features face constant setbacks. By late 2024, jailbreak methods for mainstream models had become disposable, with most ceasing to work within a week [1]. While open-source models can be jailbroken indefinitely, they require significant resources and tend to be slower and less capable [2].
The research delivers a counterintuitive conclusion: AI guardrails are proving both useful and effective against criminal exploitation. Researchers found "no significant evidence" that hackers succeeded in using AI to improve their hacking activity [2]. The cybercriminals who do use large language models (LLMs) rely on mainstream products from OpenAI and Anthropic rather than specialized criminal tools.

AI coding assistants function the same way in criminal forums as they do for legitimate developers: as autocomplete and reference tools for already-skilled coders. Low-skill actors continue using pre-made scripts because they work better. As one forum user noted, "You've gotta first learn the ropes of programming by yourself before you can use AI and ACTUALLY benefit from it" [2].

Where generative AI does appear in the cybercrime underground, it's concentrated in high-volume, low-margin operations. SEO spam operations use LLMs to mass-produce blog content chasing declining ad revenue. Romance scammers and eWhoring operators incorporate voice cloning and image generation. One disturbing market involved nude image generation services, with operators advertising: "I'm able to make any girl nude with an AI... 1 Picture = $1, 10 Pictures = $8, 50 Pictures = $40, 90 Pictures $75" [1].

Social media bot creation and phishing round out the easy-to-automate areas where AI has made inroads. None of this constitutes sophisticated cybercrime; it's the same low-margin hustle that powered the spam industry for decades, now running on slightly better tools [1].
Even within the cybercrime ecosystem, hackers voice skepticism about AI-generated malware code. "AI-assisted coding is a double-edged sword. It will speed up development but also amplifies risks such as insecure code and supply chain vulnerabilities," one user warned [1]. Another noted long-term skill degradation: "It's clear now that using AI for code causes a very fast negative degradation of your skills" [1].

The researchers suggest the biggest way AI might disrupt the cybercrime ecosystem isn't by making criminals more capable: it's through labour market disruption pushing laid-off developers from legitimate tech into underground work. This indirect effect could prove more significant than direct AI misuse by cybercriminals, as anxiety over job displacement from AI tools increases [1]. The study's findings stand in stark contrast to alarmist forecasts from Europol and cybersecurity vendors, suggesting the real story of AI adoption in cybercrime is far less dramatic than predicted.