Cybercriminals struggle with AI tools as guardrails prove effective, new study reveals


A comprehensive study analyzing 97,895 underground forum threads found that cybercriminals are largely failing to weaponize AI for sophisticated hacking. Instead, AI and cybercrime intersect mainly in low-skill activities like SEO spam and romance scams. The research suggests AI guardrails are working better than expected.

Cybercriminals Face Unexpected Hurdles with AI Adoption

For three years, warnings about generative AI in the cybercrime underground have dominated headlines, with predictions of supercharged hackers exploiting tools like ChatGPT. A new academic study analyzing actual underground forum activity tells a strikingly different story. Researchers from the University of Cambridge and the University of Edinburgh examined 97,895 forum threads from the Cambridge Cybercrime Centre's CrimeBB dataset, all posted after ChatGPT launched in November 2022, to understand how cybercriminals are actually using AI tools [1][2].

Source: Euronews

The findings challenge widespread fears about AI and cybercrime. Only 1.9% of analyzed threads involved someone using AI coding tools, while 97.3% were classified as unrelated to AI at all [1]. The research team manually reviewed more than 3,200 threads and found that most discussions about AI misuse centered on complaints that the tools didn't work as promised.

Dark AI Products Turn Out to Be Marketing Exercises

Remember WormGPT and FraudGPT, the supposedly malicious chatbots that generated alarm in 2023? The study found these tools were largely ineffective. Most forum posts about these products consisted of people requesting free access, speculation, and frustration that the tools failed to deliver. One developer of a popular Dark AI service eventually confessed to forum members that "at the end of the day, [CybercrimeAI] is nothing more than an unrestricted ChatGPT," before shutting down the project [1].

Source: Decrypt

Cybercriminals attempting to bypass AI safety features face constant setbacks. By late 2024, jailbreak methods for mainstream models had become disposable, with most stopping working within a week or less [1]. While open-source models can be jailbroken indefinitely, they require significant resources to run and tend to be slower and less capable [2].

AI Guardrails Prove More Effective Than Expected

The research delivers a counterintuitive conclusion: AI guardrails are proving both useful and effective against criminal exploitation. Researchers found "no significant evidence" that criminals had successfully used AI to improve their hacking [2]. The cybercriminals who do use large language models (LLMs) rely on mainstream products from OpenAI and Anthropic rather than specialized criminal tools.

AI coding assistants function the same way in criminal forums as they do for legitimate developers: as autocomplete and reference tools for already-skilled coders. Low-skill actors continue using pre-made scripts because they work better. As one forum user noted, "You've gotta first learn the ropes of programming by yourself before you can use AI and ACTUALLY benefit from it" [2].

Low-Skill Cybercrime Activities Dominate AI Usage

Where generative AI in the cybercrime underground does appear, it's concentrated in high-volume, low-margin operations. SEO spam operations use LLMs to mass-produce blog content chasing declining ad revenue. Romance scam and eWhoring operators incorporate voice cloning and image generation. One disturbing market involved nude image generation services, with operators advertising: "I'm able to make any girl nude with an AI... 1 Picture = $1, 10 Pictures = $8, 50 Pictures = $40, 90 Pictures $75" [1].

Social media bot creation and phishing represent other areas where AI shows limited impact. None of this constitutes sophisticated cybercrime; it's the same low-margin hustle that powered the spam industry for decades, now running on slightly better tools [1].

Hackers Express Concerns About AI Dependency

Even within the cybercrime ecosystem, hackers voice skepticism about AI-generated malware code. "AI-assisted coding is a double-edged sword. It will speed up development but also amplifies risks such as insecure code and supply chain vulnerabilities," one user warned [1]. Another noted long-term skill degradation: "It's clear now that using AI for code causes a very fast negative degradation of your skills" [1].

Labour Market Disruption May Pose Greater Threat

The researchers suggest the biggest way AI might disrupt the cybercrime ecosystem isn't by making criminals more capable but through labour market disruption, as layoffs push developers from legitimate tech into underground work. This indirect effect could prove more significant than direct AI misuse by cybercriminals as anxiety over AI-driven job displacement grows [1]. The study's findings stand in stark contrast to alarmist forecasts from Europol and cybersecurity vendors, suggesting the real story of AI adoption in cybercrime is far less dramatic than predicted.
