AI Videos of Child Sexual Abuse Surge to Record Highs as Advanced Generators Fuel Crisis

Reviewed by Nidhi Govil

AI-generated videos depicting child sexual abuse exploded from 13 cases in 2024 to 3,440 in 2025, a 26,362% increase. The Internet Watch Foundation reports that more than 60% fall into the most severe category, which includes torture and penetration. Advanced video generators from OpenAI and Google, along with open-source models, have made it easier for abusers to create photorealistic content at scale, even as tech companies implement safeguards.

AI Videos Drive Unprecedented Surge in Child Sexual Abuse Material

The volume of child sexual abuse material online reached historic levels in 2025, with AI-generated videos emerging as a devastating new vector for exploitation. The Internet Watch Foundation, a U.K.-based organization working globally to identify and remove such content, investigated a record 312,030 reports of confirmed CSAM last year, a 7% increase over 2024's previous record [1]. But the increase in CSAM driven by artificial intelligence represents a far more alarming trend: the organization detected 3,440 AI-generated videos of child sexual abuse in 2025, compared with just 13 the year before, a staggering 26,362% surge [2].
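The headline percentage follows directly from the two detection counts reported by the IWF:

$$\frac{3{,}440 - 13}{13} \times 100\% \approx 26{,}362\%$$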

Contrary to what some might assume, this explosion in AI-generated child sexual abuse material does not mean that no children were harmed. Real children remain victimized through these technologies, either because AI models were trained on existing abuse imagery or because the tools manipulated authentic photos and videos of minors [1]. The severity of the content is particularly disturbing: nearly two-thirds of the AI videos discovered fell into Category A, the most severe classification, which includes penetration, sexual torture, and bestiality. Another 30% were Category B, depicting nonpenetrative sexual acts [1].

Advanced AI Video Generators Lower Barriers to Exploitation

The proliferation of sophisticated video generators has fundamentally changed the landscape of AI-enabled child exploitation. Last year saw major releases including OpenAI's Sora 2 model, Google's Veo 3, and xAI's Grok Imagine, alongside numerous advanced open-source models [1]. These open-source models typically offer free access with minimal or nonexistent safeguards, creating dangerous entry points for abuse. "When AI videos were not lifelike or sophisticated, offenders were not bothering to make them in any numbers," Josh Thomas, an Internet Watch Foundation spokesperson, explained. That calculation has changed dramatically as the technology has improved [1].

The IWF warns that rapidly developing AI tools now enable people with minimal technical knowledge to create harmful videos at scale [2]. Kerry Smith, the IWF's chief executive, described the grim reality: "criminals essentially can have their own child sexual abuse machines to make whatever they want to see" [1]. This capability extends beyond organized networks: abusers can now generate and store sexually explicit images on personal computers without ever exposing themselves to law enforcement by downloading material online.

Safeguards Prove Insufficient Despite Industry Efforts

OpenAI, Google, Anthropic, and several other major AI labs have joined initiatives to prevent AI-enabled child exploitation, with all claiming to have protective measures in place [1]. Yet the data reveals these safeguards remain inadequate. In the first half of 2025 alone, OpenAI reported more than 75,000 depictions of child sexual abuse or child endangerment on its platforms to the National Center for Missing & Exploited Children, more than double the reports from the second half of 2024 [1].

The problem became starkly visible when users exploited Grok, Elon Musk's AI model, to generate what were likely hundreds of thousands of nonconsensual sexually explicit images, primarily of women and children, publicly on X. Copyleaks, a plagiarism and AI-content-detection company, estimated that the chatbot was creating roughly one nonconsensual sexualized image per minute in December [2]. While Musk claimed he was "not aware of any naked underage images generated by Grok" and blamed users for making illegal requests, his employees quietly rolled back aspects of the tool [1]. The incident prompted California Attorney General Rob Bonta to open an investigation into xAI and Grok, while the European Union announced it would monitor X's preventive measures [2].

Growing Crisis Extends Beyond Dark Web Forums

The scope of AI-generated child sexual abuse material extends far beyond what authorities can detect. IWF analysts found that over just one month in early 2024, users uploaded more than 3,000 AI-generated images of child sexual abuse to a single dark-web forum [1]. The digital-safety nonprofit Thorn reported that among more than 700 U.S. teenagers surveyed in early 2025, 12% knew someone victimized by deepfake nudes [1]. Social media, encrypted messaging, and dark-web forums have fueled a steady rise in child sexual abuse for years, but generative AI has dramatically exacerbated the crisis.

Law enforcement faces mounting challenges, and another record will very likely be set in 2026. The AI videos identified by the IWF represent only detected cases; countless more likely exist on personal computers, created and stored in complete secrecy. U.S. federal law bars the production and distribution of CSAM, a term the Justice Department describes as a broader phrase for child pornography [2]. As AI technology continues advancing at breakneck speed, the gap between capability and accountability widens, leaving vulnerable children increasingly exposed to exploitation at a scale previously unimaginable.
