AI child abuse videos surge 260-fold as watchdog warns of escalating criminal misuse


The Internet Watch Foundation identified 8,029 AI-generated child sexual abuse images in 2025, a 14% increase on the previous year. AI-generated videos surged 260-fold, and 65% of them were classified in the most severe category under UK law, compared with 43% for non-AI content. The findings intensify pressure on governments to update online safety laws and regulate AI companies.

AI Child Abuse Material Reaches Alarming Levels

The Internet Watch Foundation has documented a disturbing escalation in AI-generated child sexual abuse material, identifying 8,029 realistic AI depictions in 2025, a 14% increase on the previous year [1]. The surge in AI-generated videos was even more dramatic: a 260-fold increase on 2024, when the IWF verified just 13 such videos [1]. This sharp rise demonstrates how advances in generative AI are enabling offenders to produce large volumes of increasingly violent content with minimal technical skill, transforming the landscape of child exploitation.

Criminal Misuse of AI Creates More Extreme Content

The data reveals a troubling pattern in the severity of AI-generated material. Of the 3,443 AI-generated videos verified by IWF analysts, 65% were classified as category A material, the most severe legal category under UK law, which covers offences such as rape, sexual torture and bestiality [1]. In stark contrast, only 43% of non-AI criminal videos fell into this category, suggesting that AI technology is being deliberately exploited to create more violent content than traditional methods [2]. Kerry Smith, the IWF's chief executive, emphasized the gravity of the situation: "While AI can offer much in a positive sense, it is horrifying to consider that its power can be used to devastate a child's life. This material is dangerous" [1].

Dark Web Discussions Reveal Evolving Tactics

Analysts monitoring dark web discussions uncovered conversations among offenders that reveal sophisticated plans for exploiting AI capabilities, including using hidden cameras to source footage of real children that could then be transformed into AI-generated abuse videos [1]. One IWF analyst noted that innovations in AI technology were "regarded with delight" by users of CSAM, with conversations centering on increasingly realistic outputs and the ability to add audio to video or to manipulate imagery of real children known to offenders [2]. Offenders also appeared to be competing to create more extreme AI-generated scenarios and anticipating the next generation of AI technology, such as the autonomous agents currently used in enterprise settings [1].

Pressure Mounts on Unregulated AI Companies

The crisis intensifies pressure on governments and regulators to update online safety laws and impose stricter obligations on AI companies, whose products remain largely unregulated and widely accessible [1]. The issue gained prominence in January when the Grok chatbot generated sexualized images of children that were shared on the social media platform X, prompting threats of fines and bans from regulators in the EU, the UK and France [1]. In response, the UK government announced it would seek powers to "move fast" to close a legal loophole, bringing AI chatbots under the country's online safety laws alongside social media platforms such as Instagram and TikTok [1]. Smith emphasized the need for AI companies to adopt a safety-by-design approach, building guardrails against misuse into the product development stage [1].

UK Government Takes Action on AI Safety

In the UK, tech companies and child protection agencies are being given the power to test whether AI tools can produce child sexual abuse material, in a move aimed at stopping abuse before it happens [2]. Under the change, the government will permit designated AI companies and child safety organizations to examine generative AI models and ensure they have safeguards to prevent them from creating such material [2]. Last year, the government announced a ban on possessing, creating or distributing AI models designed to generate child sexual abuse material [2]. The IWF also published polling showing that eight out of 10 UK adults want the government to introduce legislation ensuring AI systems are developed with safety as a priority and "future-proofed from causing harm" [2].
