UK Introduces Groundbreaking Law to Test AI Models for Child Abuse Content Creation

Reviewed by Nidhi Govil


The UK government announces new legislation allowing designated organizations to test AI models for their ability to create child sexual abuse material, aiming to tackle the problem at its source as reports of AI-generated abuse content more than double.


UK Pioneers Proactive AI Safety Testing

The United Kingdom has announced groundbreaking legislation that will allow designated organizations to test artificial intelligence models for their potential to create child sexual abuse material (CSAM), marking the first proactive approach of its kind globally. The new law, introduced as amendments to the Crime and Policing Bill, represents a significant shift from reactive to preventative measures in combating AI-generated abuse content [1].

Under the proposed changes, AI developers and child protection organizations will be granted legal permission to examine AI models - the underlying technology powering chatbots like ChatGPT and video generators such as Google's Veo 3 - to ensure they contain adequate safeguards against creating illegal content. Technology Secretary Liz Kendall emphasized that this approach ensures "child safety is designed into AI systems, not bolted on as an afterthought" [2].

Alarming Rise in AI-Generated Abuse Material

The legislation comes amid disturbing statistics from the Internet Watch Foundation (IWF), which revealed that reports of AI-generated child sexual abuse material have more than doubled in the past year, rising from 199 cases in 2024 to 426 in 2025. The severity of this content has also intensified, with category A material - the most serious form, involving penetrative sexual activity - increasing from 2,621 to 3,086 items [2].

Particularly concerning is the demographic targeting revealed in the data: girls comprise 94% of victims in illegal AI images, while depictions of the youngest victims - newborns to two-year-olds - surged from five cases in 2024 to 92 in 2025. Kerry Smith, chief executive of the IWF, described how "AI tools have made it so survivors can be victimised all over again with just a few clicks, giving criminals the ability to make potentially limitless amounts of sophisticated, photorealistic child sexual abuse material" [3].

Addressing Legal Barriers to Prevention

The new legislation addresses a critical legal paradox that has hindered proactive safety measures. Because creating and possessing CSAM is illegal, AI developers and safety organizations previously could not test their systems for this vulnerability without risking breaking the law themselves. This forced authorities into a reactive posture, addressing AI-generated abuse material only after it appeared online [2].

Kanishka Narayan, the minister for AI and online safety, explained that the move is "ultimately about stopping abuse before it happens," allowing "experts, under strict conditions, to spot the risk in AI models early." The government will establish a group of experts to ensure testing is conducted "safely and securely" by designated bodies [3].

Broader Impact on Child Welfare

The legislation extends beyond CSAM to address other forms of AI-generated harmful content, including extreme pornography and non-consensual intimate images. Childline has reported a fourfold increase in counselling sessions mentioning AI-related harms, with 367 sessions held between April and September 2025 compared with the same period the previous year. These harms include AI being used to rate children's appearance, chatbots discouraging children from seeking help from trusted adults, and online blackmail using AI-generated images [2].

Calls for Mandatory Implementation

While child safety advocates welcome the new powers, some organizations are pushing for stronger measures. The NSPCC has called for mandatory testing requirements rather than optional provisions. Rani Govender, policy manager for child safety online at the charity, stated: "Government must ensure that there is a mandatory duty for AI developers to use this provision so that safeguarding against child sexual abuse is an essential part of product design" [1].
