3 Sources
[1]
UK seeks to curb AI child sex abuse imagery with tougher testing
"Government must ensure that there is a mandatory duty for AI developers to use this provision so that safeguarding against child sexual abuse is an essential part of product design." The government said its proposed changes to the law would also equip AI developers and charities to make sure AI models have adequate safeguards around extreme pornography and non-consensual intimate images. Child safety experts and organisations have frequently warned AI tools developed, in part, using huge volumes of wide-ranging online content are being used to create highly realistic abuse imagery of children or non-consenting adults. Some, including the IWF and child safety charity Thorn, have said these risk jeopardising efforts to police such material by making it difficult to identify whether such content is real or AI-generated. Researchers have suggested there is growing demand for these images online, particularly on the dark web, and that some are being created by children. Earlier this year, the Home Office said the UK would be the first country in the world to make it illegal to possess, create or distribute AI tools designed to create child sexual abuse material (CSAM), with a punishment of up to five years in prison. Ms Kendall said on Wednesday that "by empowering trusted organisations to scrutinise their AI models, we are ensuring child safety is designed into AI systems, not bolted on as an afterthought". "We will not allow technological advancement to outpace our ability to keep children safe," she said. Safeguarding minister Jess Phillips said the measures would also "mean legitimate AI tools cannot be manipulated into creating vile material and more children will be protected from predators as a result".
[2]
Tech companies and UK child safety agencies to test AI tools' ability to create abuse images
New law will allow technology to be examined and ensure tools have safeguards to stop creation of material

Tech companies and child protection agencies will be given the power to test whether artificial intelligence tools can produce child abuse images under a new UK law. The announcement was made as a safety watchdog revealed that reports of AI-generated child sexual abuse material (CSAM) have more than doubled in the past year, from 199 in 2024 to 426 in 2025.

Under the change, the government will give designated AI companies and child safety organisations permission to examine AI models - the underlying technology for chatbots such as ChatGPT and image generators such as Google's Veo 3 - and ensure they have safeguards to prevent them from creating images of child sexual abuse. Kanishka Narayan, the minister for AI and online safety, said the move was "ultimately about stopping abuse before it happens", adding: "Experts, under strict conditions, can now spot the risk in AI models early."

The changes have been introduced because it is illegal to create and possess CSAM, meaning that AI developers and others cannot create such images as part of a testing regime. Until now, the authorities have had to wait until AI-generated CSAM is uploaded online before dealing with it. This law is aimed at heading off that problem by helping to prevent the creation of those images at source. The changes are being introduced by the government as amendments to the crime and policing bill, legislation which is also introducing a ban on possessing, creating or distributing AI models designed to generate child sexual abuse material.

This week Narayan visited the London base of Childline, a helpline for children, and listened to a mock-up of a call to counsellors featuring a report of AI-based abuse. The call portrayed a teenager seeking help after he had been blackmailed by a sexualised deepfake of himself, constructed using AI. "When I hear about children experiencing blackmail online, it is a source of extreme anger in me and rightful anger amongst parents," he said.

The Internet Watch Foundation, which monitors CSAM online, said reports of AI-generated abuse material - such as a webpage that may contain multiple images - had more than doubled so far this year. Instances of category A material - the most serious form of abuse - rose from 2,621 images or videos to 3,086. Girls were overwhelmingly targeted, making up 94% of illegal AI images in 2025, while depictions of newborns to two-year-olds rose from five in 2024 to 92 in 2025.

Kerry Smith, the chief executive of the Internet Watch Foundation, said the law change could be "a vital step to make sure AI products are safe before they are released". "AI tools have made it so survivors can be victimised all over again with just a few clicks, giving criminals the ability to make potentially limitless amounts of sophisticated, photorealistic child sexual abuse material," she said. "Material which further commodifies victims' suffering, and makes children, particularly girls, less safe on and off line."

Childline also released details of counselling sessions where AI has been mentioned. AI harms mentioned in the conversations include: using AI to rate weight, body and looks; chatbots dissuading children from talking to safe adults about abuse; being bullied online with AI-generated content; and online blackmail using AI-faked images.
Between April and September this year, Childline delivered 367 counselling sessions where AI, chatbots and related terms were mentioned, four times as many as in the same period last year. Half of the mentions of AI in the 2025 sessions were related to mental health and wellbeing, including using chatbots for support and AI therapy apps.
[3]
New law could help tackle AI-generated child abuse at source, says watchdog
Groups tackling AI-generated child sexual abuse material could be given more powers to protect children online under a proposed new law. Organisations like the Internet Watch Foundation (IWF), as well as AI developers themselves, will be able to test the ability of AI models to create such content without breaking the law. That would mean they could tackle the problem at the source, rather than having to wait for illegal content to appear before they deal with it, according to Kerry Smith, chief executive of the IWF.

The IWF deals with child abuse images online, removing hundreds of thousands every year. Ms Smith called the proposed law a "vital step to make sure AI products are safe before they are released".

How would the law work?

The changes are due to be tabled today as an amendment to the Crime and Policing Bill. The government said designated bodies could include AI developers and child protection organisations, and it will bring in a group of experts to ensure testing is carried out "safely and securely". The new rules would also mean AI models can be checked to make sure they don't produce extreme pornography or non-consensual intimate images.

"These new laws will ensure AI systems can be made safe at the source, preventing vulnerabilities that could put children at risk," said Technology Secretary Liz Kendall. "By empowering trusted organisations to scrutinise their AI models, we are ensuring child safety is designed into AI systems, not bolted on as an afterthought."

AI abuse material on the rise

The announcement came as new data was published by the IWF showing reports of AI-generated child sexual abuse material have more than doubled in the past year. According to the data, the severity of material has intensified over that time. The most serious category A content - images involving penetrative sexual activity, sexual activity with an animal, or sadism - has risen from 2,621 to 3,086 items, accounting for 56% of all illegal material, compared with 41% last year. The data showed girls have been most commonly targeted, accounting for 94% of illegal AI images in 2025.

The NSPCC called for the new laws to go further and make this kind of testing compulsory for AI companies. "It's encouraging to see new legislation that pushes the AI industry to take greater responsibility for scrutinising their models and preventing the creation of child sexual abuse material on their platforms," said Rani Govender, policy manager for child safety online at the charity. "But to make a real difference for children, this cannot be optional. Government must ensure that there is a mandatory duty for AI developers to use this provision so that safeguarding against child sexual abuse is an essential part of product design."
The UK government announces new legislation allowing designated organizations to test AI models for their ability to create child sexual abuse material, aiming to tackle the problem at its source as reports of AI-generated abuse content more than double.

The United Kingdom has announced legislation that will allow designated organizations to test artificial intelligence models for their potential to create child sexual abuse material (CSAM). The new law, introduced as amendments to the Crime and Policing Bill, represents a shift from reactive to preventative measures in combating AI-generated abuse content [1].

Under the proposed changes, AI developers and child protection organizations will be granted legal permission to examine AI models - the underlying technology powering chatbots like ChatGPT and image generators such as Google's Veo 3 - to ensure they contain adequate safeguards against creating illegal content. Technology Secretary Liz Kendall emphasized that this approach ensures "child safety is designed into AI systems, not bolted on as an afterthought" [2].

The legislation comes amid disturbing statistics from the Internet Watch Foundation (IWF), which revealed that reports of AI-generated child sexual abuse material have more than doubled in the past year, rising from 199 in 2024 to 426 in 2025. The severity of this content has also intensified, with category A material - the most serious form, involving penetrative sexual activity - increasing from 2,621 to 3,086 images or videos [2].

Particularly concerning is the demographic targeting revealed in the data: girls were depicted in 94% of illegal AI images in 2025, while depictions of the youngest victims - newborns to two-year-olds - surged from five in 2024 to 92 in 2025. Kerry Smith, chief executive of the IWF, described how "AI tools have made it so survivors can be victimised all over again with just a few clicks, giving criminals the ability to make potentially limitless amounts of sophisticated, photorealistic child sexual abuse material" [3].

The new legislation addresses a legal paradox that has hindered proactive safety measures: because creating and possessing CSAM is illegal, AI developers and safety organizations could not previously test their systems for these vulnerabilities without potentially breaking the law themselves. This forced authorities to adopt a reactive approach, addressing AI-generated abuse material only after it appeared online [2].

Kanishka Narayan, the minister for AI and online safety, said the move is "ultimately about stopping abuse before it happens", adding: "Experts, under strict conditions, can now spot the risk in AI models early." The government will establish a group of experts to ensure testing is conducted "safely and securely" by designated bodies [3].

The legislation extends beyond CSAM to address other forms of AI-generated harmful content, including extreme pornography and non-consensual intimate images. Childline delivered 367 counselling sessions mentioning AI, chatbots and related harms between April and September 2025, four times as many as in the same period the previous year. These harms include AI being used to rate children's appearance, chatbots discouraging children from seeking help from trusted adults, and online blackmail using AI-generated images [2].

While child safety advocates have welcomed the new powers, some organizations are pushing for stronger measures. The NSPCC has called for mandatory testing requirements rather than optional provisions. Rani Govender, policy manager for child safety online at the charity, stated: "Government must ensure that there is a mandatory duty for AI developers to use this provision so that safeguarding against child sexual abuse is an essential part of product design" [1].
Summarized by Navi