2 Sources
[1]
AI child abuse images surge as watchdog warns of criminal misuse
AI-generated images and videos of child sexual abuse have reached record levels, according to new findings that underscore concerns about the criminal use of the technology. The Internet Watch Foundation, Europe's largest hotline for reporting and removing such material, said on Tuesday there had been a 260-fold increase in AI-generated child sexual abuse videos online over the past year, alongside a rise in more extreme content. The UK-based watchdog identified 8,029 realistic depictions of child sexual abuse in 2025, up 14 per cent from the previous year.

The findings show how advances in generative AI are allowing offenders to produce large volumes of increasingly violent and realistic material with minimal technical skill. The trend is intensifying pressure on governments and regulators to update online safety laws and impose stricter obligations on AI companies, whose products are largely unregulated and widely used.

Of the more than 3,000 AI-generated videos verified by analysts -- compared with 13 in 2024 -- 65 per cent were classified as category A, the most severe legal category under UK law, including offences such as rape, sexual torture and bestiality. In contrast, the IWF said 43 per cent of non-AI criminal videos it saw in 2025 were in this category, suggesting that AI may be used to create more violent content.

"While AI can offer much in a positive sense, it is horrifying to consider that its power can be used to devastate a child's life. This material is dangerous," said Kerry Smith, the IWF's chief executive. She emphasised the need for AI companies to adopt a "safety-by-design" approach, with guardrails to prevent misuse during the product development stage.

In January, the issue of AI-generated child sexual abuse material came into focus when Elon Musk's AI chatbot Grok generated sexualised images of children that were shared on social media platform X.
The controversy led to threats of fines and bans from governments and regulators in the EU, the UK and France. Musk has said "anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content." As a result of the surge of Grok-generated sexual images, including of children, the UK government said it would seek powers to "move fast" to close a legal loophole, bringing AI chatbots under the country's online safety laws alongside social media platforms such as Instagram and TikTok.

IWF analysts found that conversations on the "dark web" among offenders included discussions of using hidden cameras to source footage of real children, which could then be transformed into AI-generated abuse videos. They also appeared to be competing to create more extreme AI-generated scenarios. The report also found that paedophiles were anticipating the next generation of AI technology, such as autonomous agents, currently being used to perform tasks like coding in enterprise settings.

"It is very apparent from the unsettling dark web conversation . . . that AI innovations are regarded with delight by users of child sexual abuse material," said a senior analyst who cannot be named for safety reasons. "We know this affects victims and survivors, as its creation and distribution is just as keenly felt as with traditional forms of child sexual abuse."
[2]
Amount of AI-generated child sexual abuse material found online surged in 2025
Internet Watch Foundation verified 8,029 pieces of realistic AI-made content, with 65% of videos in worst category

The amount of AI-generated child sexual abuse material found online rose by 14% last year, with the majority of videos showing the most extreme type of content, according to a safety watchdog. The Internet Watch Foundation said it identified 8,029 AI-made images and videos of realistic child sexual abuse material (CSAM) in 2025. It added that there had been a more than 260-fold increase in videos.

The IWF said 65% of the 3,443 videos were classified as category A, the term for the most severe material under UK law. The corresponding figure for non-AI videos was 43%, said the watchdog, showing that the technology was being used to create more violent content.

Kerry Smith, the chief executive of the IWF, said: "Advances in technology should never come at the expense of a child's safety and wellbeing. While AI can offer much in a positive sense, it is horrifying to consider that its power can be used to devastate a child's life. This material is dangerous."

One IWF analyst said conversations between paedophiles on the dark web showed innovations in the technology were "regarded with delight" by users of CSAM. The discussions centre on AI systems' increasingly realistic outputs and, as they improve, their ability to add audio to video or successfully manipulate imagery of a real child known to an offender. The UK-based IWF operates a hotline and has a global remit to monitor child sexual abuse content. It said offenders were also discussing the possibilities for using "agentic" systems, which can carry out tasks autonomously.

Tech companies and child protection agencies are being given the power in the UK to test whether AI tools can produce CSAM, in a move that ministers said last year was about stopping abuse before it happened. Under the change, the government will give designated AI companies and child safety organisations permission to examine generative artificial intelligence models - the underlying technology for chatbots such as ChatGPT and image generators such as Google's Veo 3 - and ensure they have safeguards to prevent them from creating such material.

"Children, victims and survivors cannot afford for us to be complacent," said Smith. "New technology must be held to the highest standard. In some cases, lives are on the line."

The amount of CSAM verified by the IWF has risen sharply as the proficiency and availability of systems have increased, with videos increasing in particular. The IWF also published polling that showed eight out of 10 UK adults wanted the UK government to introduce legislation that ensured AI systems were developed with safety as a priority and "future-proofed from causing harm". Last year, the government announced a ban on possessing, creating or distributing AI models designed to generate child sexual abuse material.
The Internet Watch Foundation identified 8,029 AI-generated child sexual abuse images in 2025, marking a 14% increase from the previous year. The surge in AI-generated videos—up 260-fold—shows 65% classified as the most severe category under UK law, compared to 43% for non-AI content. The findings intensify pressure on governments to update online safety laws and regulate AI companies.
The Internet Watch Foundation has documented a disturbing escalation in AI-generated child sexual abuse material, identifying 8,029 realistic AI depictions in 2025, a 14% increase from the previous year [1]. The surge in AI-generated videos proved even more dramatic: a 260-fold increase compared with 2024, when the IWF verified just 13 such videos [1]. This sharp rise demonstrates how advances in generative AI are enabling offenders to produce large volumes of increasingly violent content with minimal technical skill.

The data reveals a troubling pattern in the severity of AI-generated material. Of the 3,443 AI-generated videos verified by IWF analysts, 65% were classified as category A material, the most severe legal category under UK law, which includes offences such as rape, sexual torture and bestiality [1]. In stark contrast, only 43% of non-AI criminal videos fell into this category, suggesting that AI technology is being used to create more violent content than traditional methods [2]. Kerry Smith, the IWF's chief executive, emphasized the gravity of the situation: "While AI can offer much in a positive sense, it is horrifying to consider that its power can be used to devastate a child's life. This material is dangerous" [1].

Analysts monitoring dark web discussions uncovered conversations among offenders that reveal plans for exploiting AI capabilities, including using hidden cameras to source footage of real children, which could then be transformed into AI-generated abuse videos [1]. One IWF analyst noted that innovations in AI technology were "regarded with delight" by users of CSAM, with conversations centering on increasingly realistic outputs and the ability to add audio to video or manipulate imagery of real children known to offenders [2]. Offenders also appeared to be competing to create more extreme AI-generated scenarios and anticipating the next generation of AI technology, such as the autonomous agents currently used in enterprise settings [1].

The crisis intensifies pressure on governments and regulators to update online safety laws and impose stricter obligations on AI companies, whose products remain largely unregulated and widely accessible [1]. The issue gained prominence in January when the Grok chatbot generated sexualized images of children that were shared on the social media platform X, leading to threats of fines and bans from regulators in the EU, the UK and France [1]. In response, the UK government announced it would seek powers to "move fast" to close a legal loophole, bringing AI chatbots under the country's online safety laws alongside social media platforms such as Instagram and TikTok [1]. Smith emphasized the need for AI companies to adopt a safety-by-design approach, with guardrails to prevent misuse during the product development stage [1].

Tech companies and child protection agencies are being given powers in the UK to test whether AI tools can produce child sexual abuse material, in a move aimed at stopping abuse before it happens [2]. Under the change, the UK government will give designated AI companies and child safety organizations permission to examine generative artificial intelligence models and ensure they have safeguards to prevent them from creating such material [2]. Last year, the government announced a ban on possessing, creating or distributing AI models designed to generate child sexual abuse material [2]. The IWF also published polling showing that eight out of 10 UK adults wanted the UK government to introduce legislation ensuring AI systems were developed with safety as a priority and "future-proofed from causing harm" [2].

Summarized by Navi