3 Sources
[1]
AI child abuse images surge as watchdog warns of criminal misuse
AI-generated images and videos of child sexual abuse have reached record levels, according to new findings that underscore concerns about the criminal use of the technology. The Internet Watch Foundation, Europe's largest hotline for reporting and removing such material, said on Tuesday there had been a 260-fold increase in AI-generated child sexual abuse videos online over the past year, alongside a rise in more extreme content. The UK-based watchdog identified 8,029 realistic depictions of child sexual abuse in 2025, up 14 per cent from the previous year. The findings show how advances in generative AI are allowing offenders to produce large volumes of increasingly violent and realistic material with minimal technical skill. The trend is intensifying pressure on governments and regulators to update online safety laws and impose stricter obligations on AI companies, whose products are largely unregulated and widely used. Of the more than 3,000 AI-generated videos verified by analysts -- compared with 13 in 2024 -- 65 per cent were classified as category A, the most severe legal category under UK law, including offences such as rape, sexual torture and bestiality. In contrast, the IWF said 43 per cent of non-AI criminal videos it saw in 2025 were in this category, suggesting that AI may be used to create more violent content. "While AI can offer much in a positive sense, it is horrifying to consider that its power can be used to devastate a child's life. This material is dangerous," said Kerry Smith, the IWF's chief executive. She emphasised the need for AI companies to adopt a "safety-by-design" approach, with guardrails to prevent misuse during the product development stage. In January, the issue of AI-generated child sexual abuse material came into focus when Elon Musk's AI chatbot Grok generated sexualised images of children that were shared on social media platform X. 
The controversy led to threats of fines and bans from governments and regulators in the EU, the UK and France. Musk has said "anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content."

As a result of the surge of Grok-generated sexual images, including of children, the UK government said it would seek powers to "move fast" to close a legal loophole, bringing AI chatbots under the country's online safety laws alongside social media platforms such as Instagram and TikTok.

IWF analysts found that conversations on the "dark web" among offenders included discussions of using hidden cameras to source footage of real children, which could then be transformed into AI-generated abuse videos. They also appeared to be competing to create more extreme AI-generated scenarios. The report also found that paedophiles were anticipating the next generation of AI technology, such as autonomous agents, currently used to perform tasks like coding in enterprise settings.

"It is very apparent from the unsettling dark web conversation . . . that AI innovations are regarded with delight by users of child sexual abuse material," said a senior analyst who cannot be named for safety reasons. "We know this affects victims and survivors, as its creation and distribution is just as keenly felt as with traditional forms of child sexual abuse."
[2]
Amount of AI-generated child sexual abuse material found online surged in 2025
Internet Watch Foundation verified 8,029 pieces of realistic AI-made content, with 65% of videos in worst category

The amount of AI-generated child sexual abuse material found online rose by 14% last year, with the majority of videos showing the most extreme type of content, according to a safety watchdog.

The Internet Watch Foundation said it identified 8,029 AI-made images and videos of realistic child sexual abuse material (CSAM) in 2025. It added that there had been a more than 260-fold increase in videos. The IWF said 65% of the 3,443 videos were classified as category A, the term for the most severe material under UK law. The corresponding figure for non-AI videos was 43%, said the watchdog, showing that the technology was being used to create more violent content.

Kerry Smith, the chief executive of the IWF, said: "Advances in technology should never come at the expense of a child's safety and wellbeing. While AI can offer much in a positive sense, it is horrifying to consider that its power can be used to devastate a child's life. This material is dangerous."

One IWF analyst said conversations between paedophiles on the dark web showed innovations in the technology were "regarded with delight" by users of CSAM. The discussions centre on AI systems' increasingly realistic outputs and, as they improve, their ability to add audio to video or successfully manipulate imagery of a real child known to an offender.

The UK-based IWF operates a hotline and has a global remit to monitor child sexual abuse content. It said offenders were also discussing the possibilities for using "agentic" systems, which can carry out tasks autonomously. Tech companies and child protection agencies are being given the power in the UK to test whether AI tools can produce CSAM, in a move that ministers said last year was about stopping abuse before it happened.
Under the change, the government will give designated AI companies and child safety organisations permission to examine generative artificial intelligence models - the underlying technology for chatbots such as ChatGPT and image generators such as Google's Veo 3 - and ensure they have safeguards to prevent them from creating such material.

"Children, victims and survivors cannot afford for us to be complacent," said Smith. "New technology must be held to the highest standard. In some cases, lives are on the line."

The amount of CSAM verified by the IWF has risen sharply as the proficiency and availability of systems have increased, with videos increasing in particular. The IWF also published polling that showed eight out of 10 UK adults wanted the UK government to introduce legislation that ensured AI systems were developed with safety as a priority and "future-proofed from causing harm". Last year, the government announced a ban on possessing, creating or distributing AI models designed to generate child sexual abuse material.
[3]
AI boosted one of the worst forms of abusive content on the internet
While the progress of AI has brought a wave of useful tools, it's also amplifying some of the darkest corners of the internet. A new report from the IWF (Internet Watch Foundation) revealed a sharp increase in AI-generated child sexual abuse material (CSAM) online in 2025, highlighting how generative AI is being misused at scale. This isn't just a small jump, either; it's proof of how widely this kind of content is being created and distributed.

Why this news is alarming

According to the IWF, over 8,000 AI-generated images and videos of abusive content were identified in 2025, a 14% increase year on year. But what's more concerning is the rise of video content. The report stated there was an over 260-fold increase in AI-generated videos, many of which fall into the most severe category of abuse. In fact, around 65% of the videos analyzed were classified as the most extreme type, underscoring just how serious the problem has become.

How AI is lowering the barrier to harmful content

The biggest change isn't even the volume -- it's the accessibility. Experts say generative AI tools are making it significantly easier to create realistic abuse material. Some of these systems can generate lifelike images and videos, manipulate existing photos, and produce content at scale with minimal effort. That combination allows bad actors to create and distribute harmful material faster and more cheaply than ever before.

In the past, dangerous content such as this was typically associated with the dark web. But the report highlighted how much of the AI-generated material is now being found on the open web rather than being limited to hidden corners of the internet. This makes detection harder, moderation more complex, and exposure more likely.

Why there's no easy fix

AI-generated content introduces a new layer of difficulty for law enforcement and platforms. Since the material can be entirely synthetic or derived from real images, tracing its origin or identifying the victims becomes much harder. Removing this type of content is another hurdle: with the rapid evolution of AI tools, safeguards often play catch-up.
The Internet Watch Foundation identified 8,029 AI-generated child sexual abuse images and videos in 2025, marking a 14% annual increase. More alarming is the 260-fold surge in videos, 65% of which were classified in the most extreme category. The findings expose how generative AI tools enable bad actors to create violent, realistic abusive content with minimal technical skill, intensifying pressure on governments to regulate AI companies, whose products remain largely unrestricted.
The Internet Watch Foundation, Europe's largest hotline for reporting child sexual abuse material, has documented a disturbing escalation in AI-generated child sexual abuse images and videos. The IWF identified 8,029 realistic depictions of CSAM in 2025, representing a 14% increase from the previous year [1]. This surge demonstrates how advances in generative AI are enabling offenders to produce large volumes of increasingly violent material with minimal technical expertise.

The most striking finding reveals a 260-fold increase in AI-generated videos compared with just 13 verified in 2024 [1]. Of the 3,443 videos analyzed by IWF specialists, 65% were classified as Category A, the most severe legal category under UK law, encompassing offences such as rape, sexual torture and bestiality [2]. In stark contrast, only 43% of non-AI criminal videos fell into this category, suggesting that generative AI tools are being deliberately used to create more extreme and realistic abusive content [1].

What makes this development particularly concerning is how accessible these tools have become. Experts note that AI chatbots and image generators are lowering barriers to harmful content creation, allowing bad actors to generate lifelike images and videos, manipulate existing photos, and produce material at scale with minimal effort [3]. Much of this AI-generated material is now appearing on the open web rather than being confined to hidden corners of the internet, making content moderation significantly more complex and increasing exposure risks [3].

Dark web discussions among offenders reveal an even more troubling picture. IWF analysts discovered conversations about using hidden cameras to source footage of real children, which could then be transformed into AI-generated abuse videos [1]. These offenders appear to be competing to create more extreme scenarios and are anticipating the next generation of AI technology, including autonomous agents currently used for tasks such as coding in enterprise settings [1].

Kerry Smith, the IWF's chief executive, emphasized the urgent need for action: "While AI can offer much in a positive sense, it is horrifying to consider that its power can be used to devastate a child's life. This material is dangerous" [1]. She called for AI companies to adopt a safety-by-design approach, implementing guardrails to prevent misuse during the product development stage [1].

The issue gained international attention in January when Grok, the AI chatbot developed by Elon Musk's company, generated sexualized images of children that were shared on the social media platform X. The controversy triggered threats of fines and bans from governments and regulators in the EU, the UK and France [1]. In response, the UK government announced it would seek powers to close a legal loophole by bringing AI chatbots under the country's online safety laws alongside social media platforms such as Instagram and TikTok [1].

The surge is intensifying pressure on governments and regulators to update online safety laws and impose stricter obligations on AI companies, whose products remain largely unregulated and widely accessible [1]. Tech companies and child protection agencies in the UK are now being given the power to test whether AI tools can produce child sexual abuse material, in what ministers describe as an effort to stop abuse before it happens [2]. The UK government has also announced a ban on possessing, creating or distributing AI models designed to generate CSAM [2].

Law enforcement faces unprecedented challenges: because AI-generated content can be entirely synthetic or derived from real images, tracing origins and identifying victims becomes much harder [3]. With the rapid evolution of generative AI, safeguards often play catch-up. IWF polling revealed that eight out of 10 UK adults want legislation ensuring AI systems are developed with safety as a priority and future-proofed from causing harm [2]. As one IWF analyst noted, "AI innovations are regarded with delight by users of child sexual abuse material," underscoring the urgency for comprehensive regulatory frameworks [1].