2 Sources
[1]
AI's Child-Porn Problem Is Getting Much Worse
Thousands of abusive videos were produced last year -- that researchers know of. In 2025, new data show, the volume of child pornography online was likely larger than at any other point in history. A record 312,030 reports of confirmed child pornography were investigated last year by the Internet Watch Foundation, a U.K.-based organization that works around the globe to identify and remove such material from the web. This is concerning in and of itself. It means that the overall volume of child porn detected on the internet grew by 7 percent since 2024, when the previous record had been set.

But also alarming is the tremendous increase in child porn, and in particular videos, generated by AI. At first blush, the proliferation of AI-generated depictions of child sexual abuse may leave the misimpression that no children were harmed. This is not the case. AI-generated, abusive images and videos feature and victimize real children -- either because models were trained on existing child porn, or because AI was used to manipulate real photos and videos.

Today, the IWF reported that it found 3,440 AI-generated videos of child sex abuse in 2025; the year before, it found just 13. Social media, encrypted messaging, and dark-web forums have been fueling a steady rise in child-sexual-abuse material for years, and now generative AI has dramatically exacerbated the problem. Another awful record will very likely be set in 2026.

Of the thousands of AI-generated videos of child sex abuse the IWF discovered in 2025, nearly two-thirds were classified as "Category A" -- the most severe category, which includes penetration, sexual torture, and bestiality. Another 30 percent were Category B, which depict nonpenetrative sexual acts. With this relatively new technology, "criminals essentially can have their own child sexual abuse machines to make whatever they want to see," Kerry Smith, the IWF's chief executive, said in a statement.

Read: High school is becoming a cesspool of sexually explicit deepfakes

The volume of AI-generated images of child sex abuse has been rising since at least 2023. For instance, the IWF found that over just a one-month span in early 2024, on just a single dark-web forum, users uploaded more than 3,000 AI-generated images of child sex abuse. In early 2025, the digital-safety nonprofit Thorn reported that among a sample of 700-plus U.S. teenagers it surveyed, 12 percent knew someone who had been victimized by "deepfake nudes."

The proliferation of AI-generated videos depicting child sex abuse lagged behind such photos because AI video-generating tools were far less photorealistic than image generators. "When AI videos were not lifelike or sophisticated, offenders were not bothering to make them in any numbers," Josh Thomas, an IWF spokesperson, told me. That has changed. Last year, OpenAI released the Sora 2 model, Google released Veo 3, and xAI put out Grok Imagine. Meanwhile, other organizations have produced many highly advanced, open-source AI video-generating models. These open-source tools are generally free for anyone to use and have far fewer, if any, safeguards. There are almost certainly AI-generated videos and images of child sex abuse that authorities will never detect, because they are created and stored on personal computers; instead of having to find and download such material online, potentially exposing oneself to law enforcement, abusers can operate in secrecy.
OpenAI, Google, Anthropic, and several other top AI labs have joined an initiative to prevent AI-enabled child sex abuse, and all of the major labs say they have measures in place to stop the use of their tools for such purposes. Still, safeguards can be broken. In the first half of 2025, OpenAI reported more than 75,000 depictions of child sex abuse or child endangerment on its platforms to the National Center for Missing & Exploited Children, more than double the number of reports from the second half of 2024. A spokesperson for OpenAI told me that the firm designs its products to prohibit creating or distributing "content that exploits or harms children" and takes "action when violations occur." The company reports all instances of child sex abuse to NCMEC and bans associated accounts. (OpenAI has a corporate partnership with The Atlantic.)

The advancement and ease of use of AI video generators, in other words, offer an entry point for abuse. This dynamic became clear in recent weeks, as people used Grok, Elon Musk's AI model, to generate likely hundreds of thousands of nonconsensual sexualized images, primarily of women and children, in public on his social-media platform, X. (Musk insisted that he was "not aware of any naked underage images generated by Grok" and blamed users for making illegal requests; meanwhile, his employees quietly rolled back aspects of the tool.)

While scouring the dark web, the IWF found that, in some cases, people had apparently used Grok to create abusive depictions of 11-to-13-year-old children that were then fed into more permissive tools to generate even darker, more explicit content. "Easy availability of this material will only embolden those with a sexual interest in children" and "fuel its commercialisation," Smith said in the IWF's press release. (Yesterday, the X safety team said it had restricted the ability to generate images of users in revealing clothing and that it works with law enforcement "as necessary.")

Read: Elon Musk cannot get away with this

There are signs that the crisis of AI-generated child sex abuse will worsen. While more and more nations, including the United Kingdom and the United States, are passing laws that make generating and publishing such material illegal, actually prosecuting criminals is slow. Silicon Valley, meanwhile, continues to move at a breakneck pace. Any number of new digital technologies have been used to harass and exploit people; the age of AI sex abuse was predictable a decade ago, yet it has begun nonetheless. AI executives, engineers, and pundits are fond of saying that today's AI models are the least effective they will ever be. By the same token, AI's ability to abuse children may only get worse from here.
[2]
AI videos of child sexual abuse surged to record highs in 2025, new report finds
Artificial intelligence tools are fueling the creation of online child sexual abuse material, according to a new study documenting the rise of photo-realistic AI-generated imagery of such abuse, known as CSAM.

Analysts from the U.K.-based group the Internet Watch Foundation (IWF) detected a record 3,440 AI videos of child sexual abuse last year, up from just 13 videos the year prior, a 26,362% increase. Of the AI videos they tracked, over half meet the description of what the IWF refers to as "category A," a classification that can include the most graphic imagery and torture.

The IWF warns that AI technology can have harmful effects on children, whose likenesses can be used by bad actors. The rapidly developing tools can also enable people with minimal technical knowledge to make harmful videos at scale, the internet watchdog group said. "Analysts believe offenders are using the technology in greater numbers as the sophistication of AI video tools improves," the report says.

The AI videos are part of a larger pool of child sexual abuse material that the IWF identified and removed last year. The organization said it responded to over 300,000 reports in 2025 that included CSAM. U.S. federal law bars the production and distribution of CSAM, which the Justice Department has said is a broader term for child pornography.

The report comes amid a backlash against Grok, an AI chatbot developed by Elon Musk's company xAI, after it allowed users to generate sexually explicit images of women and minors. In a December analysis, Copyleaks, a plagiarism and AI content-detection tool, estimated the chatbot was creating "roughly one nonconsensual sexualized image per minute."

The chatbot's output prompted responses from multiple stakeholders, including the European Union, which said it is monitoring the steps X is taking to prevent the creation of inappropriate image content by Grok. On Wednesday, California Attorney General Rob Bonta announced that he was opening an investigation into xAI and Grok. Following the criticism, xAI said in a safety update posted Thursday on X that it was enacting measures to prevent users from creating photos of people in minimal clothing using Grok.
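For context, the 26,362% figure cited in both reports follows directly from the raw counts the IWF published (13 videos in 2024, 3,440 in 2025); a quick back-of-the-envelope check, not taken from either source:

$$\frac{3{,}440 - 13}{13} \times 100\% \approx 26{,}362\%$$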
AI-generated videos depicting child sexual abuse exploded from 13 cases in 2024 to 3,440 in 2025, marking a 26,362% increase. The Internet Watch Foundation reports that over 60% qualify as the most severe category, including torture and penetration. Advanced AI video generators from OpenAI, Google, and xAI, along with open-source models, have made it easier for abusers to create photorealistic content at scale, even as tech companies implement safeguards.

The volume of child sexual abuse material online reached historic levels in 2025, with AI-generated videos emerging as a devastating new vector for exploitation. The Internet Watch Foundation, a U.K.-based organization working globally to identify and remove such content, investigated a record 312,030 reports of confirmed CSAM last year, a 7% increase from 2024's previous record [1]. But the dramatic increase in CSAM driven by artificial intelligence represents a far more alarming trend. The organization detected 3,440 AI-generated videos of child sexual abuse in 2025, compared to just 13 the year before, a staggering 26,362% surge [2].

This explosion in AI-generated child sexual abuse material doesn't mean that no children were harmed, as some might mistakenly assume. Real children remain victimized through these technologies, either because AI models were trained on existing abuse imagery or because the tools manipulated authentic photos and videos of minors [1]. The severity of the content is particularly disturbing: nearly two-thirds of the AI videos discovered fell into the Category A classification, the most severe designation, which includes penetration, sexual torture, and bestiality. Another 30% were Category B, depicting nonpenetrative sexual acts [1].

The proliferation of sophisticated video generators has fundamentally changed the landscape of AI-enabled child exploitation. Last year saw major releases including OpenAI's Sora 2 model, Google's Veo 3, and xAI's Grok Imagine, alongside numerous advanced open-source models [1]. These open-source models typically offer free access with minimal or nonexistent safeguards, creating dangerous entry points for abuse. "When AI videos were not lifelike or sophisticated, offenders were not bothering to make them in any numbers," Josh Thomas, an Internet Watch Foundation spokesperson, explained. That calculation has changed dramatically as the technology improved [1].

The IWF warns that rapidly developing AI tools now enable people with minimal technical knowledge to create harmful videos at scale [2]. Kerry Smith, the IWF's chief executive, described the grim reality: "criminals essentially can have their own child sexual abuse machines to make whatever they want to see" [1]. This capability extends beyond organized networks: abusers can now generate and store sexually explicit images on personal computers without ever exposing themselves to law enforcement by downloading material online.

OpenAI, Google, Anthropic, and several other major AI labs have joined initiatives to prevent AI-enabled child exploitation, with all claiming to have protective measures in place [1]. Yet the data reveals these safeguards remain inadequate. In the first half of 2025 alone, OpenAI reported more than 75,000 depictions of child sexual abuse or child endangerment on its platforms to the National Center for Missing & Exploited Children, more than double the reports from the second half of 2024 [1].

The problem became starkly visible when users exploited Grok, Elon Musk's AI model, to generate likely hundreds of thousands of nonconsensual sexually explicit images, primarily of women and children, publicly on X. In a December analysis, Copyleaks, a plagiarism and AI content-detection tool, estimated the chatbot was creating roughly one nonconsensual sexualized image per minute [2]. While Musk claimed he was "not aware of any naked underage images generated by Grok" and blamed users for illegal requests, his employees quietly rolled back aspects of the tool [1]. The incident prompted California Attorney General Rob Bonta to open an investigation into xAI and Grok, while the European Union said it is monitoring X's preventive measures [2].

The scope of AI-generated child sexual abuse material extends far beyond what authorities can detect. IWF analysts found that over just one month in early 2024, users uploaded more than 3,000 AI-generated images of child sexual abuse on a single dark-web forum [1]. The digital-safety nonprofit Thorn reported that among 700-plus U.S. teenagers surveyed in early 2025, 12% knew someone victimized by deepfake nudes [1]. Social media, encrypted messaging, and dark-web forums have fueled a steady rise in child sexual abuse material for years, but generative AI has dramatically exacerbated the crisis.

Law enforcement faces mounting challenges, and another record will very likely be set in 2026. The AI videos identified by the IWF represent only detected cases; countless more likely exist on personal computers, created and stored in complete secrecy. U.S. federal law bars the production and distribution of CSAM, which the Justice Department has said is a broader term for child pornography [2]. As AI technology continues advancing at breakneck speed, the gap between capability and accountability widens, leaving vulnerable children increasingly exposed to exploitation at scales previously unimaginable.

Summarized by Navi