3 Sources
[1]
A.I.-Generated Images of Child Sexual Abuse Are Flooding the Internet
Cecilia Kang has covered child online safety from Washington for more than a decade.

A new flood of child sexual abuse material created by artificial intelligence is hitting a tipping point of realism, threatening to overwhelm the authorities. Over the past two years, new A.I. technologies have made it easier for criminals to create explicit images and videos of children. Now, researchers at organizations including the Internet Watch Foundation and the National Center for Missing & Exploited Children are warning of a surge of new material this year that is nearly indistinguishable from actual abuse.

New data released Thursday from the Internet Watch Foundation, a British nonprofit that investigates and collects reports of child sexual abuse imagery, identified 1,286 A.I.-generated videos of child sexual abuse so far this year globally, compared with just two in the first half of 2024. The videos have become smoother and more detailed, the organization's analysts said, because of improvements in the technology and collaboration among groups on hard-to-reach parts of the internet, called the dark web, to produce them.

The rise of lifelike videos adds to an explosion of A.I.-produced child sexual abuse material, or CSAM. In the United States, the National Center for Missing & Exploited Children said it had received 485,000 reports of A.I.-generated CSAM, including stills and videos, in the first half of the year, compared with 67,000 for all of 2024.

"It's a canary in the coal mine," said Derek Ray-Hill, interim chief executive of the Internet Watch Foundation. The A.I.-generated content can contain images of real children alongside fake images, he said, adding, "There is an absolute tsunami we are seeing."

The deluge of A.I. material threatens to make law enforcement's job even harder.
While still a tiny fraction of the total amount of child sexual abuse material found online, where reports number in the millions, the police have been inundated with requests to investigate A.I.-generated images, taking away from their pursuit of those engaging in child abuse.

Law enforcement authorities say federal laws against child sexual abuse material and obscenity cover A.I.-generated images, including content that is wholly created by the technology and does not contain real images of children. Beyond federal statutes, state legislators have also raced to criminalize A.I.-generated depictions of child sexual abuse, enacting more than three dozen state laws in recent years. But courts are only just beginning to grapple with the legal implications, legal experts said.

The new technology stems from generative A.I., which exploded onto the scene with OpenAI's introduction of ChatGPT in 2022. Soon after, companies introduced A.I. image and video generators, prompting law enforcement and child safety groups to warn about safety issues.

Much of the new A.I. content includes real imagery of child sexual abuse that is reused in new videos and still images. Some of the material uses photos of children scraped from school websites and social media. Images are typically shared among users in forums, via messaging on social media and other online platforms.

In December 2023, researchers at the Stanford Internet Observatory found hundreds of examples of child sexual abuse material in a data set used in an early version of the image generator Stable Diffusion. Stability AI, which runs Stable Diffusion, said it was not involved in the training data of the model studied by Stanford. It said an outside company had developed that version before Stability AI took over exclusive development of the image generator.

Only in recent months have A.I. tools become good enough to trick the human eye with an image or video, avoiding some of the previous giveaways like too many fingers on a hand, blurry backgrounds or jerky transitions between video frames. The Internet Watch Foundation found examples last month of individuals in an underground web forum praising the latest technology, remarking on how realistic a new cache of A.I.-generated child sexual abuse videos was. They pointed out how the videos ran smoothly, contained detailed backgrounds with paintings on walls and furniture, and depicted multiple individuals engaged in violent and illegal acts against minors.

About 35 tech companies now report A.I.-generated images of child sexual abuse to the National Center for Missing & Exploited Children, said John Shehan, a senior official with the group, although some are uneven in their approach. The companies filing the most reports typically are more proactive in finding and reporting images of child sexual abuse, he said. Amazon, which offers A.I. tools via its cloud computing service, reported 380,000 incidents of A.I.-generated child sexual abuse material in the first half of the year, which it took down. OpenAI reported 75,000 cases. Stability AI reported under 30.

Stability AI said it had introduced safeguards to enhance its safety standards and "is deeply committed to preventing the misuse of our technology, particularly in the creation and dissemination of harmful content, including CSAM." Amazon and OpenAI, when asked to comment, pointed to reports they posted online that explained their efforts to detect and report child sexual abuse material.

Some criminal networks are using A.I. to create sexually explicit images of minors and then blackmail the children, said a Department of Justice official, who requested anonymity to discuss private investigations. Other children use apps that take images of real people and disrobe them, creating what is known as a deepfake nude.
Although sexual abuse images containing real children are clearly illegal, the law is still evolving on materials generated fully by artificial intelligence, some legal scholars said.

In March, a Wisconsin man who was accused by the Justice Department of illegally creating, distributing and possessing fully synthetic images of child sexual abuse successfully challenged one of the charges against him on First Amendment grounds. Judge James Peterson of the U.S. District Court for the Western District of Wisconsin said that "the First Amendment generally protects the right to possess obscene material in the home" so long as it isn't "actual child pornography." But the trial will move forward on the other charges, which relate to the production and distribution of 13,000 images created with an image generator. The man tried to share images with a minor on Instagram, which reported him, according to federal prosecutors.

"The Department of Justice views all forms of A.I.-generated CSAM as a serious and emerging threat," said Matt Galeotti, head of the Justice Department's criminal division.
[2]
AI-generated child sexual abuse videos surging online, watchdog says
Internet Watch Foundation verified 1,286 AI-made videos in first half of year, mostly in worst category of abuse

The number of videos online of child sexual abuse generated by artificial intelligence has surged as paedophiles have pounced on developments in the technology. The Internet Watch Foundation said AI videos of abuse had "crossed the threshold" of being near-indistinguishable from "real imagery" and had sharply increased in prevalence online this year.

In the first six months of 2025, the UK-based internet safety watchdog verified 1,286 AI-made videos with child sexual abuse material (CSAM) that broke the law, compared with two in the same period last year. The IWF said just over 1,000 of the videos featured category A abuse, the classification for the most severe type of material.

The organisation said the multibillion-dollar investment spree in AI was producing widely available video-generation models that were being manipulated by paedophiles. "It is a very competitive industry. Lots of money is going into it, so unfortunately there is a lot of choice for perpetrators," said one IWF analyst.

The videos were found as part of a 400% increase in URLs featuring AI-made child sexual abuse in the first six months of 2025. The IWF received reports of 210 such URLs, compared with 42 last year, with each webpage featuring hundreds of images, including the surge in video content.

The IWF saw one post on a dark web forum where a paedophile referred to the speed of improvements in AI, saying how they had mastered one AI tool only for "something new and better to come along". IWF analysts said the images appeared to have been created by taking a freely available basic AI model and "fine-tuning" it with CSAM in order to produce realistic videos. In some cases these models had been fine-tuned with a handful of CSAM videos, the IWF said. The most realistic AI abuse videos seen this year were based on real-life victims, the watchdog said.
Derek Ray-Hill, the IWF's interim chief executive, said the growth in capability of AI models, their wide availability and the ability to adapt them for criminal purposes could lead to an explosion of AI-made CSAM online. "There is an incredible risk of AI-generated CSAM leading to an absolute explosion that overwhelms the clear web," he said, adding that a growth in such content could fuel criminal activity linked to child trafficking, child sexual abuse and modern slavery. The use of existing victims of sexual abuse in AI-generated images meant that paedophiles were significantly expanding the volume of CSAM online without having to rely on new victims, he added.

The UK government is cracking down on AI-generated CSAM by making it illegal to possess, create or distribute AI tools designed to create abuse content. People found to have breached the new law will face up to five years in jail. Ministers are also outlawing possession of manuals that teach potential offenders how to use AI tools to either make abusive imagery or to help them abuse children. Offenders could face a prison sentence of up to three years.

Announcing the changes in February, the home secretary, Yvette Cooper, said it was vital that "we tackle child sexual abuse online as well as offline". AI-generated CSAM is illegal under the Protection of Children Act 1978, which criminalises the taking, distribution and possession of an "indecent photograph or pseudo photograph" of a child.
[3]
AI-generated child abuse webpages surge 400%, alarming watchdog
Reports of child sexual abuse imagery created using artificial intelligence tools have surged 400% in the first half of 2025, according to new data from the U.K.-based nonprofit organization Internet Watch Foundation. The organization, which monitors child sexual abuse material online, recorded 210 webpages containing AI-generated material in the first six months of 2025, up from 42 in the same period the year before, according to a report published this week. On those pages were 1,286 videos, up from just two in 2024. The majority of this content was so realistic it had to be treated under U.K. law as if it were actual footage, the IWF said. Roughly 78% of the videos -- 1,006 in total -- were classified as "Category A," the most severe level, which can include depictions of rape, sexual torture and bestiality, the IWF said. Most of the videos involved girls and in some cases used the likenesses of real children.
A dramatic increase in AI-generated child sexual abuse material (CSAM) is overwhelming authorities and raising concerns about the misuse of AI technology. Law enforcement and child safety organizations are grappling with the legal and ethical implications of this surge.
The Internet Watch Foundation (IWF) has reported a staggering 400% increase in AI-generated child sexual abuse material (CSAM) in the first half of 2025. The organization verified 1,286 AI-made videos containing illegal content, compared with just two in the same period last year [2]. This surge has raised serious concerns among child safety organizations and law enforcement agencies worldwide.
The rapid advancement of AI technology has led to a significant improvement in the quality and realism of generated content. Derek Ray-Hill, interim chief executive of the IWF, described the situation as "a canary in the coal mine," warning of an "absolute tsunami" of AI-generated CSAM [1]. The videos have become smoother and more detailed, making them nearly indistinguishable from actual abuse footage.
Alarmingly, approximately 78% of the AI-generated videos (1,006 in total) were classified as "Category A," the most severe level of abuse content [3]. This category includes depictions of rape, sexual torture, and bestiality. The majority of these videos involved girls, and in some cases, used the likenesses of real children.
The deluge of AI-generated material is posing significant challenges for law enforcement agencies. While still a small fraction of the total CSAM found online, the influx of AI-generated content is diverting resources from investigations into actual child abuse cases. The National Center for Missing & Exploited Children said it received 485,000 reports of AI-generated CSAM in the first half of 2025, compared with 67,000 for all of 2024 [1].
Governments and legislators are scrambling to address the legal implications of AI-generated CSAM. In the United States, federal laws against child sexual abuse material and obscenity are being applied to AI-generated images. More than three dozen state laws have been enacted to criminalize AI depictions of child sexual abuse [1].
In the UK, the government has introduced new legislation making it illegal to possess, create, or distribute AI tools designed to create abuse content. Offenders could face up to five years in jail. Additionally, possession of manuals teaching the use of AI tools for creating abusive imagery or facilitating child abuse has been outlawed, with potential prison sentences of up to three years [2].
Major tech companies have begun reporting AI-generated CSAM to authorities. Amazon reported 380,000 incidents in the first half of 2025, while OpenAI reported 75,000 cases. Stability AI, which runs the image generator Stable Diffusion, reported under 30 cases and stated its commitment to preventing misuse of its technology [1].
The rapid growth in AI-generated CSAM presents a significant threat to online safety and child protection efforts. Ray-Hill warned of the potential for an "absolute explosion that overwhelms the clear web," which could fuel criminal activities linked to child trafficking, sexual abuse, and modern slavery [2]. As AI technology continues to advance, the challenge of combating this issue is likely to intensify, requiring ongoing collaboration between tech companies, law enforcement agencies, and policymakers.