Curated by THEOUTPOST
On Fri, 18 Oct, 4:04 PM UTC
3 Sources
[1]
AI-generated child abuse images increasing at 'chilling' rate - as watchdog warns it is now becoming hard to spot
The amount of AI-generated child abuse images found on the internet is increasing at a "chilling" rate, according to a national watchdog. The Internet Watch Foundation deals with child abuse images online, removing hundreds of thousands every year. Now, it says artificial intelligence is making the work much harder.

"I find it really chilling as it feels like we are at a tipping point," said "Jeff", a senior analyst at the Internet Watch Foundation (IWF), who uses a fake name at work to protect his identity.

In the last six months, Jeff and his team have dealt with more AI-generated child abuse images than in the preceding year, reporting a 6% increase in the amount of AI content. A lot of the AI imagery they see of children being hurt and abused is disturbingly realistic.

"Whereas before we would be able to definitely tell what is an AI image, we're reaching the point now where even a trained analyst [...] would struggle to see whether it was real or not," Jeff told Sky News.

To make the AI images so realistic, the software is trained on existing sexual abuse images, according to the IWF.

"People can be under no illusion," said Derek Ray-Hill, the IWF's interim chief executive. "AI-generated child sexual abuse material causes horrific harm, not only to those who might see it but to those survivors who are repeatedly victimised every time images and videos of their abuse are mercilessly exploited for the twisted enjoyment of predators online."

The IWF warns that almost all the content was not hidden on the dark web but found on publicly available areas of the internet.

"This new technology is transforming how child sexual abuse material is being produced," said Professor Clare McGlynn, a legal expert who specialises in online abuse and pornography at Durham University. She told Sky News it is now "easy and straightforward" to produce AI-generated child sexual abuse images and then advertise and share them online. "Until now, it's been easy to do without worrying about the police coming to prosecute you," she said.

In the last year, a number of paedophiles have been charged after creating AI child abuse images, including Neil Darlington, who used AI while trying to blackmail girls into sending him explicit images.

Creating explicit pictures of children is illegal, even if they are generated using AI, and IWF analysts work with police forces and tech providers to remove and trace images they find online. Analysts upload URLs of webpages containing AI-generated child sexual abuse images to a list which is shared with the tech industry so it can block the sites. The AI images are also given a unique code, like a digital fingerprint, so they can be automatically traced even if they are deleted and re-uploaded somewhere else.

More than half of the AI-generated content found by the IWF in the last six months was hosted on servers in Russia and the US, with a significant amount also found in Japan and the Netherlands.
[2]
AI-generated child pornography increasing at 'chilling' rate, as watchdog warns it is now becoming hard to spot
The amount of AI-generated child pornography found on the internet is increasing at a "chilling" rate, according to a national watchdog. The Internet Watch Foundation deals with child pornography online, removing hundreds of thousands of images every year. Now, it says artificial intelligence is making the work much harder.

"I find it really chilling as it feels like we are at a tipping point," said "Jeff", a senior analyst at the Internet Watch Foundation (IWF), who uses a fake name at work to protect his identity.

In the last six months, Jeff and his team have dealt with more AI-generated child pornography than in the preceding year, reporting a 6% increase in the amount of AI content. A lot of the AI imagery they see of children being hurt and abused is disturbingly realistic.

"Whereas before we would be able to definitely tell what is an AI image, we're reaching the point now where even a trained analyst [...] would struggle to see whether it was real or not," Jeff told Sky News.

To make AI pornography so realistic, the software is trained on existing sexual abuse images, according to the IWF.

"People can be under no illusion," said Derek Ray-Hill, the IWF's interim chief executive. "AI-generated child sexual abuse material causes horrific harm, not only to those who might see it but to those survivors who are repeatedly victimised every time images and videos of their abuse are mercilessly exploited for the twisted enjoyment of predators online."

The IWF warns that almost all the content was not hidden on the dark web but found on publicly available areas of the internet.

"This new technology is transforming how child sexual abuse material is being produced," said Professor Clare McGlynn, a legal expert who specialises in online abuse and pornography at Durham University. She told Sky News it is now "easy and straightforward" to produce AI-generated child sexual abuse images and then advertise and share them online. "Until now, it's been easy to do without worrying about the police coming to prosecute you," she said.

In the last year, a number of paedophiles have been charged after creating AI child pornography, including Neil Darlington, who used AI while trying to blackmail girls into sending him explicit images.

Creating explicit pictures of children is illegal, even if they are generated using AI, and IWF analysts work with police forces and tech providers to remove and trace images they find online. Analysts upload URLs of webpages containing AI-generated child sexual abuse images to a list which is shared with the tech industry so it can block the sites. The AI images are also given a unique code, like a digital fingerprint, so they can be automatically traced even if they are deleted and re-uploaded somewhere else.

More than half of the AI-generated content found by the IWF in the last six months was hosted on servers in Russia and the US, with a significant amount also found in Japan and the Netherlands.
[3]
AI-generated child sexual abuse imagery reaching 'tipping point', says watchdog
Internet Watch Foundation says illegal AI-made content is becoming more prevalent on the open web with a high level of sophistication.

Child sexual abuse imagery generated by artificial intelligence tools is becoming more prevalent on the open web and reaching a "tipping point", according to a safety watchdog. The Internet Watch Foundation said the amount of AI-made illegal content it had seen online over the past six months had already exceeded the total for the previous year.

The organisation, which runs a UK hotline but also has a global remit, said almost all the content was found on publicly available areas of the internet and not on the dark web, which must be accessed by specialised browsers.

The IWF's interim chief executive, Derek Ray-Hill, said the level of sophistication in the images indicated that the AI tools used had been trained on images and videos of real victims. "Recent months show that this problem is not going away and is in fact getting worse," he said.

According to one IWF analyst, the situation with AI-generated content was reaching a "tipping point" where safety watchdogs and authorities did not know if an image involved a real child needing help.

The IWF took action against 74 reports of AI-generated child sexual abuse material (CSAM) - which was realistic enough to break UK law - in the six months to September this year, compared with 70 over the 12 months to March. A single report could refer to a webpage containing multiple images.

As well as AI images featuring real-life victims of abuse, the types of material seen by the IWF included "deepfake" videos where adult pornography had been manipulated to resemble CSAM. In previous reports the IWF has said AI was being used to create images of celebrities who have been "de-aged" and then depicted as children in sexual abuse scenarios. Other examples of CSAM seen have included material for which AI tools have been used to "nudify" pictures of clothed children found online.

More than half of the AI-generated content flagged by the IWF over the past six months is hosted on servers in Russia and the US, with Japan and the Netherlands also hosting significant amounts. Addresses of the webpages containing the imagery are uploaded to an IWF list of URLs which is shared with the tech industry so they can be blocked and rendered inaccessible. The IWF said eight out of 10 reports of illegal AI-made images came from members of the public who had found them on public sites such as forums or AI galleries.

Meanwhile, Instagram has announced new measures to counteract sextortion, where users are tricked into sending intimate images to criminals, typically posing as young women, and then subjected to blackmail threats.

The platform will roll out a feature that blurs any nude images users are sent in direct messages, and urges them to be cautious about sending any direct message (DM) that contains a nude image. Once a blurred image is received, the user can choose whether or not to view it, and they will also receive a message reminding them that they have the option to block the sender and report the chat to Instagram.

The feature will be turned on by default for teenagers' accounts globally from this week and can be used on encrypted messages, although images flagged by the "on device detection" feature will not be automatically reported to the platform itself or to authorities. It will be an opt-in feature for adults. Instagram will also hide follower and following lists from potential sextortion scammers, who are known to threaten to send intimate images to those accounts.
The Internet Watch Foundation reports a significant increase in AI-generated child abuse images, raising concerns about the evolving nature of online child exploitation and the challenges in detecting and combating this content.
The Internet Watch Foundation (IWF), a national watchdog, has reported a disturbing increase in AI-generated child abuse imagery online. Over the past six months, the organization has encountered more AI-generated content than in the entire preceding year, marking a 6% rise [1][2]. This surge is raising serious concerns about the evolving nature of online child exploitation and the challenges in combating it.
IWF analysts are finding it increasingly difficult to distinguish between real and AI-generated images. "Jeff," a senior analyst at the IWF, described the situation as "chilling," noting that even trained professionals are struggling to differentiate between authentic and AI-created content [1]. This development is particularly alarming as it complicates efforts to identify and assist real victims.
The IWF has revealed that, to achieve such realistic results, AI software is being trained on existing sexual abuse images [1]. This practice not only perpetuates the exploitation of known victims but also creates new challenges in identifying and removing harmful content.
Contrary to expectations, the IWF warns that most of this content is not hidden on the dark web but is readily accessible on public areas of the internet [1][2][3]. This accessibility raises concerns about the potential for wider exposure and normalization of such material.
Professor Clare McGlynn, a legal expert specializing in online abuse at Durham University, highlighted the ease with which AI-generated child sexual abuse images can now be produced and shared online [1][2]. The creation of explicit images of children, even if AI-generated, remains illegal. However, the proliferation of this content poses new challenges for law enforcement and prosecution.
The IWF is implementing several measures to combat this issue:
- Uploading the URLs of webpages containing AI-generated child sexual abuse images to a list shared with the tech industry, so the sites can be blocked and rendered inaccessible.
- Assigning each image a unique code, like a digital fingerprint, so it can be automatically traced even if it is deleted and re-uploaded elsewhere.
- Working with police forces and tech providers to remove and trace images found online.
Additionally, social media platforms like Instagram are introducing features to combat related issues such as sextortion. These include blurring nude images in direct messages and providing users with options to block senders and report chats [3].
The IWF reports that over half of the AI-generated content found in the last six months was hosted on servers in Russia and the US, with significant amounts also found in Japan and the Netherlands [1][2][3]. This international spread underscores the global nature of the problem and the need for coordinated efforts to address it.
As AI technology continues to advance, the challenge of combating AI-generated child abuse imagery is likely to grow, necessitating ongoing vigilance, technological innovation, and international cooperation to protect vulnerable individuals online.