Curated by THEOUTPOST
On Tue, 29 Apr, 12:02 AM UTC
4 Sources
[1]
UK regulator wants to ban apps that can make deepfake nude images of children
The UK's Children's Commissioner is calling for a ban on AI deepfake apps that create nude or sexual images of children, according to a new report. It states that such "nudification" apps have become so prevalent that many girls have stopped posting photos on social media. And though creating or uploading CSAM (child sexual abuse material) is illegal, the apps used to create deepfake nude images remain legal.

"Children have told me they are frightened by the very idea of this technology even being available, let alone used. They fear that anyone -- a stranger, a classmate, or even a friend -- could use a smartphone as a way of manipulating them by creating a naked image using these bespoke apps," said Children's Commissioner Dame Rachel de Souza. "There is no positive reason for these [apps] to exist."

De Souza pointed out that nudification AI apps are widely available on mainstream platforms, including the largest search engines and app stores. At the same time, they "disproportionately target girls and young women, and many tools appear only to work on female bodies." She added that young people are demanding action against the misuse of such tools.

To that end, de Souza is calling on the government to introduce a total ban on apps that use artificial intelligence to generate sexually explicit deepfakes. She also wants the government to create legal responsibilities for GenAI app developers to identify the risks their products pose to children, establish effective systems to remove CSAM from the internet and recognize deepfake sexual abuse as a form of violence against women and girls.

The UK has already taken steps to ban such technology by introducing new criminal offenses for producing or sharing sexually explicit deepfakes. It also announced its intention to make it a criminal offense to take intimate photos or video without consent. However, the Children's Commissioner is focused more specifically on the harm such technology can do to young people, noting that there is a link between deepfake abuse and suicidal ideation and PTSD, as The Guardian pointed out.

"Even before any controversy came out, I could already tell what it was going to be used for, and it was not going to be good things. I could already tell it was gonna be a technological wonder that's going to be abused," said one 16-year-old girl surveyed by the Commissioner.
[2]
Ban AI apps creating naked images of children, says children's commissioner
A government spokesperson said child sexual abuse material was illegal and that it had introduced further offences for creating, possessing or distributing AI tools designed to create such content.

Deepfakes are videos, pictures or audio clips made with AI to look or sound real.

In a report published on Monday, Dame Rachel said the technology is disproportionately targeting girls and young women, with many bespoke apps appearing to work only on female bodies. She said she also found children were changing their behaviour online to avoid becoming victims of nudification apps.

"They fear that anyone - a stranger, a classmate, or even a friend - could use a smartphone as a way of manipulating them by creating a naked image using these bespoke apps," she said.

"Girls have told me they now actively avoid posting images or engaging online to reduce the risk of being targeted by this technology.

"We cannot sit back and allow these bespoke AI apps to have such a dangerous hold over children's lives."

Dame Rachel also called for the government to impose legal responsibilities on developers of GenAI tools to identify the risks their products pose to children, to establish effective systems to remove sexually explicit deepfake images of children from the internet, and to recognise deepfake sexual abuse as a form of violence against women and girls.

Paul Whiteman, general secretary of school leaders' union NAHT, said members shared the commissioner's concerns. "This is an area that urgently needs to be reviewed as the technology risks outpacing the law and education around it," he told PA.

It is illegal in England and Wales under the Online Safety Act to share or threaten to share explicit deepfake images.

The government announced laws in February to tackle the threat of child sexual abuse images being generated by AI, including making it illegal to possess, create or distribute AI tools designed to create such material. It said at the time that the Internet Watch Foundation - a UK-based charity partly funded by tech firms - had confirmed 245 reports of AI-generated child sexual abuse in 2024, compared with 51 in 2023 - a 380% increase.

Media regulator Ofcom published the final version of its Children's Code on Friday, which puts legal requirements on platforms hosting pornography, or content encouraging self-harm, suicide or eating disorders, to take more action to prevent access by children. Websites must introduce beefed-up age checks or face big fines, the regulator said. Dame Rachel has criticised the code, saying it prioritises the "business interests of technology companies over children's safety".

A government spokesperson said creating, possessing or distributing child sexual abuse material, including AI-generated images, is "abhorrent and illegal".

"Under the Online Safety Act platforms of all sizes now have to remove this kind of content, or they could face significant fines," they added.

"The UK is the first country in the world to introduce further AI child sexual abuse offences - making it illegal to possess, create or distribute AI tools designed to generate heinous child sex abuse material."
[3]
UK Children's Commissioner Calls for Ban on AI 'Nudification' Apps
The Children's Commissioner, a United Kingdom regulator tasked with promoting and protecting the rights of children in the U.K., issued a report calling for the immediate ban of artificial intelligence (AI) apps that enable "deepfake sexual abuse of children."

AI-generated "deepfakes," synthetic images that realistically depict a real person in a fabricated scenario, have become an increasing problem in the age of AI. Many people have been victimized by AI deepfakes, including celebrities and, more concerning, children. Various deepfake scandals have rocked schools, including a situation last fall in Pennsylvania that ultimately forced a school to close temporarily.

The Children's Commissioner's report cites children who told regulators they are afraid of becoming deepfake victims, and some kids are deeply worried that someone, whether a classmate, friend, or total stranger, could create sexualized deepfake images of them.

"Girls have told me they now actively avoid posting images or engaging online to reduce the risk of being targeted by this technology," says Children's Commissioner Dame Rachel de Souza. "We cannot sit back and allow these bespoke AI apps to have such a dangerous hold over children's lives."

"The online world is revolutionary and quickly evolving, but there is no positive reason for these particular apps to exist. They have no place in our society. Tools using deepfake technology to create naked images of children should not be legal and I'm calling on the government to take decisive action to ban them, instead of allowing them to go unchecked with extreme real-world consequences," de Souza continues.

De Souza is calling on the U.K. government to immediately introduce a complete ban on all apps that use AI to generate sexually explicit "deepfake" images of children. While the U.K. already criminalized the creation of sexual deepfakes in 2024, and made it illegal to possess or distribute some sexually explicit deepfake material earlier this year, de Souza notes that offending "nudification" apps remain readily available on major search engines and app platforms.

"While it is illegal to create or share a sexually explicit image of a child, the technology enabling them remains legal -- and it is no longer confined to corners of the dark web but now accessible through large social media platforms and search engines," de Souza says.

The commissioner warns that the impacts of being a deepfake victim are significant, especially for children. The report claims that there is a link between deepfake abuse and suicidal ideation. She says that urgent action is required, including banning nudification apps altogether, creating legal responsibilities for app developers, ensuring that it is much easier to remove sexually explicit deepfake images of children from the internet, and formally recognizing deepfake sexual abuse material as a form of violence against women and girls. The Children's Commissioner wants the law to take seriously the damage that deepfakes can do to children.
[4]
Commissioner calls for ban on apps that make deepfake nude images of children
Children's commissioner for England says 'there is no positive reason for these particular apps to exist'

Artificial intelligence "nudification" apps that create deepfake sexual images of children should be immediately banned, amid growing fears among teenage girls that they could fall victim, the children's commissioner for England is warning.

Girls said they had stopped posting images of themselves on social media for fear that generative AI tools could be used to digitally remove their clothes or sexualise them, according to the commissioner's report on the tools, which draws on children's experiences. Although it is illegal to create or share a sexually explicit image of a child, the technology enabling them remains legal, the report noted.

"Children have told me they are frightened by the very idea of this technology even being available, let alone used. They fear that anyone - a stranger, a classmate, or even a friend - could use a smartphone as a way of manipulating them by creating a naked image using these bespoke apps," the commissioner, Dame Rachel de Souza, said.

"The online world is revolutionary and quickly evolving, but there is no positive reason for these particular apps to exist. They have no place in our society. Tools using deepfake technology to create naked images of children should not be legal and I'm calling on the government to take decisive action to ban them, instead of allowing them to go unchecked with extreme real-world consequences."

De Souza urged the government to introduce an AI bill that would require developers of GenAI tools to address the risks their products pose, and to roll out effective systems to remove sexually explicit deepfake images of children. This should be underpinned by policymaking that recognises deepfake sexual abuse as a form of violence against women and girls, she suggested.

In the meantime, the report urges Ofcom to ensure that age verification on nudification apps is properly enforced and that social media platforms prevent sexually explicit deepfake tools being promoted to children, in line with the Online Safety Act.

The report cited a 2025 survey by Girlguiding, which found that 26% of respondents aged 13 to 18 had seen a sexually explicit deepfake image of a celebrity, a friend, a teacher, or themselves.

Many AI tools appear to work only on female bodies, which the report warned is fuelling a growing culture of misogyny. One 18-year-old girl told the commissioner: "The narrative of Andrew Tate and influencers like that ... backed by a quite violent and becoming more influential porn industry is making it seem that AI is something that you can use so that you can always pressure people into going out with you or doing sexual acts with you."

The report noted that there is a link between deepfake abuse and suicidal ideation and PTSD, for example in the case of Mia Janin, who died by suicide in March 2021.

De Souza wrote in the report that the new technology "confronts children with concepts they cannot yet understand", and is changing "at such scale and speed that it can be overwhelming to try and get a grip on the danger they present".

Lawyers told the Guardian that they were seeing this reflected in an increase in cases of teenage boys being arrested for sexual offences because they did not understand the consequences of what they were doing, for example experimenting with deepfakes, being in a WhatsApp chat where explicit images are circulating, or looking up porn featuring children their own age.
Danielle Reece-Greenhalgh, a partner at the law firm Corker Binning who specialises in sexual offences and possession of indecent images, said the law was "trying to keep up with the explosion in accessible deepfake technology", which was already posing "a huge problem for law enforcement trying to identify and protect victims of abuse". She noted that app bans were "likely to stir up debate around internet freedoms", and could have a "disproportionate impact on young men" who were playing around with AI software unaware of the consequences.

Reece-Greenhalgh said that although the criminal justice system tried to take a "commonsense view and avoid criminalising young people for crimes that resemble normal teenage behaviour ... that might previously have happened behind a bike shed", arrests could be traumatic experiences and have consequences at school or in the community, as well as longer-term repercussions such as needing to be declared on an ESTA form to enter the US or showing up on an enhanced DBS check.

Matt Hardcastle, a partner at Kingsley Napley, said there was a "minefield for young people online" around accessing unlawful sexual and violent content. He said many parents were unaware how easy it was for children to "access things that take them into a dark place quickly", for example nudification apps.

"They're looking at it through the eyes of a child. They're not able to see that what they're doing is potentially illegal, as well as quite harmful to you and other people as well," he said. "Children's brains are still developing. They have a completely different approach to risk-taking."

Marcus Johnstone, a criminal solicitor specialising in sexual offences, said he was working with an "ever-increasing number of young people" who were drawn into these crimes. "Often parents had no idea what was going on. They're usually young men, very rarely young females, locked away in their bedrooms and their parents think they're gaming," he said. "These offences didn't exist before the internet, now most sex crimes are committed online. It's created a forum for children to become criminals."

A government spokesperson said: "Creating, possessing or distributing child sexual abuse material, including AI-generated images, is abhorrent and illegal. Under the Online Safety Act platforms of all sizes now have to remove this kind of content, or they could face significant fines.

"The UK is the first country in the world to introduce further AI child sexual abuse offences, making it illegal to possess, create or distribute AI tools designed to generate heinous child sexual abuse material."
The UK Children's Commissioner is urging the government to ban AI apps that create deepfake nude images of children, citing fears among young people and the potential for abuse. The call comes as part of a broader push for stricter regulation of AI technology to protect minors.
Dame Rachel de Souza, the UK's Children's Commissioner, has issued a report calling for an immediate ban on artificial intelligence (AI) apps that enable the creation of deepfake sexual images of children [1]. This move comes in response to growing concerns about the prevalence and potential misuse of such technology, particularly its impact on young people.
The report highlights that many girls have stopped posting photos on social media due to fears of becoming victims of these "nudification" apps [2]. Children surveyed expressed anxiety about the possibility of classmates, friends, or strangers using smartphones to manipulate their images [3].
A 16-year-old girl commented, "Even before any controversy came out, I could already tell what it was going to be used for, and it was not going to be good things" [1].
While creating or sharing sexually explicit images of children is illegal in the UK, the apps used to create deepfake nude images remain legal [1]. The UK has recently taken steps to address this issue:
- Introducing new criminal offences for producing or sharing sexually explicit deepfakes [1]
- Announcing laws in February making it illegal to possess, create or distribute AI tools designed to create child sexual abuse material [2]
- Signalling its intention to make it a criminal offence to take intimate photos or video without consent [1]
Dame Rachel de Souza is urging the government to:
- Introduce a total ban on apps that use AI to generate sexually explicit deepfakes [1]
- Create legal responsibilities for GenAI app developers to identify the risks their products pose to children [1]
- Establish effective systems to remove sexually explicit deepfake images of children from the internet [1]
- Recognise deepfake sexual abuse as a form of violence against women and girls [1]
The report notes that these AI tools disproportionately target girls and young women, with many apps appearing to work only on female bodies [2]. This trend is contributing to a growing culture of misogyny [4].
There are also concerns about the psychological impact of deepfake abuse, with links to suicidal ideation and PTSD being reported [4]. The Girlguiding survey cited in the report found that 26% of respondents aged 13-18 had seen sexually explicit deepfake images of celebrities, friends, teachers, or themselves [4].
Lawyers have reported an increase in cases involving teenage boys arrested for sexual offenses related to deepfakes and other online activities [4]. This raises questions about how to balance law enforcement with the need to avoid criminalizing normal teenage behavior in the digital age.
Danielle Reece-Greenhalgh, a partner at Corker Binning, noted that app bans could "stir up debate around internet freedoms" and potentially have a "disproportionate impact on young men" experimenting with AI software [4].
A government spokesperson stated that creating, possessing, or distributing child sexual abuse material, including AI-generated images, is "abhorrent and illegal" [2]. They also highlighted that under the Online Safety Act, platforms of all sizes are now required to remove such content or face significant fines [2].