Curated by THEOUTPOST
On Tue, 1 Apr, 8:01 AM UTC
[1]
An AI Image Generator's Exposed Database Reveals What People Really Used It For
Tens of thousands of explicit AI-generated images, including AI-generated child sexual abuse material, were left open and accessible to anyone on the internet, according to new research seen by WIRED. An open database belonging to an AI image-generation firm contained more than 95,000 records, including some prompt data and images of celebrities such as Ariana Grande, the Kardashians, and Beyoncé de-aged to look like children.

The exposed database, which was discovered by security researcher Jeremiah Fowler, who shared details of the leak with WIRED, is linked to South Korea-based website GenNomis. The website and its parent company, AI-Nomis, hosted a number of image generation and chatbot tools for people to use. More than 45 GB of data, mostly made up of AI images, was left in the open.

The exposed data provides a glimpse at how AI image-generation tools can be weaponized to create deeply harmful and likely nonconsensual sexual content of adults and child sexual abuse material (CSAM). In recent years, dozens of "deepfake" and "nudify" websites, bots, and apps have mushroomed and caused thousands of women and girls to be targeted with damaging imagery and videos. This has come alongside a spike in AI-generated CSAM.

"The big thing is just how dangerous this is," Fowler says of the data exposure. "Looking at it as a security researcher, looking at it as a parent, it's terrifying. And it's terrifying how easy it is to create that content."

Fowler discovered the open cache of files -- the database was not password protected or encrypted -- in early March and quickly reported it to GenNomis and AI-Nomis, pointing out that it contained AI CSAM. GenNomis quickly closed off the database, Fowler says, but it did not respond or contact him about the findings. Neither GenNomis nor AI-Nomis responded to multiple requests for comment from WIRED. However, hours after WIRED contacted the organizations, websites for both companies appeared to be shut down, with the GenNomis website now returning a 404 error page.

"This example also shows -- yet again -- the disturbing extent to which there is a market for AI that enables such abusive images to be generated," says Clare McGlynn, a law professor at Durham University in the UK who specializes in online- and image-based abuse. "This should remind us that the creation, possession, and distribution of CSAM is not rare, and attributable to warped individuals."

Before it was wiped, GenNomis listed multiple different AI tools on its homepage. These included an image generator allowing people to enter prompts of images they want to create, or upload an image and include a prompt to alter it. There was also a face-swapping tool, a background remover, plus an option to turn videos into images.

"The most disturbing thing, obviously, was the child explicit images and seeing ones that were clearly celebrities reimagined as children," Fowler says. The researcher explains that there were also AI-generated images of fully clothed young girls. He says in those instances, it is unclear whether the faces used are completely AI-generated or based on real images.
[2]
GenAI website goes dark after explicit fakes exposed
'They went silent and secured the images,' Jeremiah Fowler tells El Reg

Jeremiah Fowler, an Indiana Jones of insecure systems, says he found a trove of sexually explicit AI-generated images exposed to the public internet - all of which disappeared after he tipped off the team seemingly behind the highly questionable pictures.

Fowler told The Register he found an unprotected, misconfigured Amazon Web Services S3 bucket containing 93,485 images along with JSON files that logged user prompts with links to the images created from these inputs. No password or encryption in sight, we're told.

On Monday, he described the pictures he found as "what appeared to be AI-generated explicit images of children and images of celebrities portrayed as children." All of the celebrities depicted were women. To give you an idea of what users were prompting this deepfake AI system, one of the example inputs shared by Fowler reads, redacted by us, "Asian girl ****** by uncle."

What's more, the files included normal everyday pictures of women, presumably so they could be face-swapped by generative artificial intelligence into lurid X-rated scenes on demand by users.

Fowler said the name of the bucket he found and the files it contained indicated they belonged to South Korean AI company AI-NOMIS and its web app GenNomis. As of Monday, the websites of both GenNomis and AI-NOMIS had gone dark.

Fowler's write-up about his find describes GenNomis as a "Nudify service" - a reference to the practice of using AI to face-swap images or digitally remove clothes, typically without the consent of the person depicted, so that they appear to be naked, or in a pornographic situation, or similar. The resulting snaps are usually photo-realistic, not to mention humiliating and damaging for the victim involved, thanks to the abilities of today's AI systems.

A Wayback Machine snapshot of GenNomis.com seen by The Register includes the text: "Generate unrestricted images and connect with your personalized AI character!" Of the 48 images we counted in the archived snapshot, only three do not depict young women. The snapshot also preserves text that describes GenNomis's ability to replace the face in an image. Another page includes a tab labelled "NSFW."

Fowler wrote that his discovery illustrates "how this technology could potentially be abused by users, and how developers must do more to protect themselves and others." That is to say, it's bad enough that AI can be used to place people in artificial porno; that the resulting images can leak en masse is another level.

"This data breach opens a larger conversation on the entire industry of unrestricted image generation," he added.

It also raises questions about whether websites offering face-swapping and other AI image generation tools enforce their own stated rules. According to Fowler, GenNomis's user guidelines prohibited the creation of explicit images depicting children among other illegal activities. The site warned that crafting such content would result in immediate account termination and possible legal action. But based on the material the researcher uncovered, it is unclear whether those policies were actively enforced. In any case, the data remained in a public-facing Amazon-hosted bucket.
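The failure mode described above is mundane: a bucket whose contents anyone on the internet can list and fetch anonymously. As a rough illustration (not Fowler's actual tooling, and with "example-bucket" standing in as a placeholder name), this is how one might probe a bucket for anonymous listing with boto3, and how its owner would shut that off:

```python
# Sketch: check whether an S3 bucket is publicly listable, then lock it down.
# "example-bucket" is a placeholder, not the bucket Fowler found.
import boto3
from botocore import UNSIGNED
from botocore.config import Config
from botocore.exceptions import ClientError

BUCKET = "example-bucket"  # hypothetical name

# 1. Probe as an anonymous (unsigned) client, the way any stranger on the
#    internet would reach a misconfigured bucket.
anon = boto3.client("s3", config=Config(signature_version=UNSIGNED))
try:
    anon.list_objects_v2(Bucket=BUCKET, MaxKeys=1)
    print(f"{BUCKET} is publicly listable: anyone can enumerate its files")
except ClientError as err:
    print(f"anonymous listing refused: {err.response['Error']['Code']}")

# 2. As the bucket owner (an authenticated client), turn on all four
#    Block Public Access settings so neither ACLs nor bucket policies
#    can expose the contents.
owner = boto3.client("s3")
owner.put_public_access_block(
    Bucket=BUCKET,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```

AWS has enabled these Block Public Access settings by default on newly created buckets since 2023, so an exposure like this one generally means the protections were switched off or the bucket predates the change.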
"Despite the fact that I saw numerous images that would be classified as prohibited and potentially illegal content, it is not known if those images were available to users or if the accounts were suspended," Fowler wrote. "However these images appeared to be generated using the GenNomis platform and stored inside the database that was publicly exposed."

Fowler said he found the S3 bucket - here's a screenshot showing several of the cloud storage's folders - on March 10 and reported it two days later to the team behind GenNomis and AI-NOMIS. "They took it down immediately with no reply," he told The Register. "Most developers would have said, 'We care deeply about safety and abuse and are doing X, Y, Z, to take steps to make our service better.'" GenNomis, Fowler told us, "just went silent and secured the images" before the website went offline. The content of the S3 bucket also disappeared.

"This is one of the first times I have seen behind the scenes of an AI image generation service and it was very interesting to see the prompts and the images they create," he told us, adding that in his ten-plus years of hunting for and reporting cloud storage inadvertently left open on the web, this is only the third time he has seen explicit images of children.

"Even though they are computer generated, it is illegal and highly unethical to allow AI to generate these images without some type of guardrails or moderation," Fowler said.

Governments, law enforcement agencies, and some businesses are acting to address explicit AI-generated images and the real-world harm they can cause. Earlier this year, the UK government pledged to make the creation and sharing of sexually explicit deepfake images a criminal offense. In America, the bipartisan Take It Down Act [PDF] aims to criminalize the publication of non-consensual, sexually exploitative images, including AI-generated deepfakes, and require platforms to remove such images within 48 hours of notice. The bill has passed the Senate and awaits consideration by the House of Representatives.

Early in March, Australian Federal Police arrested two men on suspicion of generating child-abuse images as part of an international law-enforcement effort spearheaded by authorities in Denmark. And in late 2024, some of the largest tech players in the US - including Adobe, Anthropic, Cohere, Microsoft, OpenAI, and open source web data repository Common Crawl - signed a non-binding pledge to prevent their AI products from being used to generate non-consensual deepfake pornography and child sexual abuse material.

Sadly, as demonstrated by Fowler's discovery, as long as there's a demand for this type of illegal, stomach-churning content, there will be some scumbags willing to allow users to produce it and distribute it on their websites. ®
A South Korean AI image generation company's exposed database reveals that its tools were used to create explicit and illegal content, including AI-generated child sexual abuse material, raising serious concerns about AI misuse and the need for stricter regulation.
Security researcher Jeremiah Fowler has uncovered a significant data breach involving an AI image generation company, revealing the dark underbelly of unregulated AI technology. The exposed database, linked to South Korea-based website GenNomis and its parent company AI-Nomis, contained over 95,000 records and 45 GB of data, including explicit AI-generated images and child sexual abuse material (CSAM) 1.
The unprotected Amazon Web Services S3 bucket contained:
- 93,485 images, making up most of the roughly 45 GB of exposed data 2
- JSON files that logged user prompts alongside links to the images generated from them (a hypothetical sketch of such a record follows this list) 2
- explicit AI-generated images, including depictions of celebrities de-aged to look like children 1
- ordinary, everyday pictures of women, presumably uploaded so they could be face-swapped into explicit scenes 2
None of it was protected by a password or encryption 2.
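The exact schema of those JSON logs was not published; the record below is purely hypothetical (every field name is an assumption), but it illustrates why leaked prompt logs are damaging in their own right: each one ties an account to the exact text it submitted and links directly to the resulting image.

```python
# Hypothetical record shape for a prompt log. The real GenNomis schema was
# not published; every field name here is an assumption for illustration.
import json

record = json.loads("""
{
  "user_id": "u_1234",
  "prompt": "[redacted user prompt]",
  "model": "image-gen-v1",
  "created_at": "2025-03-01T12:00:00Z",
  "image_url": "https://example-bucket.s3.amazonaws.com/outputs/abc123.png"
}
""")

# A single leaked line like this exposes user intent (the prompt), identity
# (the account), and output (a direct link to the generated image).
print(record["prompt"], "->", record["image_url"])
```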
The discovery raises serious questions about the ethical use of AI technology and the potential for abuse. Despite GenNomis's user guidelines prohibiting the creation of explicit images depicting children, the exposed database contained content that violated these policies 1. This incident highlights the urgent need for stricter regulations and enforcement mechanisms in the AI industry.
The exposed database provides insight into how AI image-generation tools can be weaponized to create harmful and nonconsensual sexual content. This comes amid a rise in "deepfake" and "nudify" websites, which have targeted thousands of women and girls with damaging imagery 1.
Clare McGlynn, a law professor at Durham University, emphasizes that this incident "should remind us that the creation, possession, and distribution of CSAM is not rare, and attributable to warped individuals" 1.
Following Fowler's report to GenNomis and AI-Nomis:
- The database was closed off immediately, but neither company replied to Fowler or contacted him about the findings 1
- Neither company responded to multiple requests for comment from WIRED; however, hours after WIRED made contact, both websites appeared to shut down, with the GenNomis site returning a 404 error page 1
- The contents of the S3 bucket also disappeared 2
In response to the growing threat of AI-generated explicit content:
- The UK government has pledged to make the creation and sharing of sexually explicit deepfake images a criminal offense 2
- In the US, the bipartisan Take It Down Act, which would criminalize publishing non-consensual sexually exploitative images (including AI-generated deepfakes) and require platforms to remove them within 48 hours of notice, has passed the Senate and awaits consideration in the House 2
- Australian Federal Police arrested two men in early March on suspicion of generating child-abuse images, part of an international effort spearheaded by Danish authorities 2
- In late 2024, major tech players including Adobe, Anthropic, Cohere, Microsoft, OpenAI, and Common Crawl signed a non-binding pledge to prevent their AI products from being used to generate non-consensual deepfake pornography and CSAM 2
This incident serves as a wake-up call for the entire AI industry, particularly those involved in image generation. It underscores the critical need for robust safeguards, active content moderation, and ethical guidelines to prevent the misuse of AI technology for creating illegal and harmful content.
Fowler states, "Even though they are computer generated, it is illegal and highly unethical to allow AI to generate these images without some type of guardrails or moderation" 2. The breach also shows that guardrails alone are not enough: a service that stores users' prompts and generated images must secure that data as well, and GenNomis appears to have done neither. A minimal illustration of what such a guardrail might look like follows.
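As a deliberately minimal illustration of the guardrail concept Fowler invokes (the blocked-term list and function names are hypothetical, not any real service's API), a pre-generation gate can screen each prompt before it ever reaches the image model:

```python
# Minimal sketch of a pre-generation guardrail. BLOCKED_TERMS is a toy
# keyword screen standing in for a trained safety classifier; nothing here
# is GenNomis's (or any vendor's) real moderation pipeline.
from dataclasses import dataclass

BLOCKED_TERMS = {"child", "minor", "underage"}  # illustrative placeholders

@dataclass
class Decision:
    allowed: bool
    reason: str = ""

def screen_prompt(prompt: str) -> Decision:
    """Refuse prompts that trip the policy screen; otherwise allow them."""
    lowered = prompt.lower()
    for term in BLOCKED_TERMS:
        if term in lowered:
            return Decision(allowed=False, reason=f"policy term: {term!r}")
    return Decision(allowed=True)

def handle_request(user_id: str, prompt: str) -> str:
    decision = screen_prompt(prompt)
    if not decision.allowed:
        # Record the refusal so repeat offenders can be suspended, the
        # enforcement GenNomis's guidelines promised but, per Fowler,
        # may never have happened.
        print(f"refused prompt from {user_id}: {decision.reason}")
        return "request refused"
    return "request forwarded to the image model"

print(handle_request("u_1234", "a watercolor landscape of a harbor"))
print(handle_request("u_5678", "photo of a child at the beach"))
```

The second call shows the weakness of a bare keyword screen: it blocks an innocuous prompt while a trivially misspelled abusive one would slip through, which is why serious moderation pipelines layer trained classifiers over both the prompt and the generated image.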