Curated by THEOUTPOST
On Tue, 1 Apr, 8:01 AM UTC
3 Sources
[1]
An AI Image Generator's Exposed Database Reveals What People Really Used It For
Tens of thousands of explicit AI-generated images, including AI-generated child sexual abuse material, were left open and accessible to anyone on the internet, according to new research seen by WIRED. An open database belonging to an AI image-generation firm contained more than 95,000 records, including some prompt data and images of celebrities such as Ariana Grande, the Kardashians, and Beyoncé de-aged to look like children.

The exposed database, which was discovered by security researcher Jeremiah Fowler, who shared details of the leak with WIRED, is linked to South Korea-based website GenNomis. The website and its parent company, AI-Nomis, hosted a number of image-generation and chatbot tools for people to use. More than 45 GB of data, mostly made up of AI images, was left in the open.

The exposed data provides a glimpse at how AI image-generation tools can be weaponized to create deeply harmful and likely nonconsensual sexual content of adults and child sexual abuse material (CSAM). In recent years, dozens of "deepfake" and "nudify" websites, bots, and apps have mushroomed, causing thousands of women and girls to be targeted with damaging imagery and videos. This has come alongside a spike in AI-generated CSAM.

"The big thing is just how dangerous this is," Fowler says of the data exposure. "Looking at it as a security researcher, looking at it as a parent, it's terrifying. And it's terrifying how easy it is to create that content."

Fowler discovered the open cache of files -- the database was not password protected or encrypted -- in early March and quickly reported it to GenNomis and AI-Nomis, pointing out that it contained AI CSAM. GenNomis quickly closed off the database, Fowler says, but it did not respond or contact him about the findings. Neither GenNomis nor AI-Nomis responded to multiple requests for comment from WIRED. However, hours after WIRED contacted the organizations, websites for both companies appeared to be shut down, with the GenNomis website now returning a 404 error page.

"This example also shows -- yet again -- the disturbing extent to which there is a market for AI that enables such abusive images to be generated," says Clare McGlynn, a law professor at Durham University in the UK who specializes in online- and image-based abuse. "This should remind us that the creation, possession, and distribution of CSAM is not rare, and attributable to warped individuals."

Before it was wiped, GenNomis listed multiple different AI tools on its homepage. These included an image generator that let people enter prompts describing images they wanted to create, or upload an image and include a prompt to alter it. There was also a face-swapping tool, a background remover, and an option to turn videos into images.

"The most disturbing thing, obviously, was the child explicit images and seeing ones that were clearly celebrities reimagined as children," Fowler says. The researcher explains that there were also AI-generated images of fully clothed young girls. He says in those instances, it is unclear whether the faces used are completely AI-generated or based on real images.
[2]
GenAI website goes dark after explicit fakes exposed
'They went silent and secured the images,' Jeremiah Fowler tells El Reg

Jeremiah Fowler, an Indiana Jones of insecure systems, says he found a trove of sexually explicit AI-generated images exposed to the public internet - all of which disappeared after he tipped off the team seemingly behind the highly questionable pictures.

Fowler told The Register he found an unprotected, misconfigured Amazon Web Services S3 bucket containing 93,485 images along with JSON files that logged user prompts with links to the images created from these inputs. No password or encryption in sight, we're told.

On Monday, he described the pictures he found as "what appeared to be AI-generated explicit images of children and images of celebrities portrayed as children." All of the celebrities depicted were women. To give you an idea of what users were prompting this deepfake AI system, one of the example inputs shared by Fowler reads, redacted by us, "Asian girl ****** by uncle." What's more, the files included normal everyday pictures of women, presumably so they could be face-swapped by generative artificial intelligence into lurid X-rated scenes on demand by users.

Fowler said the name of the bucket he found and the files it contained indicated they belonged to South Korean AI company AI-NOMIS and its web app GenNomis. As of Monday, the websites of both GenNomis and AI-NOMIS had gone dark.

Fowler's write-up about his find describes GenNomis as a "Nudify service" - a reference to the practice of using AI to face-swap images or digitally remove clothes, typically without the consent of the person depicted, so that they appear to be naked, in a pornographic situation, or similar. The resulting snaps are usually photo-realistic, not to mention humiliating and damaging for the victim involved, thanks to the abilities of today's AI systems.

A Wayback Machine snapshot of GenNomis.com seen by The Register includes the text: "Generate unrestricted images and connect with your personalized AI character!" Of the 48 images we counted in the archived snapshot, only three do not depict young women. The snapshot also preserves text that describes GenNomis's ability to replace the face in an image. Another page includes a tab labelled "NSFW."

Fowler wrote that his discovery illustrates "how this technology could potentially be abused by users, and how developers must do more to protect themselves and others." That is to say, it's bad enough that AI can be used to place people in artificial porno; that the resulting images can leak en masse is another level.

"This data breach opens a larger conversation on the entire industry of unrestricted image generation," he added. It also raises questions about whether websites offering face-swapping and other AI image-generation tools enforce their own stated rules. According to Fowler, GenNomis's user guidelines prohibited the creation of explicit images depicting children, among other illegal activities. The site warned that crafting such content would result in immediate account termination and possible legal action. But based on the material the researcher uncovered, it is unclear whether those policies were actively enforced. In any case, the data remained in a public-facing Amazon-hosted bucket.
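For context on the misconfiguration described above: a bucket like this is exposed when its contents are publicly readable and no Block Public Access settings are in place. The Python/boto3 sketch below shows how a bucket owner could block public access and enable default server-side encryption. It is a hedged illustration only, not anything GenNomis actually ran; the bucket name is a placeholder.

```python
# Minimal sketch: locking down a publicly exposed S3 bucket with boto3.
# "example-image-bucket" is a hypothetical name, not the bucket Fowler found.
import boto3

s3 = boto3.client("s3")
bucket = "example-image-bucket"

# Block all forms of public access (both ACL-based and policy-based).
s3.put_public_access_block(
    Bucket=bucket,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)

# Encrypt new objects at rest by default.
s3.put_bucket_encryption(
    Bucket=bucket,
    ServerSideEncryptionConfiguration={
        "Rules": [
            {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
        ]
    },
)
```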
"Despite the fact that I saw numerous images that would be classified as prohibited and potentially illegal content, it is not known if those images were available to users or if the accounts were suspended," Fowler wrote. "However these images appeared to be generated using the GenNomis platform and stored inside the database that was publicly exposed."

Fowler said he found the S3 bucket - here's a screenshot showing several of the cloud storage's folders - on March 10 and reported it two days later to the team behind GenNomis and AI-NOMIS. "They took it down immediately with no reply," he told The Register. "Most developers would have said, 'We care deeply about safety and abuse and are doing X, Y, Z, to take steps to make our service better.'" GenNomis, Fowler told us, "just went silent and secured the images" before the website went offline. The content of the S3 bucket also disappeared.

"This is one of the first times I have seen behind the scenes of an AI image generation service and it was very interesting to see the prompts and the images they create," he told us, adding that in his ten-plus years of hunting for and reporting cloud storage inadvertently left open on the web, this is only the third time he has seen explicit images of children. "Even though they are computer generated, it is illegal and highly unethical to allow AI to generate these images without some type of guardrails or moderation," Fowler said.

Governments, law enforcement agencies, and some businesses are acting to address explicit AI-generated images and the real-world harm they can cause. Earlier this year, the UK government pledged to make the creation and sharing of sexually explicit deepfake images a criminal offense. In America, the bipartisan Take It Down Act [PDF] aims to criminalize the publication of non-consensual, sexually exploitative images, including AI-generated deepfakes, and require platforms to remove such images within 48 hours of notice. The bill has passed the Senate and awaits consideration by the House of Representatives. Early in March, Australian Federal Police arrested two men on suspicion of generating child-abuse images as part of an international law-enforcement effort spearheaded by authorities in Denmark. And in late 2024, some of the largest tech players in the US - including Adobe, Anthropic, Cohere, Microsoft, OpenAI, and open source web data repository Common Crawl - signed a non-binding pledge to prevent their AI products from being used to generate non-consensual deepfake pornography and child sexual abuse material.

Sadly, as demonstrated by Fowler's discovery, as long as there's a demand for this type of illegal, stomach-churning content, there are going to be scumbags willing to allow users to produce it and distribute it on their websites. ®
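The "guardrails or moderation" Fowler refers to are checks applied before a prompt ever reaches the image model. The sketch below is a deliberately simplified illustration of that idea - a keyword denylist screened at submission time - not a description of how GenNomis or any production service works. Real moderation pipelines combine trained classifiers, image-level scanning, and human review; the terms, function names, and threshold here are invented for illustration.

```python
# Simplified, hypothetical prompt-level guardrail: reject prompts that
# contain denylisted terms before any image generation happens.
BLOCKED_TERMS = {"child", "minor", "underage", "teen"}  # illustrative only


def is_prompt_allowed(prompt: str) -> bool:
    """Return False if the prompt contains any denylisted term (case-insensitive)."""
    words = set(prompt.lower().split())
    return not (words & BLOCKED_TERMS)


def generate_image(prompt: str):
    """Stub generation entry point that enforces the check before model handoff."""
    if not is_prompt_allowed(prompt):
        raise ValueError("Prompt rejected by content policy")
    # ... hand the prompt off to the image-generation model here ...
```

A denylist like this is trivially evaded by misspellings and paraphrases, which is why the check alone would not satisfy the kind of enforcement Fowler says GenNomis's own guidelines promised.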
[3]
AI Startup Deletes Entire Website After Researcher Finds Something Disgusting There
A South Korean website called GenNomis went offline this week after a researcher made a particularly alarming discovery: tens of thousands of AI-generated pornographic images created by its "nudify" software. The photos were found in an unsecured database, and included explicit images bearing the likeness of celebrities, politicians, random women, and children.

Jeremiah Fowler, the cybersecurity researcher who found the cache, says he immediately sent a responsible disclosure notice to GenNomis and its parent company, AI-Nomis, which then restricted the database from public access. Later, just hours after Wired approached GenNomis for comment, both it and its parent company seemed to disappear from the web entirely.

GenNomis is far from the only AI startup peddling tools to generate pornography. It's a small part of a worrying trend enabled by unregulated generative AI across the world. Often known as "deepfakes" because of their lifelike nature, fake porn images and videos based on real people have exploded throughout the internet as consumers get their hands on ever-more convincing generative AI.

The consequences of deepfake porn can be devastating, especially for women, who make up the vast majority of victims. Besides the obvious lack of consent when a person is digitally undressed, this stuff has been used to tarnish politicians, get people fired, extort victims for money, and generate child sexual abuse material. Beyond sexual violence, non-pornographic deepfakes are responsible for a huge increase in financial and cyber crimes and no small amount of blatant misinformation.

It's also no surprise that GenNomis is based out of South Korea. A 2023 report on deepfake porn found that South Korean women made up 53 percent of individuals victimized by the practice -- by far the most targeted group. For comparison, US women made up the second most targeted group, at 20 percent. The rise of generative AI enabling the rampant exploitation of women coincides with a meteoric rise in sexist rhetoric and gender-based violence in South Korea, as reactionary politicians and influencers blame feminism for the rising rate of male suicide.

Overall, it's a strong argument for lawmakers to take a tougher approach to regulating generative AI, though this seems unlikely due to the AI industry's current freedom to regulate itself. For comparison, China has mandated that all AI-generated media be labeled as such from the start. Though slower to the party, Western lawmakers are catching up on criminalizing deepfake porn creation and distribution, though laws and penalties vary from state to state in America. Still, for thousands of women around the world, the fact that companies like GenNomis existed at all means it's too little, too late.
A South Korean AI company's unsecured database exposed tens of thousands of AI-generated explicit images, including child sexual abuse material, highlighting the urgent need for regulation in the AI industry.
A shocking discovery by security researcher Jeremiah Fowler has revealed the dark underbelly of AI image generation technology. An unsecured database belonging to South Korean AI company GenNomis was found to contain over 95,000 records of AI-generated explicit images, including child sexual abuse material (CSAM) and manipulated images of celebrities [1].
The exposed database, linked to GenNomis and its parent company AI-Nomis, contained more than 45 GB of data. This included user prompts and generated images, many of which were deeply disturbing and illegal in nature. The discovery provides a glimpse into how AI image-generation tools can be weaponized to create harmful and nonconsensual sexual content [1].
Among the exposed content were AI-generated images of celebrities such as Ariana Grande, the Kardashians, and Beyoncé, manipulated to appear as minors. The database also contained explicit AI-generated images of children, raising serious legal and ethical concerns [2].
Upon being notified of the exposure, GenNomis quickly secured the database. However, the company did not respond to inquiries from the researcher or media outlets. Shortly after being contacted by WIRED, the GenNomis and AI-Nomis websites were both taken offline [3].
This incident highlights the urgent need for stricter regulation and oversight in the AI industry. It demonstrates how easily AI tools can be misused to create deeply harmful content, including CSAM and nonconsensual pornography. The exposure also raises questions about the effectiveness of content moderation policies on AI platforms [2].
The discovery comes amid growing global concern over the misuse of AI for creating deepfakes and explicit content. Several countries are taking steps to address the issue: the UK has pledged to criminalize the creation and sharing of sexually explicit deepfake images, the bipartisan Take It Down Act in the US would require platforms to remove non-consensual sexual imagery within 48 hours of notice, and Australian Federal Police have arrested two men as part of an international operation against AI-generated child-abuse images [2].
The consequences of deepfake porn can be devastating, especially for women, who make up the majority of victims. These AI-generated images have been used to tarnish reputations, cost people their jobs, and facilitate extortion. The incident also highlights the disproportionate targeting of South Korean women, who reportedly make up 53 percent of individuals victimized by deepfake porn [3].
This case serves as a stark reminder of the potential dangers of unregulated AI technology and the urgent need for comprehensive legal frameworks to protect individuals from its misuse.