AI Image Generator's Exposed Database Reveals Widespread Misuse for Explicit Content

A South Korean AI company's unsecured database exposed tens of thousands of AI-generated explicit images, including child sexual abuse material, highlighting the urgent need for regulation in the AI industry.

AI Image Generator Exposes Disturbing Misuse

A shocking discovery by security researcher Jeremiah Fowler has revealed the dark underbelly of AI image-generation technology. An unsecured database belonging to South Korean AI company GenNomis was found to contain over 95,000 records of AI-generated explicit images, including child sexual abuse material (CSAM) and manipulated images of celebrities [1].

Extent of the Exposure

The exposed database, linked to GenNomis and its parent company AI-Nomis, contained more than 45 GB of data, including user prompts and generated images, many of them deeply disturbing and illegal. The discovery provides a glimpse into how AI image-generation tools can be weaponized to create harmful, nonconsensual sexual content [1].

Celebrity Exploitation and CSAM

Among the exposed content were AI-generated images of celebrities such as Ariana Grande, the Kardashians, and Beyoncé, manipulated to appear as minors. The database also contained explicit AI-generated images of children, raising serious legal and ethical concerns [2].

Immediate Aftermath

Upon being notified of the exposure, GenNomis quickly secured the database. However, the company did not respond to inquiries from researchers or media outlets. Shortly after being contacted by WIRED, both the GenNomis and AI-Nomis websites were taken offline [3].

Broader Implications

This incident highlights the urgent need for stricter regulation and oversight in the AI industry. It demonstrates how easily AI tools can be misused to create deeply harmful content, including CSAM and nonconsensual pornography. The exposure also raises questions about the effectiveness of content-moderation policies on AI platforms [2].

Global Context and Legal Responses

The discovery comes amid a growing global concern over the misuse of AI for creating deepfakes and explicit content. Several countries are taking steps to address this issue:

  1. The UK government has pledged to criminalize the creation and sharing of sexually explicit deepfake images.
  2. In the US, the bipartisan Take It Down Act aims to criminalize the publication of nonconsensual, sexually exploitative images, including AI-generated deepfakes [2].
  3. Australian Federal Police recently arrested two men suspected of generating child-abuse images as part of an international law-enforcement effort [2].

Impact on Victims

The consequences of deepfake porn can be devastating, especially for women, who make up the majority of victims. AI-generated images have been used to tarnish reputations, cost victims their jobs, and facilitate extortion. The incident also highlights the disproportionate targeting of South Korean women, who reportedly make up 53 percent of individuals victimized by deepfake porn [3].

This case serves as a stark reminder of the potential dangers of unregulated AI technology and the urgent need for comprehensive legal frameworks to protect individuals from its misuse.