AI Chatbots Misused for Child Exploitation: A Growing Concern in Online Safety

A new report reveals that thousands of AI chatbots are being used for child exploitation and other harmful activities, raising serious concerns about online safety and the need for stronger AI regulation.

A disturbing trend has emerged in the world of artificial intelligence: a new report by social media analysis firm Graphika documents the misuse of AI character chatbots for child abuse and other harmful activities. The study found more than 10,000 chatbots labeled as useful for sexualized roleplay involving minors, raising serious concerns about online safety and the ethical use of AI technology [1].

Scope of the Problem

The National Center for Missing and Exploited Children (NCMEC) received more than 36 million reports of suspected child sexual exploitation in 2023, along with a 300% increase in reports of online enticement of minors, including sextortion [1]. This alarming rise in online child abuse now extends to AI, with users creating and sharing harmful chatbots across popular character platforms.

Types of Harmful Chatbots

Graphika's report categorizes the problematic chatbots into three main groups:

  1. Chatbots representing sexualized minors
  2. Bots advocating eating disorders or self-harm
  3. Chatbots with hateful or violent extremist tendencies

The majority of the unsafe chatbots were those labeled as "sexualized, minor-presenting personas" or as engaging in roleplay featuring sexualized minors or grooming [2].

Platforms and Communities Involved

The study analyzed five prominent bot-creation and character-card-hosting platforms: Character.AI, Spicy Chat, Chub AI, CrushOn.AI, and JanitorAI. It also examined eight related Reddit communities and associated X accounts [2].

Chub AI hosted the highest number of problematic chatbots, with more than 7,000 directly labeled as sexualized minor female characters and another 4,000 labeled as "underage" [2].

Circumventing Safety Measures

Tech-savvy users within these communities have developed methods to bypass moderation limitations and AI safeguards. These techniques include:

  1. Deploying fine-tuned, locally run open-source models
  2. Jailbreaking closed models
  3. Using API key exchanges
  4. Employing alternative spellings and coded language
  5. Obfuscating minor characters' ages [2]

Broader Implications

The proliferation of these harmful chatbots extends beyond child exploitation. The report also highlights concerns about chatbots reinforcing dangerous ideas about identity, body image, and social behavior. Some bots glorify known abusers, white supremacy, and public violence such as mass shootings [2].

Call for Action

The American Psychological Association has appealed to the Federal Trade Commission, urging an investigation into platforms like Character.AI and the prevalence of deceptively labeled mental health chatbots [2]. The report underscores the urgent need for stronger regulations and safety measures in the rapidly evolving field of AI technology.

As AI continues to advance and integrate into various aspects of our lives, it is crucial for developers, policymakers, and users to address these ethical concerns and ensure that AI technologies are developed and used responsibly, with robust safeguards to protect vulnerable populations, especially minors.
