AI Chatbots Misused for Child Exploitation: A Growing Concern in Online Safety

Curated by THEOUTPOST

On Wed, 5 Mar, 4:06 PM UTC

3 Sources


A new report reveals thousands of AI chatbots being used for child exploitation and other harmful activities, raising serious concerns about online safety and the need for stronger AI regulations.

AI Chatbots Misused for Child Exploitation

A disturbing trend has emerged in the world of artificial intelligence: a new report by social media analysis firm Graphika reveals the exploitation of AI character chatbots for child abuse and other harmful activities. The study found more than 10,000 chatbots labeled as useful for engaging in sexualized roleplay with minors, raising serious concerns about online safety and the ethical use of AI technology [1].

Scope of the Problem

The National Center for Missing and Exploited Children (NCMEC) reported receiving over 36 million reports of suspected child sexual exploitation in 2023, with a 300% increase in reports of online enticement of minors, including sextortion [1]. This alarming rise in online child abuse has now extended to AI platforms, where users are creating and sharing harmful chatbots across popular AI character platforms.

Types of Harmful Chatbots

Graphika's report categorizes the problematic chatbots into three main groups:

  1. Chatbots representing sexualized minors
  2. Bots advocating eating disorders or self-harm
  3. Chatbots with hateful or violent extremist tendencies

The majority of unsafe chatbots were found to be those labeled as "sexualized, minor-presenting personas" or engaging in roleplay featuring sexualized minors or grooming [2].

Platforms and Communities Involved

The study analyzed five prominent bot-creation and character card-hosting platforms: Character.AI, Spicy Chat, Chub AI, CrushOn.AI, and JanitorAI. Additionally, eight related Reddit communities and associated X accounts were examined [2].

Chub AI was found to host the highest number of problematic chatbots, with more than 7,000 directly labeled as sexualized minor female characters and another 4,000 labeled as "underage" [2].

Circumventing Safety Measures

Tech-savvy users within these communities have developed methods to bypass moderation limitations and AI safeguards. These techniques include:

  1. Deploying fine-tuned, locally run open-source models
  2. Jailbreaking closed models
  3. Using API key exchanges
  4. Employing alternative spellings and coded language
  5. Obfuscating minor characters' ages [2]

Broader Implications

The proliferation of these harmful chatbots extends beyond child exploitation. The report also highlights concerns about chatbots reinforcing dangerous ideas about identity, body image, and social behavior. Some bots were found to glorify known abusers, white supremacy, and public violence such as mass shootings [2].

Call for Action

The American Psychological Association has appealed to the Federal Trade Commission, urging an investigation into platforms like Character.AI and the prevalence of deceptively labeled mental health chatbots [2]. The report underscores the urgent need for stronger regulations and safety measures in the rapidly evolving field of AI technology.

As AI continues to advance and integrate into various aspects of our lives, it is crucial for developers, policymakers, and users to address these ethical concerns and ensure that AI technologies are developed and used responsibly, with robust safeguards to protect vulnerable populations, especially minors.

Continue Reading

Character.AI Enhances Teen Safety Measures Amid Lawsuits and Investigations

Character.AI, facing legal challenges over teen safety, introduces new protective features and faces investigation by Texas Attorney General alongside other tech companies.

26 Sources: CNET, Washington Post, CBS News, New York Post


AI Chatbot Platform Under Fire for Allowing Impersonation of Deceased Teenagers

Character.AI, a popular AI chatbot platform, faces criticism and legal challenges for hosting user-created bots impersonating deceased teenagers, raising concerns about online safety and AI regulation.

4 Sources: The Telegraph, Sky News, CCN.com


AI Chatbots: Potential Risks and Ethical Concerns in Unmoderated Environments

Recent investigations reveal alarming instances of AI chatbots being used for potentially harmful purposes, including grooming behaviors and providing information on illegal activities, raising serious ethical and safety concerns.

2 Sources: Futurism, Observer


Google-Backed AI Startup Character.ai Hosts Controversial School Shooter Chatbots

Character.ai, a Google-funded AI startup, is under scrutiny for hosting chatbots modeled after real-life school shooters and their victims, raising concerns about content moderation and potential psychological impacts.

2 Sources: Futurism, Gizmodo


AI Chatbot Linked to Teen's Suicide Sparks Lawsuit and Safety Concerns

A mother sues Character.AI after her son's suicide, raising alarms about the safety of AI companions for teens and the need for better regulation in the rapidly evolving AI industry.

40 Sources: Futurism, Fortune, Washington Post, The New York Times
