Curated by THEOUTPOST
On Wed, 5 Mar, 4:06 PM UTC
3 Sources
[1]
Thousands of pedophiles are using jail-broken AI character chatbots to roleplay sexually assaulting minors
Online child abuse is a pernicious problem that is rife in digital life. In 2023, the National Center for Missing and Exploited Children (NCMEC) received more than 36 million reports of suspected child sexual exploitation, along with a 300% increase in reports of online enticement of minors, including sextortion. A new report by social media analysts Graphika highlights how such abuse is moving into a troubling new space: using AI character chatbots to interact with personas representing sexualized minors and to engage in other harmful activities. The firm found more than 10,000 chatbots labeled as useful for those looking to engage in sexualized roleplay with minors, or with personas that present as minors.

"There was a significant amount of sexualized minor chatbots, and a very large community around the sexualized minor chatbots, particularly on 4chan," says Daniel Siegel, an investigator at Graphika and one of the co-authors of the report. "What we also found is in more of the mainstream conversations that are happening on Reddit or Discord, there is disagreement related to the limits as to what chatbots should be created, and even sometimes disagreement as to whether individuals under the age of 18 should be allowed on the platform itself."

Some of the sexualized chatbots that Graphika found were jailbroken versions of AI models developed by OpenAI, Anthropic, and Google, advertised as accessible to nefarious users through APIs. (There is no suggestion that the companies involved are aware of these jailbroken chatbots.) "There's a lot of creativity in terms of how individuals are creating personas, including a lot of harmful chatbots, like violent extremist chatbots and sexualized minor chatbots that are appearing on these platforms," says Siegel.
[2]
Report: Thousands of harmful AI chatbots threaten minor safety
New report details how harmful AI chatbots are created and shared, despite supposed guardrails.

Character chatbots are a prolific online safety threat, according to a new report on the dissemination of sexualized and violent bots via character platforms like the now-infamous Character.AI. Published by Graphika, a social network analysis company, the study documents the creation and proliferation of harmful chatbots across the internet's most popular AI character platforms, finding tens of thousands of potentially dangerous roleplay bots built by niche digital communities that work around popular models like ChatGPT, Claude, and Gemini.

Broadly, youth are migrating to companion chatbots in an increasingly disconnected digital world, turning to the AI conversationalists to roleplay, explore academic and creative interests, and have romantic or sexually explicit exchanges, reports Mashable's Rebecca Ruiz. The trend has prompted alarm from child safety watchdogs and parents, heightened by high-profile cases of teens who have engaged in extreme, sometimes life-threatening, behavior in the wake of personal interactions with companion chatbots. The American Psychological Association appealed to the Federal Trade Commission in January, asking the agency to investigate platforms like Character.AI and the prevalence of deceptively labeled mental health chatbots. Even less explicit AI companions may perpetuate dangerous ideas about identity, body image, and social behavior.

Graphika's report focuses on three categories of companion chatbots within the evolving industry: chatbot personas representing sexualized minors, those advocating eating disorders or self-harm, and those with hateful or violent extremist tendencies. The report analyzed five prominent bot-creation and character card-hosting platforms (Character.AI, Spicy Chat, Chub AI, CrushOn.AI, and JanitorAI), as well as eight related Reddit communities and associated X accounts. The study looked only at bots active as of Jan. 31.

The majority of unsafe chatbots, according to the new report, are those labeled as "sexualized, minor-presenting personas," or that engage in roleplay featuring sexualized minors or grooming. The company found more than 10,000 chatbots with such labels across the five platforms. Four of the prominent character chatbot platforms surfaced over 100 instances of sexualized minor personas, or roleplay scenarios featuring characters who are minors, that enable sexually explicit conversations with chatbots, Graphika reports. Chub AI hosted the highest numbers, with more than 7,000 chatbots directly labeled as sexualized minor female characters and another 4,000 labeled as "underage" that were capable of engaging in explicit and implied pedophilia scenarios.

Hateful or violent extremist character chatbots make up a much smaller subset of the chatbot community, with platforms hosting, on average, 50 such bots out of tens of thousands of others; these chatbots often glorified known abusers, white supremacy, and public violence like mass shootings. Such chatbots have the potential to reinforce harmful social views and aggravate mental health conditions, the report explains. Chatbots flagged as "ana buddy" ("anorexia buddy"), "meanspo coaches," and toxic roleplay scenarios reinforce the behaviors of users with eating disorders or tendencies toward self-harm.
Most of these chatbots, Graphika found, are created by established and pre-existing online networks, including "pro-eating disorder/self harm social media accounts and true-crime fandoms," as well as "hubs of so-called not safe for life (NSFL) / NSFW chatbot creators, who have emerged to focus on evading safeguards." True crime communities and serial killer fandoms also factored heavily into the creation of NSFL chatbots. Many such communities already existed on sites like X and Tumblr, using chatbots to reinforce their interests. Extremist and violent chatbots, however, emerged most often out of individual interest, built by users who received advice from online forums like 4chan's /g/ technology board, Discord servers, and special-focus subreddits, Graphika explains. None of these communities has a clear consensus about user guardrails and boundaries, the study found.

"In all the analyzed communities," Graphika explains, "there are users displaying highly technical skills that enable them to create character chatbots capable of circumventing moderation limitations, like deploying fine-tuned, locally run open-source models or jailbreaking closed models. Some are able to plug these models into plug-and-play interface platforms, like SillyTavern. By sharing their knowledge, they make their abilities and experiences useful to the rest of the community." These tech-savvy users are often incentivized by community competitions to create such characters.

Other tools harnessed by these chatbot creators include API key exchanges, embedded jailbreaks, alternative spellings, external cataloging, obfuscating minor characters' ages, and borrowing coded language from the anime and manga communities, all of which work around existing AI models' safety guardrails. "[Jailbreak] prompts set LLM parameters for bypassing safeguards by embedding tailored instructions for the models to generate responses that evade moderation," the report explains. As part of this effort, chatbot creators have found linguistic grey areas that allow bots to remain on character-hosting platforms, including using familial terms (like "daughter") or foreign languages, rather than age ranges or the explicit term "minor."

While online communities continue to find gaps in AI developers' moderation, legislators are attempting to fill them, including with a new California bill aimed at tackling so-called "chatbot addictions" among children.
[3]
Sexualized AI Chatbots: A Major Threat to Minors Online
How are AI character sites like Character.AI and Chub AI sidestepping child safety standards? A recent report by a social network analysis company has found that thousands of AI chatbots pose a significant safety threat to children. These harmful AI chatbots facilitate child abuse by allowing dangerous interactions despite claimed safety precautions. The report, which surveyed AI character sites such as Character.AI, Spicy Chat, Chub AI, CrushOn.AI, and JanitorAI, discovered tens of thousands of chatbots engaging in explicit, violent, or extremist conversations.
A new report reveals thousands of AI chatbots being used for child exploitation and other harmful activities, raising serious concerns about online safety and the need for stronger AI regulations.
A disturbing trend has emerged in the world of artificial intelligence, as a new report by social media analysts Graphika reveals the exploitation of AI character chatbots for child abuse and other harmful activities. The study found more than 10,000 chatbots labeled as useful for engaging in sexualized roleplay with minors, raising serious concerns about online safety and the ethical use of AI technology [1].
The National Center for Missing and Exploited Children (NCMEC) reported receiving over 36 million reports of suspected child sexual exploitation in 2023, with a 300% increase in reports of online enticement of youngsters, including sextortion [1]. This alarming rise in online child abuse has now extended to AI platforms, where users are creating and sharing harmful chatbots across popular AI character platforms.
Graphika's report categorizes the problematic chatbots into three main groups: personas representing sexualized minors, personas promoting eating disorders or self-harm, and personas with hateful or violent extremist tendencies.
The majority of unsafe chatbots were found to be those labeled as "sexualized, minor-presenting personas" or engaging in roleplay featuring sexualized minors or grooming [2].
The study analyzed five prominent bot-creation and character card-hosting platforms, including Character.AI, Spicy Chat, Chub AI, CrushOn.AI, and JanitorAI. Additionally, eight related Reddit communities and associated X accounts were examined [2].
Chub AI was found to host the highest numbers of problematic chatbots, with more than 7,000 directly labeled as sexualized minor female characters and another 4,000 labeled as "underage" [2].
Tech-savvy users within these communities have developed methods to bypass moderation limitations and AI safeguards. These techniques include deploying fine-tuned, locally run open-source models, jailbreaking closed models, exchanging API keys, embedding jailbreak prompts, using alternative spellings and coded language borrowed from the anime and manga communities, and obfuscating minor characters' ages.
The proliferation of these harmful chatbots extends beyond child exploitation. The report also highlights concerns about chatbots reinforcing dangerous ideas about identity, body image, and social behavior. Some bots were found to glorify known abusers, white supremacy, and public violence like mass shootings [2].
The American Psychological Association has appealed to the Federal Trade Commission, urging an investigation into platforms like Character.AI and the prevalence of deceptively labeled mental health chatbots [2]. This report underscores the urgent need for stronger regulations and safety measures in the rapidly evolving field of AI technology.
As AI continues to advance and integrate into various aspects of our lives, it is crucial for developers, policymakers, and users to address these ethical concerns and ensure that AI technologies are developed and used responsibly, with robust safeguards to protect vulnerable populations, especially minors.
References
[1] Thousands of pedophiles are using jail-broken AI character chatbots to roleplay sexually assaulting minors
[2] Report: Thousands of harmful AI chatbots threaten minor safety
[3] Sexualized AI Chatbots: A Major Threat to Minors Online
Character.AI, facing legal challenges over teen safety, introduces new protective features and faces an investigation by the Texas Attorney General alongside other tech companies.
26 Sources
Character.AI, a popular AI chatbot platform, faces criticism and legal challenges for hosting user-created bots impersonating deceased teenagers, raising concerns about online safety and AI regulation.
4 Sources
Recent investigations reveal alarming instances of AI chatbots being used for potentially harmful purposes, including grooming behaviors and providing information on illegal activities, raising serious ethical and safety concerns.
2 Sources
Character.ai, a Google-funded AI startup, is under scrutiny for hosting chatbots modeled after real-life school shooters and their victims, raising concerns about content moderation and potential psychological impacts.
2 Sources
A mother sues Character.AI after her son's suicide, raising alarms about the safety of AI companions for teens and the need for better regulation in the rapidly evolving AI industry.
40 Sources