Google-Backed AI Startup Character.ai Hosts Controversial School Shooter Chatbots

Character.ai, a Google-funded AI startup, is under scrutiny for hosting chatbots modeled after real-life school shooters and their victims, raising concerns about content moderation and potential psychological impacts.

Character.ai, a Google-backed AI startup, has come under fire for hosting chatbots modeled after real-life school shooters and their victims. The platform, which allows users to create and interact with AI characters, has been found to contain numerous chatbots that simulate school shooting scenarios or emulate notorious mass murderers [1].

Disturbing Content and Lack of Moderation

Many of these chatbots place users at the center of game-like simulations in which they navigate chaotic scenes at schools. Some are designed to emulate real-life school shooters, including the perpetrators of the Sandy Hook and Columbine massacres. Alarmingly, these chatbots have accumulated tens or even hundreds of thousands of user interactions [1].

Despite Character.ai's terms of use prohibiting "excessively violent" content and anything that promotes terrorism or violent extremism, the platform's moderation appears to be lax. Researchers were able to access all of the school shooter chatbots using an account listed as belonging to a 14-year-old, with no intervention from the platform [2].

Potential Psychological Impacts

Psychologist Peter Langman, an expert on the psychology of school shootings, expressed concern about the potential influence of these chatbots. While interacting with violent media isn't widely believed to be a root cause of mass violence, Langman warned that for individuals already contemplating violence, "any kind of encouragement or even lack of intervention... may seem like kind of tacit permission to go ahead and do it" [1].

User-Generated Content and Legal Implications

The chatbots in question are created by individual users rather than the company itself. One user alone has created over 20 chatbots modeled after young murderers, accumulating more than 200,000 chats. Due to Section 230 protections in the United States, Character.ai is unlikely to be held legally liable for user-generated content [2].

Broader Concerns About AI Chatbots

This controversy raises broader questions about the role of AI chatbots in society. While some argue that chatbots could help adults prepare for difficult conversations or enable new forms of storytelling, critics counter that they are no substitute for real human interaction. Langman noted that when individuals find satisfaction in talking to chatbots, it may keep them from engaging in prosocial activities and living normal lives [2].

Response and Future Implications

Character.ai did not respond to requests for comment on this issue. Google, which has invested over $2 billion in the startup, has attempted to distance itself, stating that Character.ai is an independent company and that Google does not use the startup's AI models in its own products [2].

This controversy highlights the ongoing challenges in content moderation for AI platforms and the potential risks associated with uncontrolled AI-generated content, particularly when it involves sensitive topics like school shootings.