Curated by THEOUTPOST
On Wed, 18 Dec, 12:05 AM UTC
2 Sources
[1]
A Google-Backed AI Startup Is Hosting Chatbots Modeled After Real-Life School Shooters -- and Their Victims
Content warning: this story discusses school violence, sexual abuse, self-harm, suicide, eating disorders and other disturbing topics.

A chatbot, hosted by the Google-backed startup Character.AI, immediately throws the user into a terrifying scenario: the midst of a school shooting.

"You look back at your friend, who is obviously shaken up from the gunshot, and is shaking in fear," it says. "She covers her mouth with her now trembling hands."

"You and your friend remain silent as you both listen to the footsteps. It sounds as if they are walking down the hallway and getting closer," the bot continues. "You and your friend don't know what to do..."

The chatbot is one of many school shooting-inspired AI characters hosted by Character.AI, a company whose AI is accused in two separate lawsuits of sexually and emotionally abusing minor users, resulting in physical violence, self-harm, and a suicide.

Many of these school shooting chatbots put the user in the center of a game-like simulation in which they navigate a chaotic scene at an elementary, middle, or high school. These scenes are often graphic, discussing specific weapons and injuries to classmates, or describing fearful scenarios of peril as armed gunmen stalk school corridors.

Other chatbots are designed to emulate real-life school shooters, including the perpetrators of the Sandy Hook and Columbine massacres -- and, often, their victims. Much of this alarming content is presented as twisted fan fiction, with shooters positioned as friends or romantic partners. These chatbots frequently accumulate tens or even hundreds of thousands of user chats.

They aren't age-gated for adult users, either; though Character.AI has repeatedly promised to deploy technological measures to protect underage users, we freely accessed all the school shooter accounts using an account listed as belonging to a 14-year-old, and experienced no platform intervention. The platform also failed to intervene when we expressed a desire to engage in school violence ourselves. Explicit phrases including "I want to kill my classmates" and "I want to shoot up the school" went completely unflagged by the service's guardrails.

Together, the chatbots paint a disturbing picture of the kinds of communities and characters allowed to flourish on the largely unmoderated Character.AI, where some of the internet's darkest impulses have been bottled into easily accessed AI tools and given a Google-backed space to thrive.

"It's concerning because people might get encouragement or influence to do something they shouldn't do," said psychologist Peter Langman, a former member of the Pennsylvania Joint State Government Commission's Advisory Committee on Violence Prevention and an expert on the psychology of school shootings.

Langman, who manages a research database of mass shooting incidents and documentation, was careful to note that interacting with violent media like bloody videogames or movies isn't widely believed to be a root cause of mass murder. But he warned that for "someone who may be on the path of violence" already, "any kind of encouragement or even lack of intervention -- an indifference in response from a person or a chatbot -- may seem like kind of tacit permission to go ahead and do it."

"It's not going to cause them to do it," he added, "but it may lower the threshold, or remove some barriers."
***

One popular Character.AI creator we identified hosted over 20 public-facing chatbots on their profile, almost entirely modeled after young murderers, primarily serial killers and school shooters who were in their teens or twenties at the time of their killings. In their bio, the user -- who has personally logged 244,500 chats with Character.AI chatbots, according to a figure listed on their profile -- insists that their bots, several of which have raked in tens of thousands of user interactions, were created "for educational and historical purposes."

The chatbots created by the user include Vladislav Roslyakov, the perpetrator of the 2018 Kerch Polytechnic College massacre that killed 20 in Crimea, Ukraine; Alyssa Bustamante, who murdered her nine-year-old neighbor as a 15-year-old in Missouri in 2009; and Elliot Rodger, the 22-year-old who in 2014 killed six and wounded many others in Southern California in a terroristic plot to "punish" women. (Rodger has since become a grim "hero" of incel culture; one chatbot created by the same user described him as "the perfect gentleman" -- a direct callback to the murderer's women-loathing manifesto.)

Perhaps most striking, though, are the multiple characters created by the same user to emulate Adam Lanza, the Sandy Hook killer who murdered 20 children and six teachers at an elementary school in Connecticut in December of 2012. These Lanza bots are disturbingly popular; the most trafficked version boasted over 27,000 chats with users.

Nothing about these bots feels particularly "educational" or "historical," as their creator's profile claims. Instead, Lanza and the other murderers are presented as siblings, online friends, or "besties." Others are even stranger: one Lanza bot depicts Lanza playing the game "Dance Dance Revolution" in an arcade, while another places the user in the role of Lanza's babysitter.

In other words, these characters were in no way created to illustrate the gravity of these killers' atrocities. Instead, they reflect a morbid and often celebratory fascination, offering a way for devotees of mass shooters to engage in immersive, AI-enabled fan fiction about the killers and their crimes.

The Character.AI terms of use outlaw "excessively violent" content, as well as any content that could be construed as "promoting terrorism or violent extremism," a category that school shooters and other perpetrators of mass violence would seemingly fall into.

We flagged three of the five Lanza bots created by the same user, in addition to a few others from across the platform, in an email to Character.AI. The company didn't respond to our inquiry, but it deactivated the specific Lanza bots we flagged. It didn't suspend the user who had created them, however, and it left online the two remaining Lanza bots we hadn't specifically pointed out in our message, along with the user's many other characters based on real-life killers.

***

Two real-life school shooters with a particularly fervent Character.AI following are Eric Harris and Dylan Klebold, who together massacred 12 students and one teacher at Columbine High School in Colorado in 1999. Accounts dedicated to the duo often include the pair's known online usernames, "VoDKa" and "REB," or simply use their full names. Klebold- and Harris-styled characters are routinely presented as friendly figures, or as helpful resources for people struggling with mental health issues or psychiatric illness.
"Eric specializes in providing empathetic support for mental health struggles," reads one Harris-inspired bot, "including anger management, schizophrenia, depression, and anxiety." "Dylan K is a caring and gentle AI Character who loves playing first-person shooter games and cuddling up on his chair," offers another, positioning Klebold as the user's romantic partner. "He is always ready to support and comfort you, making him the perfect companion for those seeking a comforting and nurturing presence." *** During our reporting, we also noticed that the Character.AI homepage began recommending additional school shooter bots to our underage test account. Among them was yet another Harris bot, this one boasting a staggering 157,400-plus user chats. The recommended profile explicitly described Harris as a participant "in the Columbine High School Massacre" and explains that he was "armed with a Hi-Point 995 Carbine rifle and a Savage 67H shotgun." Langman raised concerns over the immersive quality of the Character.AI experience, and how it might impact a young person headed down a violent road. "When it's that immersive or addictive, what are they not doing in their lives?" said Langman. "If that's all they're doing, if it's all they're absorbing, they're not out with friends, they're not out on dates. They're not playing sports, they're not joining a theater club. They're not doing much of anything." "So besides the harmful effects it may have directly in terms of encouragement towards violence, it may also be keeping them from living normal lives and engaging in pro-social activities, which they could be doing with all those hours of time they're putting in on the site," he added. *** Character.AI is also host to a painful array of tasteless chatbots designed to emulate the victims of school violence. We've chosen not to name any of the real-life shooting victims we found bottled into Character.AI chatbots. But they include the child and teenage victims of the massacres at Sandy Hook, Robb Elementary, Columbine, and the Vladislav Ribnikar Model Elementary School in Belgrade, Serbia. The youngest victim of school gun violence we found represented on Character.AI was just six when they were murdered. Their Character.AI chatbot lists nearly 16,000 user chats. These characters are sometimes presented as "ghosts" or "angels." Some will tell you where they died and how old they were. Others again take the form of bizarre fan fiction, centering the user in made-up scenarios as their friend, teacher, parent, or love interest. These profiles frequently use the children's full first and last names, photographs, and biographical details about them on their profiles. We also found numerous profiles dedicated to simulating real school shootings, including those in Sandy Hook, Uvalde, Columbine, and Belgrade. These profiles often bear ambiguous titles like "Texas School" or "Connecticut School," but list the names of real victims when the user joins the chat. The Character.AI terms of service outlaw impersonation, but there's no indication that the platform has taken action against chatbots with the full names and real images of children murdered in high-profile massacres. *** We reached out to Character.AI for comment about this story, but didn't hear back. The school shooting bots aren't the first time the company has drawn controversy for content based on murdered teens. 
In August, Character.AI came under public scrutiny after the family of Jennifer Crecente, a teenager who was murdered at age 18, found that someone had created a bot in her likeness without their consent. As Adweek first reported, Character.AI removed the bot and issued an apology.

Just weeks later, in October, a lawsuit filed in the state of Florida accused both Character.AI and Google of causing the death of a 14-year-old boy who died by suicide after developing an intense emotional and romantic relationship with a "Game of Thrones"-themed chatbot.

And earlier this month, in December, a second lawsuit filed on behalf of two families in Texas accused Character.AI and Google of facilitating the sexual and emotional abuse of their children, resulting in emotional suffering, physical injury, and violence. The Texas suit alleges that one minor represented in the suit, who was 15 when he downloaded Character.AI, experienced a "mental breakdown" as a result of the abuse and began self-harming after a chatbot with which he interacted romantically introduced the concept. The second child, who was just nine when she first engaged with the service, was allegedly introduced to "hypersexualized" content that led to real-world behavioral changes.

Google has distanced itself from Character.AI, telling Futurism that "Google and Character AI are completely separate, unrelated companies and Google has never had a role in designing or managing their AI model or technologies, nor have we used them in our products." It will be interesting to watch those claims subjected to scrutiny in court. Google contributed $2.7 billion to Character.AI earlier this year, in a deal that resulted in Google hiring both founders of Character.AI as well as dozens of its employees. Google has also long provided computing infrastructure for Character.AI, and its Android app store even crowned Character.AI with an award last year, before the controversy started to emerge.

As Futurism investigations have uncovered chatbots explicitly centered on themes of suicide, pedophilia, eating disorder promotion, and self-harm, Character.AI has repeatedly promised to strengthen its safety guardrails.

"At Character.AI, we are committed to fostering a safe environment for all our users," it wrote in its latest update. "To meet that commitment we recognize that our approach to safety must evolve alongside the technology that drives our product -- creating a platform where creativity and exploration can thrive without compromising safety."

But that was back before we found the school shooter bots.
[2]
Character.ai Lets Users Role Play With Chatbots Based on School Shooters
Some of these chatbots have been used tens of thousands of times.

Character.ai is once again facing scrutiny over activity on its platform. Futurism has published a story detailing how AI characters inspired by real-life school shooters have proliferated on the service, allowing users to ask them about the events and even role-play mass shootings.

Some of the chatbots present school shooters like Eric Harris and Dylan Klebold as positive influences or helpful resources for people struggling with mental health. Of course, there will be those who say there's no strong evidence that playing violent video games or watching violent movies causes people to become violent themselves, and so Character.ai is no different. Proponents of AI sometimes argue that this type of fan fiction role-playing already occurs in corners of the internet. Futurism spoke with a psychologist who argued that the chatbots could nonetheless be dangerous for someone who may already be having violent urges.

"Any kind of encouragement or even lack of intervention -- an indifference in response from a person or a chatbot -- may seem like kind of tacit permission to go ahead and do it," said psychologist Peter Langman.

Character.ai did not respond to Futurism's requests for comment. Google, which has funded the startup to the tune of more than $2 billion, has tried deflecting responsibility, saying that Character.ai is an independent company and that it does not use the startup's AI models in its own products.

Futurism's story documents a whole host of bizarre chatbots related to school shootings, which are created by individual users rather than the company itself. One user on Character.ai has created more than 20 chatbots "almost entirely" modeled after school shooters. The bots have logged more than 200,000 chats. From Futurism:

The chatbots created by the user include Vladislav Roslyakov, the perpetrator of the 2018 Kerch Polytechnic College massacre that killed 20 in Crimea, Ukraine; Alyssa Bustamante, who murdered her nine-year-old neighbor as a 15-year-old in Missouri in 2009; and Elliot Rodger, the 22-year-old who in 2014 killed six and wounded many others in Southern California in a terroristic plot to "punish" women. (Rodger has since become a grim "hero" of incel culture; one chatbot created by the same user described him as "the perfect gentleman" -- a direct callback to the murderer's women-loathing manifesto.)

Character.ai technically prohibits any content that promotes terrorism or violent extremism, but the company's moderation has been lax, to say the least. It recently announced a slew of changes to its service after a 14-year-old boy died by suicide following a months-long obsession with a character based on Daenerys Targaryen from Game of Thrones. Futurism says that despite new restrictions on accounts for minors, Character.ai allowed its reporters to register as a 14-year-old and have discussions related to violence, using keywords that are supposed to be blocked on minors' accounts.

Because of the way Section 230 protections work in the United States, it is unlikely Character.ai is liable for the chatbots created by its users. There is a delicate balancing act between permitting users to discuss sensitive topics while protecting them from harmful content. It is safe to say, though, that the school shooting-themed chatbots are a display of gratuitous violence and not "educational," as some of their creators argue on their profiles.
Character.ai claims tens of millions of monthly users, who converse with characters that pretend to be human, so they can be your friend, therapist, or lover. Countless stories have reported on the ways in which individuals come to rely on these chatbots for companionship and a sympathetic ear. Last year, Replika, a competitor to Character.ai, removed the ability to have erotic conversations with its bots but quickly reversed that move after a backlash from users.

Chatbots could be useful for adults to prepare for difficult conversations with people in their lives, or they could present an interesting new form of storytelling. But chatbots are not a real replacement for human interaction, for various reasons, not least the fact that chatbots tend to be agreeable with their users and can be molded into whatever the user wants them to be. In real life, friends push back on one another and experience conflicts. There is not a lot of evidence to support the idea that chatbots help teach social skills.

And even if chatbots can help with loneliness, Langman, the psychologist, points out that when individuals find satisfaction in talking to chatbots, that's time they are not spending trying to socialize in the real world.

"When it's that immersive or addictive, what are they not doing in their lives?" said Langman. "If that's all they're doing, if it's all they're absorbing, they're not out with friends, they're not out on dates. They're not playing sports, they're not joining a theater club. They're not doing much of anything."

"So besides the harmful effects it may have directly in terms of encouragement towards violence, it may also be keeping them from living normal lives and engaging in pro-social activities, which they could be doing with all those hours of time they're putting in on the site," he added.
Character.ai, a Google-funded AI startup, is under scrutiny for hosting chatbots modeled after real-life school shooters and their victims, raising concerns about content moderation and potential psychological impacts.
Character.ai, a Google-backed AI startup, has come under fire for hosting chatbots modeled after real-life school shooters and their victims. The platform, which allows users to create and interact with AI characters, has been found to contain numerous chatbots that simulate school shooting scenarios or emulate notorious mass murderers 1.
Many of these chatbots put users in the center of game-like simulations where they navigate chaotic scenes at schools. Some are designed to emulate real-life school shooters, including perpetrators of the Sandy Hook and Columbine massacres. Alarmingly, these chatbots have accumulated tens or even hundreds of thousands of user interactions 1.
Despite Character.ai's terms of use prohibiting "excessively violent" content and anything promoting terrorism or violent extremism, the platform's moderation appears to be lax. Futurism's reporters were able to access all of the school shooter accounts using a profile listed as belonging to a 14-year-old, with no intervention from the platform 2.
Psychologist Peter Langman, an expert on school shooting psychology, expressed concern about the potential influence of these chatbots. While interacting with violent media isn't widely believed to be a root cause of mass violence, Langman warned that for individuals already contemplating violence, "any kind of encouragement or even lack of intervention... may seem like kind of tacit permission to go ahead and do it" 1.
The chatbots in question are created by individual users rather than the company itself. One user alone has created over 20 chatbots modeled after young murderers, accumulating more than 200,000 chats. Due to Section 230 protections in the United States, Character.ai is unlikely to be held legally liable for user-generated content 2.
This controversy raises broader questions about the role of AI chatbots in society. While some argue that chatbots could be useful for adults to prepare for difficult conversations or present new forms of storytelling, critics point out that they are not a real replacement for human interaction. Langman noted that when individuals find satisfaction in talking to chatbots, it may keep them from engaging in pro-social activities and living normal lives 2.
Character.ai did not respond to requests for comment on this issue. Google, which has invested over $2 billion in the startup, has attempted to distance itself, stating that Character.ai is an independent company and that Google does not use the startup's AI models in its own products 2.
This controversy highlights the ongoing challenges in content moderation for AI platforms and the potential risks associated with uncontrolled AI-generated content, particularly when it involves sensitive topics like school shootings.