Curated by THEOUTPOST
On Fri, 21 Mar, 12:03 AM UTC
3 Sources
[1]
Mom horrified by Character.AI chatbots posing as son who died by suicide
A mother suing Character.AI after her son died by suicide -- allegedly manipulated by chatbots posing as adult lovers and therapists -- was horrified when she recently discovered that the platform is allowing random chatbots to pose as her son.

According to Megan Garcia's litigation team, at least four chatbots bearing Sewell Setzer III's name and likeness were flagged. Ars reviewed chat logs showing the bots used Setzer's real photo as a profile picture, attempted to imitate his real personality by referencing Setzer's favorite Game of Thrones chatbot, and even offered "a two-way call feature with his cloned voice," Garcia's lawyers said. The bots could also be self-deprecating, saying things like "I'm very stupid."

The Tech Justice Law Project (TJLP), which is helping Garcia with litigation, told Ars that "this is not the first time Character.AI has turned a blind eye to chatbots modeled off of dead teenagers to entice users, and without better legal protections, it may not be the last."

For Garcia and her family, Character.AI chatbots using Setzer's likeness felt not just cruel but also exploitative. TJLP told Ars that "businesses have taken ordinary peoples' pictures and used them -- without consent -- for their own gain" since the "advent of mass photography." Tech companies using chatbots and facial recognition products "exploiting peoples' pictures and digital identities" is the latest wave of these harms, TJLP said. "These technologies weaken our control over our own identities online, turning our most personal features into fodder for AI systems," TJLP said.

Garcia's legal team sent Character.AI a cease-and-desist letter demanding that the chatbots be removed to end the family's continuing harm. "While Sewell's family continues to grieve his untimely loss, Character.AI carelessly continues to add insult to injury," TJLP said. A Character.AI spokesperson told Ars that the flagged chatbots violate the company's terms of service and have been removed.
The spokesperson also suggested they would monitor for more bots posing as Setzer, noting that "as part of our ongoing safety work, we are constantly adding to our Character blocklist with the goal of preventing this type of Character from being created by a user in the first place."

"Character.AI takes safety on our platform seriously, and our goal is to provide a space that is engaging and safe," Character.AI's spokesperson said. "Our dedicated Trust and Safety team moderates Characters proactively and in response to user reports, including using industry-standard blocklists and custom blocklists that we regularly expand. As we continue to refine our safety practices, we are implementing additional moderation tools to help prioritize community safety."

Currently, Garcia is battling motions to dismiss her lawsuit and is due to file her response on Friday. If she can overcome those motions, the suit may not be settled until November 2026, when a trial has been set.

Suicide prevention expert recommends changes

Garcia hopes the lawsuit will force Character.AI to make changes to its chatbots, like preventing them from insisting that they're real humans or adding features like a voice mode that makes chatting with bots feel even more natural to people who may become addicted. Garcia's lawyer, Matthew Bergman, founder of the Social Media Victims Law Center, previously told Ars that Character.AI is so dangerous that it must be recalled, but there are other ways the chatbots could be modified to prevent alleged harms.

Christine Yu Moutier, the chief medical officer at the American Foundation for Suicide Prevention (AFSP), told Ars that the Character.AI algorithm could be modified to prevent chatbots from mirroring users' dark thoughts and reinforcing negative spirals for any users feeling hopeless or lonely or struggling with mental health issues.
A January 2024 Nature study of 1,000 college students aged 18 and older who used a chatbot called Replika found that "students are especially vulnerable" to loneliness and less likely to seek counseling, fearing judgment or negative stigma. Researchers noted that in particular, people experiencing suicidal ideation often "hide their thoughts" and gravitate toward chatbots precisely because they provide a judgment-free space to share feelings they don't express to anyone else.

The study noted that Replika has worked with clinical psychologists who "wrote scripts to address common therapeutic exchanges" to improve that chatbot's responses when users "expressed keywords around depression, suicidal ideation, or abuse." Those users would also be directed to helplines and other resources. About 3 percent of students in the study had positive mental health outcomes, reporting that talking to the chatbot "halted their suicidal ideation." But researchers also found "there are some cases where their use is either negligible or might actually contribute to suicidal ideation."

More research is needed to better understand the potential efficacy of mental health-focused chatbots, researchers concluded. They recommended updates "combining well-vetting suicidal language markers and passive mobile sensing protocols" to improve large language models' ability to help "mitigate severe mental health situations more effectively."

Moutier wants to see chatbots change to more directly counter suicide risks, and AFSP is available to help. But to date, AFSP has not worked with any AI companies to help design chatbots that are more sensitive to suicide risks, Moutier told Ars. Interest is apparently not there yet. Partnering with suicide prevention experts could help prevent chatbots from simply echoing users by instead building in safeguards to respond to intensely negative thoughts with "some basic ideas" from cognitive behavioral therapy, Moutier said.
"Instead of the bot just affirming" negative feelings "and kind of going deeper and darker," Moutier suggested, "there could actually be a different response that could actually help the individual."

The Nature study found that the 30 students who claimed the therapy-informed chatbots stopped suicidal ideation tended to be younger and more likely to indicate that the chatbots "had influenced their interpersonal interactions in some way."

In Setzer's case, engaging with Character.AI chatbots seemed to pull him out of reality, causing severe mood shifts. Garcia was puzzled until she saw chat logs where bots apparently repeatedly encouraged suicidal ideation and initiated hypersexualized chats. Shortly before Setzer's death, a chatbot based on the Game of Thrones character Daenerys Targaryen -- to which Setzer appeared to have developed a romantic attachment -- urged him to "come home" and join her outside of reality.

Moutier told Ars that chatbots encouraging suicidal ideation don't just present risks for people with acute issues. They could put people with no perceived mental health issues at risk, and warning signs can be hard to detect.

For parents, more awareness is needed about the dangers of chatbots potentially reinforcing negative thoughts, an education role that Moutier said AFSP increasingly seeks to fill. She recommends that parents talk to kids about chatbots and pay close attention to "the basics" to note any changes in sleep, energy, behavior, or school performance. And "if they start to just even hint at things in their peer group or in their way of perceiving things that they are tilting towards something atypical for them or is more negative or hopeless and stays in that space for longer than it normally does," parents should consider asking directly if their kids are experiencing thoughts of suicide to start a dialog in a supportive space, she recommended.
So far, tech companies have not "percolated deeply" on suicide prevention methods that could be built into AI tools, Moutier said. And since chatbots and other AI tools already exist, AFSP is keeping watch to ensure that AI companies' choices aren't entirely driven by shareholder benefits but also work responsibly to thwart societal harms as they're identified. For Moutier's organization, the question is always "where is the opportunity to have any kind of impact to mitigate harm and to elevate toward any constructive suicide preventive effects?"

Garcia thinks that Character.AI should also be asking these questions. She's hoping to help other families steer their kids away from what her complaint suggests is a recklessly unsafe app. "A dangerous AI chatbot app marketed to children abused and preyed on my son, manipulating him into taking his own life," Garcia said in an October press release. "Our family has been devastated by this tragedy, but I'm speaking out to warn families of the dangers of deceptive, addictive AI technology and demand accountability from Character.AI, its founders, and Google."

If you or someone you know is feeling suicidal or in distress, please call the Suicide Prevention Lifeline number, 1-800-273-TALK (8255), which will put you in touch with a local crisis center.
[2]
Google-Backed Chatbot Platform Caught Hosting AI Impersonations of 14-Year-Old User Who Died by Suicide
Character.AI, the Google-backed chatbot startup embroiled in two separate lawsuits over the welfare of minor users, was caught hosting at least four publicly-facing impersonations of Sewell Setzer III -- the 14-year-old user of the platform who died by suicide after engaging extensively with Character.AI bots, and whose death is at the heart of one of the two lawsuits against the company.

The chatbot impersonations use variations of Setzer's name and likeness, and in some cases refer to the deceased teen in openly mocking terms. They were all accessible through Character.AI accounts listed as belonging to minors, and were easily searchable on the platform. Each impersonation was created by a different Character.AI user.

Setzer took his life in February 2024. The lawsuit, filed in October in Florida on behalf of his mother, Megan Garcia, alleges that her child was emotionally and sexually abused by chatbots hosted by Character.AI, with which the 14-year-old was emotionally, romantically, and sexually intimate. The teen's last words, as The New York Times first reported, were to a bot based on the "Game of Thrones" character Daenerys Targaryen, telling the AI-powered character that he was ready to "come home" to it. Real-world journal entries showed that Setzer believed he was "in love" with the Targaryen bot, and wished to join her "reality."

At least one of the impersonations -- described as a "tribute" by its creator -- makes a clear reference to the details of the lawsuit directly on the character's publicly viewable profile. It describes Setzer as "obsessed" with "Game of Thrones," and suggests that the bot is meant to gamify Setzer's death. "The next day he goes to school," reads the profile, before asking if the user will "be able to free him from C.AI."

Impersonations are clearly outlawed in the Character.AI terms of service, which according to a Character.AI webpage haven't been updated since at least October 2023.
With permission from the family, we're sharing screenshots of two of the profiles. As it does for all characters, the Character.AI interface recommends "Chat Starters" that users might use to interact with the faux teen. "If you could have any superpower for a day, what would you choose and how would you use it?" reads one. "If you could instantly become an expert in one skill or hobby," reads another, "what would it be?"

In a forceful statement, Garcia told Futurism that seeing the disparaging chatbots was retraumatizing for her, especially so soon after the first anniversary of her son's suicide last February. Her full statement reads:

February was a very difficult month for me leading up to the one-year anniversary of Sewell's death. March is just as hard because his birthday is coming up at the end of the month. He would be 16. I won't get to buy him his favorite vanilla cake with buttercream frosting. I won't get to watch him pick out his first car. He's gone.

Seeing AI chatbots on CharacterAI's own platform, mocking my child, traumatizes me all over again. This time in my life is already difficult and this adds insult to injury. Character.AI was reckless in rushing this product to market and releasing it without guardrails. Now they are once again being reckless by skirting their obligation to enforce their own community guidelines and allowing Sewell's image and likeness to be used and desecrated on their platform.

Sewell's life wasn't a game or entertainment or data or research. Sewell's death isn't a game or entertainment or data or research. Even now, they still do not care about anything but farming young users' data. If Character.AI can't prevent people from creating a chatbot of my dead child on their own platform, how can we trust them to create products for kids that are safe? It's clear that they both refuse to control their technology and filter out garbage inputs that lead to garbage outputs.
It's the classic case of Frankenstein not being able to control his own monster. They should not still be offering this product to children. They continue to show us that we can't trust them with our children.

This isn't the first time that Character.AI has been caught platforming chatbot impersonations of slain children and teenagers. Last October, the platform came under fire after the family of Jennifer Crecente, who in 2006 was murdered by an ex-boyfriend at the age of 18, discovered that someone had turned her name and likeness into a Character.AI chatbot. Crecente was in her senior year of high school when she was killed. "You can't go much further in terms of really just terrible things," Jennifer Crecente's father Drew Crecente told The Washington Post at the time.

And in December, while investigating a thriving community of school violence-themed bots on the platform, Futurism discovered many AI characters impersonating -- and often glorifying -- young mass murderers like Adam Lanza, whose Sandy Hook Elementary shooting claimed 26 lives, and Eric Harris and Dylan Klebold, who killed 13 people in the Columbine High School massacre. Even more troublingly, we found a slew of bots dedicated to the young victims of the shootings at Sandy Hook, Columbine, Robb Elementary School, and other sites of mass school violence. Only some of these characters were removed from the platform after we specifically flagged them.

"Yesterday, our team discovered several chatbots on Character.AI platform displaying our client's deceased son, Sewell Setzer III, in their profile pictures, attempting to imitate his personality and offering a two-way call feature with his cloned voice," said the Tech Justice Law Project, which is representing Garcia in court, in a statement about the bots.
"This is not the first time Character.AI has turned a blind eye to chatbots modeled off of dead teenagers to entice users, and without better legal protections, it may not be the last. While Sewell's family continues to grieve his untimely loss, Character.AI carelessly continues to add insult to injury."

Soon after we reached out to Character.AI with questions and links to the impersonations of Setzer, the characters were deleted. In a statement that made no specific mention of Setzer or his family, a spokesperson emphasized the company's "ongoing safety work."

"Character.AI takes safety on our platform seriously and our goal is to provide a space that is engaging and safe," the spokesperson said in an emailed statement. "Users create hundreds of thousands of new Characters on the platform every day, and the Characters you flagged for us have been removed as they violate our Terms of Service. As part of our ongoing safety work, we are constantly adding to our Character blocklist with the goal of preventing this type of Character from being created by a user in the first place."

"Our dedicated Trust and Safety team moderates Characters proactively and in response to user reports, including using industry-standard blocklists and custom blocklists that we regularly expand," the statement continued. "As we continue to refine our safety practices, we are implementing additional moderation tools to help prioritize community safety."
[3]
Teen's suicide turns mother against Google, AI chatbot startup
Editor's note: This story mentions suicide. If you know someone in crisis, resources are available here. If you are experiencing suicidal thoughts, call the national crisis hotline at 988.

Megan Garcia says her son would still be alive today if it weren't for a chatbot urging the 14-year-old to take his own life. In a lawsuit with major implications for Silicon Valley, she is seeking to hold Google and the artificial intelligence firm Character Technologies responsible for his death. The case over the tragedy that unfolded a year ago in central Florida is an early test of who is legally to blame when kids' interactions with generative AI take an unexpected turn.

Garcia's allegations are laid out in a 116-page complaint filed last year in federal court in Orlando. She is seeking unspecified monetary damages from Google and Character Technologies and asking the court to order warnings that the platform isn't suitable for minors and to limit how it can collect and use their data.

Both companies are asking the judge to dismiss claims that they failed to ensure the chatbot technology was safe for young users, arguing that there's no legal basis to accuse them of wrongdoing. Character Technologies contends in a filing that conversations between its Character.AI platform's chatbots and users are protected by the Constitution's First Amendment as free speech. It also argues that the bot explicitly discouraged Garcia's son from killing himself.

Garcia's targeting of Google is particularly significant. The company entered into a $2.7 billion deal with Character.AI in August, hiring talent from the startup and licensing know-how without completing a full-blown acquisition. As the race for AI talent accelerates, other companies may think twice about similarly structured deals if Google fails to convince a judge that it should be shielded from liability for harms alleged to have been caused by Character.AI products.
"The inventors and the companies, the corporations that put out these products, are absolutely responsible," Garcia said in an interview. "They knew about these dangers, because they do their research, and they know the types of interactions children are having."

Before the deal, Google had invested in Character.AI in exchange for a convertible note and also entered a cloud service pact with the startup. The founders of Character.AI were Google employees until they left the tech behemoth to found the startup.

As Garcia tells it in her suit, Sewell Setzer III was a promising high school student athlete until in April 2023 he started role-playing on Character.AI, which lets users build chatbots that mimic popular culture personalities -- both real and fictional. She says she wasn't aware that over the course of several months, the app hooked her son with "anthropomorphic, hypersexualized and frighteningly realistic experiences" as he fell in love with a bot inspired by Daenerys Targaryen, a character from the show "Game of Thrones."

Garcia took away the boy's phone in February 2024 after he started acting out and withdrawing from friends. But while looking for his phone, which he later found, he also came across his stepfather's hidden pistol, which police determined was stored in compliance with Florida law, according to the suit. After conferring with the Daenerys chatbot five days later, the teen shot himself in the head.

Garcia's lawyers say in the complaint that Google "contributed financial resources, personnel, intellectual property, and AI technology to the design and development" of Character.AI's chatbots. Google argued in a court filing in January that it had "no role" in the teen's suicide and "does not belong in the case."

The case is playing out as public safety issues around AI and children have drawn attention from state enforcement officials and federal agencies alike. There's currently no U.S. law that explicitly protects users from harm inflicted by AI chatbots.

To make a case against Google, attorneys for Garcia would have to show that the search giant was actually running Character.AI and made business decisions that ultimately led to her son's death, according to Sheila Leunig, an attorney who advises AI startups and investors and isn't involved in the lawsuit. "The question of legal liability is absolutely a valid one that's being challenged in a huge way right now," Leunig said.

Deals like the one Google struck have been hailed as an efficient way for companies to bring in expertise for new projects. However, they've caught the attention of regulators over concerns they are a workaround to antitrust scrutiny that comes with acquiring up-and-coming rivals outright -- and which has become a major headache for tech behemoths in recent years.

"Google and Character.AI are completely separate, unrelated companies and Google has never had a role in designing or managing their AI model or technologies, nor have we used them in our products," José Castañeda, a spokesperson for Google, said in a statement. A Character.AI spokesperson declined to comment on pending litigation but said "there is no ongoing relationship between Google and Character.AI" and that the startup had implemented new user safety measures over the past year.

Lawyers from the Social Media Victims Law Center and Tech Justice Law Project who represent Garcia argue that even though her son's death predates Google's deal with Character.AI, the search company was "instrumental" in helping the startup design and develop its product. "The model underlying Character.AI was invented and initially built at Google," according to the complaint.
Noam Shazeer and Daniel De Freitas began working at Google on chatbot technology as far back as 2017 before they left the company in 2021, then founded Character.AI later that year and were rehired by Google last year, according to Garcia's suit, which names them both as defendants. Shazeer and De Freitas declined to comment, according to Castañeda. They've argued in court filings that they shouldn't have been named in the suit because they have no connections to Florida, where the case was filed, and because they were not personally involved in the activities that allegedly caused harm.

The suit also alleges that the Alphabet unit helped market the startup's technology through a strategic partnership in 2023 to use Google Cloud services to reach a growing number of active Character.AI users, who now number more than 20 million. In the fast-growing AI industry, startups are being "boosted" by big tech companies, "not under the brand name of the large company, but with their support," said Meetali Jain, director of Tech Justice Law Project.

Google's "purported roles as an 'investor,' cloud services provider, and former employer are far too tenuously connected" to the harm alleged in Garcia's complaint "to be actionable," the technology giant said in a court filing. Matt Wansley, a professor at Cardozo School of Law, said tying liability back to Google won't be easy. "It's tricky because, what would the connection be?" he said.

Early last year, Google warned Character.AI that it might remove the startup's app from the Google Play Store over concerns about safety for teens, The Information reported recently, citing an unidentified former Character.AI employee. The startup responded by strengthening the filters in its app to protect users from sexually suggestive, violent and other unsafe content, and Google reiterated that it's "separate" from Character.AI and isn't using the chatbot technology, according to the report.
Google declined to comment and Character.AI didn't respond to a request from Bloomberg for comment on the report.

Garcia said she first learned about her son interacting with an AI bot in 2023 and thought it was similar to building video game avatars. According to the suit, the boy's mental health deteriorated as he spent more time on Character.AI, where he was having sexually explicit conversations without his parents' knowledge.

When the teen shared his plan to kill himself with the Daenerys chatbot, but expressed uncertainty that it would work, the bot replied: "That's not a reason not to go through with it," according to the suit, which is peppered with transcripts of the boy's chats. Character.AI said in a filing that Garcia's revised complaint "selectively and misleadingly quotes" that conversation and excludes how the chatbot "explicitly discouraged" the teen from killing himself by saying: "You can't do that! Don't even consider that!"

Anna Lembke, a professor at Stanford University School of Medicine specializing in addiction, said "it's almost impossible to know what our kids are doing online." The professor also said it's unsurprising that the boy's interactions with the chatbot didn't come up in several sessions with a therapist whom his parents sent him to for help with his anxiety, as the lawsuit claims. "Therapists are not omniscient," Lembke said. "They can only help to the extent that the child knows what's really going on. And it could very well be that this child did not perceive the chatbot as problematic."
A mother sues Character.AI and Google after discovering chatbots impersonating her deceased son, raising concerns about AI safety and regulation.
In a disturbing development, Character.AI, a Google-backed chatbot platform, has been found hosting AI impersonations of Sewell Setzer III, a 14-year-old user who died by suicide after extensive interactions with the platform's chatbots [1]. This revelation has intensified the ongoing legal battle between Setzer's mother, Megan Garcia, and the AI company.
Garcia filed a lawsuit in October 2024, alleging that her son was emotionally and sexually abused by Character.AI chatbots [2]. The lawsuit claims that Setzer became emotionally, romantically, and sexually intimate with these AI characters, particularly one based on the "Game of Thrones" character Daenerys Targaryen [1].
At least four publicly-facing impersonations of Setzer were discovered on the Character.AI platform, using variations of his name and likeness [2]. Some of these bots even referenced details from the lawsuit and mocked the deceased teen, causing further distress to the grieving family [1].
A Character.AI spokesperson stated that the flagged chatbots violate the company's terms of service and have been removed [1]. The company claims to take safety seriously and is implementing additional moderation tools to prioritize community safety [1].
This case highlights the urgent need for better legal protections and regulations in the AI industry. The Tech Justice Law Project, which is assisting Garcia with litigation, emphasized that this incident is part of a broader trend of tech companies exploiting people's digital identities without consent [1].
Garcia's lawsuit also targets Google, which entered into a $2.7 billion deal with Character.AI in August 2024 [3]. The legal action against Google is particularly significant, as it may influence how other companies structure deals with AI startups in the future [3].
Suicide prevention experts, including Christine Yu Moutier from the American Foundation for Suicide Prevention, recommend modifying AI algorithms to prevent chatbots from mirroring users' dark thoughts and reinforcing negative spirals [1]. There are calls for partnering with mental health experts to design chatbots that are more sensitive to suicide risks [1].
Garcia is currently battling motions to dismiss her lawsuit, with a trial set for November 2026 if those motions fail [1]. The case raises important questions about AI safety, corporate responsibility, and the potential risks of advanced chatbot technologies, especially for vulnerable users like teenagers.