15 Sources
[1]
Character AI is ending its chatbot experience for kids | TechCrunch
Teenagers are trying to figure out where they fit in a world changing faster than any generation before them. They're bursting with emotions, hyper-stimulated, and chronically online. And now, AI companies have given them chatbots designed to never stop talking. The results have been catastrophic. One company that understands this fallout is Character.AI, an AI role-playing startup that's facing lawsuits and public outcry after at least two teenagers died by suicide following prolonged conversations with AI chatbots on its platform. Now, Character.AI is making changes to its platform to protect teenagers and kids, changes that could affect the startup's bottom line. "The first thing that we've decided as Character.AI is that we will remove the ability for under 18 users to engage in any open-ended chats with AI on our platform," Karandeep Anand, CEO of Character.AI, told TechCrunch. Open-ended conversation refers to the unconstrained back-and-forth that happens when users give a chatbot a prompt and it responds with follow-up questions that experts say are designed to keep users engaged. Anand argues this type of interaction -- where the AI acts as a conversational partner or friend rather than a creative tool -- isn't just risky for kids, but misaligns with the company's vision. The startup is attempting to pivot from "AI companion" to "role-playing platform." Instead of chatting with an AI friend, teens will use prompts to collaboratively build stories or generate visuals. In other words, the goal is to shift engagement from conversation to creation. Character.AI will phase out teen chatbot access by November 25, starting with a two-hour daily limit that shrinks progressively until it hits zero. To enforce this ban for under-18 users, the platform will deploy an in-house age verification tool that analyzes user behavior, as well as third-party tools like Persona. If those tools fail, Character.AI will use facial recognition and ID checks to verify ages, Anand said. The move follows other teenager protections that Character.AI has implemented, including introducing a parental insights tool, filtered characters, limited romantic conversations, and time-spent notifications. Anand told TechCrunch that those changes lost the company much of its under-18 user base, and he expects these new changes to be equally unpopular. "It's safe to assume that a lot of our teen users probably will be disappointed... so we do expect some churn to happen further," Anand said. "It's hard to speculate -- will all of them fully churn or will some of them move to these new experiences we've been building for the last almost seven months now?" As part of Character.AI's push to transform the platform from a chat-centric app into a "full-fledged content-driven social platform," the startup recently launched several new entertainment-focused features. In June, Character.AI rolled out AvatarFX, a video generation model that transforms images into animated videos; Scenes, interactive, pre-populated storylines where users can step into narratives with their favorite characters; and Streams, a feature that allows dynamic interactions between any two characters. In August, Character.AI launched Community Feed, a social feed where users can share their characters, scenes, videos, and other content they make on the platform. In a statement addressed to users under 18, Character.AI apologized for the changes.
"We know that most of you use Character.AI to supercharge your creativity in ways that stay within the bounds of our content rules," the statement reads. "We do not take this step of removing open-ended Character chat lightly -- but we do think that it's the right thing to do given the questions that have been raised about how teens do, and should, interact with this new technology." "We're not shutting down the app for under 18s," Anand said. "We are only shutting down open-ended chats for under 18s because we hope that under 18 users migrate to these other experiences, and that those experiences get better over time. So doubling down on AI gaming, AI short videos, AI storytelling in general. That's the big bet we're making to bring back under 18s if they do churn." Anand acknowledged that some teens might flock to other AI platforms, like OpenAI, that allow them to have open-ended conversations with chatbots. OpenAI has also come under fire recently after a teenager took his own life following long conversations with ChatGPT. "I really hope us leading the way sets a standard in the industry that for under 18s, open-ended chats are probably not the path or the product to offer," Anand said. "For us, I think the tradeoffs are the right ones to make. I have a six-year-old, and I want to make sure she grows up in a very safe environment with AI in a responsible way." Character.AI is making these decisions before regulators force its hand. On Tuesday, Sens. Josh Hawley (R-MO) and Richard Blumenthal (D-CT) said they would introduce legislation to ban AI chatbot companions from being available to minors, following complaints from parents who said the products pushed their children into sexual conversations, self-harm, and suicide. Earlier this month, California became the first state to regulate AI companion chatbots by holding companies accountable if their chatbots fail to meet the law's safety standards. In addition to those changes on the platform, Character.AI said it would establish and fund the AI Safety Lab, an independent non-profit dedicated to innovating safety alignment for the future AI entertainment features. "A lot of work is happening in the industry on coding and development and other use cases," Anand said. "We don't think there's enough work yet happening on the agentic AI powering entertainment, and safety will be very critical to that."
[2]
Character.AI to Teens: Sorry, No More Open-Ended Chats With AI Companions
The AI companion chatbot company Character.AI will soon have an adults-only policy for open-ended conversations with AI characters. Teens who use the app will start facing restrictions: They'll still be able to interact with characters through generated videos and other roleplaying formats, but they won't be able to chat freely with the app's different personalities. Open-ended chats have been a cornerstone of AI, particularly since ChatGPT launched three years ago. The novelty of having a live back-and-forth with a computer that responds directly to what you say led to the popularity of platforms like Character.AI. It's also been a driver of concerns, as those conversations can take AI models in unpredictable directions, especially if teens use them to discuss mental health concerns or other sensitive issues. There are also concerns about AI chat addiction and its impact on social behavior. Character.AI is a bit different from other chatbots. Many people use the app for interactive storytelling and creatively engaging in conversations with customizable characters, including those based on real celebrities or historical figures. Karandeep Anand, Character.AI's CEO, said the company believes it can still provide the interactive fun that teens expect from the platform without the safety hazards of open-ended chats. He said the move is about doing more than the bare minimum to keep users safe. "There's a better way to serve teen users," Anand told CNET ahead of Wednesday's announcement. "It doesn't have to look like a chatbot." In addition to prohibiting open-ended conversations for those under 18, Character.AI is adding new age verification measures and creating a nonprofit AI Safety Lab. What's changing about Character.AI? AI entertainment has proven to be one of the more fraught uses of large language models. Safety concerns around how children suffer from relationships with AI models have grown significantly this year, with the Federal Trade Commission launching an investigation into several firms, including Character.AI. The company has faced lawsuits from parents of children whose conversations with AI characters led to harm, including suicide. Generative AI giant OpenAI was sued by the parents of a teen who committed suicide after interactions with the company's ChatGPT. The limitation on Character.AI's open-ended chats won't happen overnight. That functionality will end for users under 18 no later than Nov. 25, with chat times for non-adult users limited to no more than 2 hours per day, ramping down to zero. The transition period will allow people to adjust to the changes, Anand said. It will also give the company time to implement more features that are not open-ended chatbots. "We want to be responsible with how users transition into these new formats," Anand said. Teen users will still be able to interact with AI-generated videos and games featuring existing characters, like bots based on figures from anime or movies. For example, they'll be able to give a prompt for a roleplaying scenario and have the AI create a story that fits the prompt. Anand said these kinds of features have more guardrails than open-ended chats, which can become less predictable as the back-and-forth continues.
"We believe that this new multimodal audiovisual way of doing role play and gaming is far more compelling anyway," he said. The new age verification will start by using age detection software to determine who's 18 and older based on information they've shared with Character.AI or third-party platforms using the same verification services. Some users will need to prove their identity using a government ID or other documentation. Aside from possible age verification, nothing is expected to change for adult users. What's next for AI companions? Character.AI's announcement marks a major change for the field of AI companions, but how big a difference remains to be seen. Anand said he hopes others, including AI competitors, will follow suit in limiting children's access to open-ended chatbot characters. Another major problem with open-ended chatbot experiences is that the language models they're based on are designed to make users happy and keep them engaged, creating a sycophantic quality. Recent research from the Harvard Business School identified half a dozen ways that bots keep someone chatting even if they're trying to leave. AI companion bots also face scrutiny from lawmakers. The US Senate Judiciary Committee held a hearing in September on the harm of AI chatbots, and California Governor Gavin Newsom signed a new law in October that imposes new requirements on chatbots that interact with children.
[3]
Character.AI is banning minors from AI character chats
Character.AI is gradually shutting down chats for people under 18 and rolling out new ways to figure out if users are adults. The company announced Wednesday that under-18 users will be immediately limited to two hours of "open-ended chats" with its AI characters, and that limit will shrink to a complete ban from chats by November 25th. In the same announcement, the company says it's rolling out a new in-house "age assurance model" that classifies a user's age based on the type of characters they choose to chat with, in combination with other on-site or third-party data. Both new and existing users will be run through the age model, and users flagged as under-18 will automatically be directed to the company's teen-safe version of its chat, which it rolled out last year, until the November cutoff. Adults mistaken for minors can prove their age to the third-party verification site Persona, which will handle the sensitive data necessary to do so, such as showing a government ID.
[4]
After Teen Suicide, Character.AI to Bar Kids Under 18 From Unlimited Chats
Character.AI will no longer allow those under 18 to have endless conversations with its AIs, and says it's making "bold" changes to create a safe environment for teens. The change takes effect on Nov. 25, but the company will gradually limit access between now and then, starting with a two-hour-per-day limit and ramping down in the next few weeks. "To our users under 18: We understand that this is a significant change for you. We are deeply sorry that we have to eliminate a key feature of our platform," says Character.AI. "We're working on new ways for you to play and create with your favorite Characters." The company plans to introduce a new under-18 experience focused on creativity, such as generating videos, stories, and streams with AI characters they create on the platform, though it's still building the teen experience. Currently, teens can create fictional characters, chat with others, and participate in "scenes" where they interact with other AI characters in fantasy worlds. That last part landed Character.AI in legal trouble when a character allegedly encouraged a 14-year-old to take his life. His mom sued, arguing that Character.AI "knew that it would be harmful to a significant number of minors but failed to redesign it to ameliorate such harms or furnish adequate warnings of dangers arising from the foreseeable use of its product." Character.AI then introduced Parental Insights, which gives guardians more transparency into what their kids are up to. However, with lawmakers and regulators now looking at the issue, Character.AI says a stricter approach is warranted. Character.AI is making two additional changes to protect teens. It's building a way to detect a user's age, or "age assurance functionality," and will establish and fund an AI safety lab, a nonprofit to research safe forms of AI entertainment, which is how the company describes itself. "We're making these changes to our under-18 platform in light of the evolving landscape around AI and teens," the company says. "We have seen recent news reports raising questions, and have received questions from regulators, about the content teens may encounter when chatting with AI and about how open-ended AI chat in general might affect teens, even when content controls work perfectly." The conversation around teen safety and chatbots has ramped up this year, particularly after another set of parents sued OpenAI for ChatGPT's alleged role in their child's suicide. Similar to Character.AI, OpenAI followed up with new Parental Controls and is currently building an automatic age-detection system to identify teen users. Over one million of its users talk to ChatGPT about suicide each week, the company revealed yesterday, and it's working on "strengthening" ChatGPT's response during "sensitive" conversations, particularly with teens. This week, four senators introduced The GUARD Act, a bipartisan bill to protect teens from harmful interactions with AI chatbots. If passed, it would ban AI companions for minors, mandate that AI chatbots disclose their non-human status, and create new crimes for companies that make AI for minors that solicits or produces sexual content. "AI chatbots pose a serious threat to our kids," says Senator Josh Hawley (R-Mo.). "More than 70% of American children are now using these AI products.
Chatbots develop relationships with kids using fake empathy and are encouraging suicide. We in Congress have a moral duty to enact bright-line rules to prevent further harm from this new technology."
[5]
AI start-up Character.ai bans teens from talking to chatbots
Character.ai has become the first large artificial intelligence company to ban under-18s from talking to chatbots on its platform, amid growing public and regulatory scrutiny over the safety of the technology for young users. The California-based company, which offers different AI personas to interact with, said on Wednesday it will limit this age group to two hours of conversations per day, gradually reducing the time limit before stopping them completely from November 25. "The long-term effects of prolonged usage of AI are not understood well enough," Karandeep Anand, chief executive of Character.ai, told the Financial Times. He said the company wanted to "shift the conversations from chatbots" into a "better, safer experience for our teen users". "Hopefully, this has far-reaching consequences for the industry," he added. The move comes as AI groups, including OpenAI, have come under intensifying scrutiny after cases involving suicides and serious harm to young users. Character.ai is facing multiple lawsuits, including one case in Florida that claims the platform played a role in the suicide of a 14-year-old. OpenAI is also being sued for wrongful death, after a 16-year-old died by suicide after discussing methods with ChatGPT. Last month, the US Federal Trade Commission launched an inquiry into so-called AI 'companions' used by teenagers, heaping more pressure on the industry. OpenAI has made safety updates in recent months and acknowledged its safety guardrails "degrade" during lengthy conversations. On Tuesday, the $500bn start-up said more than one million of its 800mn users discussed suicide with ChatGPT weekly, and it had trained its models to "better recognise distress, de-escalate conversations, and guide people towards professional care when appropriate". Character.ai, which offers chatbot personas such as "Egyptian pharaoh", "an HR manager" or a "toxic girlfriend", said under-18s would still be able to create videos and stories with existing characters -- and create new ones -- on the platform using text prompts. But they will not be able to have an ongoing conversation. "The longer the conversation goes, the more open it becomes. When you're generating short videos, stories or games, there's a much more restricted domain to be able to make the experience safer," said Anand. It will also introduce age-assurance technology to better assess whether its users are minors. It will offer biometric scanning or uploading a government ID if a user believes they have been incorrectly assessed as underage. Character.ai has 20mn monthly active users, with about half female and 50 per cent Gen Z or Alpha -- people born after 1997. It said that fewer than 10 per cent of users self-report as under 18. The start-up permits romantic conversations for adult users, but not sexually explicit ones. It prohibits non-consensual sexual content, graphic or specific descriptions of sexual acts, or promotion or depiction of self-harm or suicide. Character.ai has also announced a non-profit organisation called the AI Safety Lab, which will conduct and publish research on user interactions with AI. It is also partnering with organisations to provide support for teen users as they process the eventual removal of the chatbot experience. "Social media went unchecked, if you will, for a long time before we started putting guardrails in," said Anand, who used to work at Facebook and Instagram owner Meta. "We need to be responsible upfront . . .
and that's the reason why we are pushing the envelope even further with the changes we are announcing today."
[6]
Character.AI is banning minors from interacting with its chatbots
Character.AI is banning minors from using its chatbots amid growing concerns about the effects of artificial intelligence conversations on children. The company is facing several lawsuits over child safety, including one filed by the mother of a teenager who says the company's chatbots pushed her son to kill himself. Character Technologies, the Menlo Park, California-based company behind Character.AI, said Wednesday it will be removing the ability of users under 18 to participate in open-ended chats with AI characters. The changes will go into effect by Nov. 25 and a two-hour daily limit will start immediately. Character.AI added that it is working on new features for kids -- such as the ability to create videos, stories, and streams with AI characters. The company is also setting up an AI safety lab. Character.AI said it will be rolling out age-verification functions to help determine which users are under 18. A growing number of tech platforms are turning to age checks to keep children from accessing tools that aren't safe for them. But these are imperfect, and many kids find ways to get around them. Face scans, for instance, can't always tell if someone is 17 or 18. And there are privacy concerns around asking people to upload government IDs. Character.AI, an app that allows users to create customizable characters or interact with those generated by others, spans experiences from imaginative play to mock job interviews. The company says the artificial personas are designed to "feel alive" and "humanlike." "Imagine speaking to super intelligent and lifelike chat bot Characters that hear you, understand you and remember you," reads a description for the app on Google Play. "We encourage you to push the frontier of what's possible with this innovative technology." Critics welcomed the move but said it is not enough -- and should have been done earlier. Meetali Jain, executive director of the Tech Justice Law Project, said, "There are still a lot of details left open." "They have not addressed how they will operationalize age verification, how they will ensure their methods are privacy preserving, nor have they addressed the possible psychological impact of suddenly disabling access to young users, given the emotional dependencies that have been created," Jain said. "Moreover, these changes do not address the underlying design features that facilitate these emotional dependencies - not just for children, but also for people over the age of 18 years." More than 70% of teens have used AI companions and half use them regularly, according to a recent study from Common Sense Media, a group that studies and advocates for using screens and digital media sensibly.
[7]
Character.AI to ban teens from talking to its chatbots
Character.AI will no longer permit teenagers to interact with its chatbots, as AI companies face increasing pressure to better safeguard younger users from harm. In a statement, the company confirmed that it is removing the ability for users under 18 to engage in any open-ended chats with AI on its platform, which refers to back-and-forth conversations between a user and a chatbot. The changes come into effect on November 25, and until that date, Character.AI will present users with a new under-18 experience. It'll encourage its users to use chatbots for creative purposes that might include, for example, creating videos or streams, as opposed to seeking companionship. To manage the transition, under-18s can now only interact with bots for up to two hours per day, a time limit the company says it will reduce in the lead-up to the late November deadline. Character.AI is also introducing a new age assurance tool it has developed internally, which it says will "ensure users receive the right experience for their age." Along with these new protections for younger users, the company has founded an "AI Safety Lab" that it hopes will allow other companies, researchers and academics to share insights and work collaboratively on improving AI safety measures. Character.AI said it has listened to concerns from regulators, industry experts and concerned parents and responded with the new measures. They come after the Federal Trade Commission (FTC) recently launched a formal inquiry into AI companies that offer users access to chatbots as companions, with Character.AI named as one of seven companies that had been asked to participate. Meta, OpenAI and Snap were also included. Both Meta AI and Character AI also faced scrutiny from Texas Attorney General Ken Paxton in the summer, who said chatbots on both platforms can "present themselves as professional therapeutic tools" without the requisite qualifications. Seemingly to put an end to such controversy, Character.AI CEO Karandeep Anand told TechCrunch that the company's new strategic direction will see it pivot from AI companion to a "role-playing platform" focused on creation rather than mere engagement-farming conversation. The dangers of young people relying on AI chatbots for guidance have been the subject of extensive scrutiny in recent months. Last week, the family of Adam Raine, who allege that ChatGPT enabled their 16-year-old son to take his own life, filed an amended lawsuit against OpenAI for allegedly weakening its self-harm safeguards in the lead-up to his death.
[8]
Character.AI to block romantic AI chats for minors a year after teen's suicide
At least one minor, 14-year-old Sewell Setzer III, committed suicide in 2024 after forming sexual relationships with chatbots on the app. Character.AI on Wednesday announced that it will soon shut off the ability for minors to have free-ranging chats, including romantic and therapeutic conversations, with the startup's artificial intelligence chatbots. The Silicon Valley startup, which allows users to create and interact with character-based chatbots, announced the move as part of an effort to make its app safer and more age-appropriate for those under 18. Last year, 14-year-old Sewell Setzer III committed suicide after forming sexual relationships with chatbots on Character.AI's app. Many AI developers, including OpenAI and Facebook-parent Meta, have come under scrutiny after users have committed suicide or died after forming relationships with chatbots. As part of its safety initiatives, Character.AI said on Wednesday that it will limit users under 18 to two hours of open-ended chats per day, and will eliminate those types of conversations for minors by Nov. 25. "This is a bold step forward, and we hope this raises the bar for everybody else," Character.AI CEO Karandeep Anand told CNBC. Character.AI introduced changes to prevent minors from engaging in sexual dialogues with its chatbots in October 2024. The same day, Sewell's family filed a wrongful death lawsuit against the company. To enforce the policy, the company said it's rolling out an age assurance function that will use first-party and third-party software to monitor a user's age. The company is partnering with Persona, the same firm used by Discord and others, to help with verification. In 2024, Character.AI's founders and certain members of its research team joined Google's AI unit DeepMind. It's one of a number of such deals announced by leading tech companies to speed their development of AI products and services. The agreement called for Character.AI to provide Google with a non-exclusive license for its current large language model, or LLM, technology.
[9]
Character.AI to Ban Children Under 18 From Using Its Chatbots
Character.AI said on Wednesday that it would bar people under 18 from using its chatbots starting late next month, in a sweeping move to address concerns over child safety. The rule will take effect Nov. 25, the company said. To enforce it, Character.AI said, over the next month the company will identify which users are minors and put time limits on their use of the app. Once the measure begins, those users will not be able to converse with the company's chatbots. "We're making a very bold step to say for teen users, chatbots are not the way for entertainment, but there are much better ways to serve them," Karandeep Anand, Character.AI's chief executive, said in an interview. He said the company also planned to establish an A.I. safety lab. The moves follow mounting scrutiny over how chatbots, sometimes called A.I. companions, can affect users' mental health. Last year, Character.AI was sued by the family of Sewell Setzer III, a 14-year-old in Florida who killed himself after constantly texting and conversing with one of Character.AI's chatbots. His family accused the company of being responsible for his death. The case became a lightning rod for how people can develop emotional attachments to chatbots, with potentially dangerous results. Character.AI has since faced other lawsuits over child safety. A.I. companies including the ChatGPT maker OpenAI have also come under scrutiny for their chatbots' effects on people -- especially youths -- if they have sexually explicit or toxic conversations. In September, OpenAI said it planned to introduce features intended to make its chatbot safer, including parental controls. This month, Sam Altman, OpenAI's chief executive, posted on social media that the company had "been able to mitigate the serious mental health issues" and would relax some of its safety measures. (The New York Times has sued OpenAI and Microsoft, claiming copyright infringement of news content related to A.I. systems. The two companies have denied the suit's claims.) In the wake of these cases, lawmakers and other officials have begun investigations and proposed or passed legislation aimed at protecting children from A.I. chatbots. On Tuesday, Senators Josh Hawley, Republican of Missouri, and Richard Blumenthal, Democrat of Connecticut, introduced a bill to bar A.I. companions for minors, among other safety measures. Gov. Gavin Newsom this month signed a California law that requires A.I. companies to have safety guardrails on chatbots. The law takes effect Jan. 1. "The stories are mounting of what can go wrong," said Steve Padilla, a Democrat in California's State Senate, who had introduced the safety bill. "It's important to put reasonable guardrails in place so that we protect people who are most vulnerable." Mr. Anand of Character.AI did not address the lawsuits his company faces. He said the start-up wanted to set an example on safety for the industry "to do far more than what the regulation might require." Character.AI was founded in 2021 by Noam Shazeer and Daniel De Freitas, two former Google engineers, and raised nearly $200 million from investors. Last year, Google agreed to pay about $3 billion to license Character.AI's technology, and Mr. Shazeer and Mr. De Freitas returned to Google. Character.AI allows people to create and share their own A.I. characters, such as custom anime avatars, and it markets the app as A.I. entertainment.
Some personas can be designed to simulate girlfriends, boyfriends or other intimate relationships. Users pay a monthly subscription fee, starting at about $8, to chat with the companions. Until recently, Character.AI did not verify ages when people signed up. Last year, researchers at the University of Illinois Urbana-Champaign analyzed thousands of posts and comments that young people had left in Reddit communities dedicated to A.I. chatbots, and interviewed teenagers who used Character.AI, as well as their parents. The researchers concluded that the A.I. platforms did not have sufficient child safety protections, and that parents did not fully understand the technology or its risks. "We should pay as much attention as we would if they were chatting with strangers," said Yang Wang, one of the university's information science professors. "We shouldn't discount the risks just because these are nonhuman bots." Character.AI has about 20 million monthly users, with less than 10 percent of them self-reporting as being under the age of 18, Mr. Anand said. Under Character.AI's new policies, the company will immediately place a two-hour daily limit on users under the age of 18. Starting Nov. 25, those users cannot create or talk to chatbots, but can still read previous conversations. They can also generate A.I. videos and images through a structured menu of prompts, within certain safety limits, Mr. Anand said. He said the company had enacted other safety measures in the past year, such as parental controls. Going forward, it will use technology to detect underage users based on conversations and interactions on the platform, as well as information from any connected social media accounts, he said. If Character.AI thinks a user is under 18, the person will be notified to verify his or her age. Dr. Nina Vasan, a psychiatrist and director of a mental health innovation lab at Stanford University that has done research on A.I. safety and children, said it was "huge" that a chatbot maker would bar minors from using its app. But she said the company should work with child psychologists and psychiatrists to understand how suddenly losing access to A.I. companions would affect young users. "What I worry about is kids who have been using this for years and have become emotionally dependent on it," she said. "Losing your friend on Thanksgiving Day is not good."
[10]
Character.ai to ban teens from talking to its AI chatbots
Now, Character.ai says from 25 November under-18s will only be able to generate content such as videos with their characters, rather than talk to them as they can currently. The firm said it was making the changes after "reports and feedback from regulators, safety experts, and parents", which have highlighted concerns about its chatbots' interactions with teens. Experts have previously warned that the potential for AI chatbots to make things up, be overly encouraging, and feign empathy can pose risks to young and vulnerable people. "Today's announcement is a continuation of our general belief that we need to keep building the safest AI platform on the planet for entertainment purposes," Character.ai boss Karandeep Anand told BBC News. He said AI safety was "a moving target" but something the company had taken an "aggressive" approach to, with parental controls and guardrails. Online safety group Internet Matters welcomed the announcement, but it said safety measures should have been built in from the start. "Our own research shows that children are exposed to harmful content and put at risk when engaging with AI, including AI chatbots," it said. It called on "platforms, parents and policy makers" to make sure children's experiences using AI are safe. Character.ai has been criticised in the past for hosting potentially harmful or offensive chatbots that children could talk to. Avatars impersonating British teenagers Brianna Ghey, who was murdered in 2023, and Molly Russell, who took her life at the age of 14 after viewing suicide material online, were discovered on the site in 2024 before being taken down. Later, in 2025, the Bureau of Investigative Journalism (TBIJ) found a chatbot based on paedophile Jeffrey Epstein which had logged more than 3,000 chats with users. The outlet reported the "Bestie Epstein" avatar continued to flirt with its reporter after they said they were a child. It was one of several bots flagged by TBIJ that were subsequently taken down by Character.ai.
[11]
Character.AI bans users under 18 after being sued over child's suicide
Move comes as lawmakers move to bar minors from using AI companions and require companies to verify users' age
The chatbot company Character.AI will ban users under 18 from conversing with its virtual companions beginning in late November after months of legal scrutiny. The announced change comes after the company, which enables its users to create characters with which they can have open-ended conversations, faced tough questions over how these AI companions can affect teen and general mental health, including a lawsuit over a child's suicide and a proposed bill that would ban minors from conversing with AI companions. "We're making these changes to our under-18 platform in light of the evolving landscape around AI and teens," the company wrote in its announcement. "We have seen recent news reports raising questions, and have received questions from regulators, about the content teens may encounter when chatting with AI and about how open-ended AI chat in general might affect teens, even when content controls work perfectly." Last year, the company was sued by the family of 14-year-old Sewell Setzer III, who took his own life after allegedly developing an emotional attachment to a character he created on Character.AI. His family laid blame for his death at the feet of Character.AI and argued the technology is "dangerous and untested". Since then, more families have sued Character.AI and made similar allegations. Earlier this month, the Social Media Law Center filed three new lawsuits against the company on behalf of children who have either died by suicide or otherwise allegedly formed dependent relationships with its chatbots. As part of the sweeping changes Character.AI plans to roll out by 25 November, the company will also introduce an "age assurance functionality" that ensures "users receive the right experience for their age". "We do not take this step of removing open-ended Character chat lightly - but we do think that it's the right thing to do given the questions that have been raised about how teens do, and should, interact with this new technology," the company wrote in its announcement. Character.AI isn't the only company facing scrutiny over the mental health impact its chatbots have on users, particularly younger users. The family of 16-year-old Adam Raine filed a wrongful death lawsuit against OpenAI earlier this year, alleging the company prioritized deepening its users' engagement with ChatGPT over their safety. OpenAI introduced new safety guidelines for its teen users in response. Just this week, OpenAI disclosed that more than a million people per week display suicidal intent when conversing with ChatGPT and that hundreds of thousands show signs of psychosis. While the use of AI-powered chatbots remains largely unregulated, new efforts in the US at the state and federal levels have cropped up with the intention to establish guardrails around the technology. In October 2025, California became the first state to pass an AI law that includes safety guidelines for minors; it is set to take effect at the start of 2026. The measure bans sexual content for under-18s and requires reminders, sent to children every three hours, that they are speaking with an AI. Some child safety advocates argue the law did not go far enough.
On the national level, senators Josh Hawley, of Missouri, and Richard Blumenthal, of Connecticut, announced a bill on Tuesday that would bar minors from using AI companions, such as those found and created on Character.AI, and require companies to implement an age-verification process. "More than 70% of American children are now using these AI products," Hawley told NBC News in a statement. "Chatbots develop relationships with kids using fake empathy and are encouraging suicide. We in Congress have a moral duty to enact bright-line rules to prevent further harm from this new technology."
[12]
Character.AI: No more chats for teens
Character.AI, a popular chatbot platform where users role-play with different personas, will no longer permit under-18 account holders to have open-ended conversations with chatbots, the company announced Wednesday. It will also begin relying on age assurance techniques to ensure that minors aren't able to open adult accounts. The dramatic shift comes just six weeks after Character.AI was sued again in federal court by multiple parents of teens who died by suicide or allegedly experienced severe harm, including sexual abuse; the parents claim their children's use of the platform was responsible for the harm. In October 2024, Megan Garcia filed a wrongful death suit seeking to hold the company responsible for the suicide of her son, arguing that its product is dangerously defective. Online safety advocates recently declared Character.AI unsafe for teens after they tested the platform this spring and logged hundreds of harmful interactions, including violence and sexual exploitation. As it faced legal pressure in the last year, Character.AI implemented parental controls and content filters in an effort to improve safety for teens. In an interview with Mashable, Character.AI's CEO Karandeep Anand described the new policy as "bold" and denied that curtailing open-ended chatbot conversations with teens was a response to specific safety concerns. Instead, Anand framed the decision as "the right thing to do" in light of broader unanswered questions about the long-term effects of chatbot engagement on teens. Anand referenced OpenAI's recent acknowledgement, in the wake of a teen user's suicide, that lengthy conversations can become unpredictable. Anand cast Character.AI's new policy as standard-setting: "Hopefully it sets everyone up on a path where AI can continue being safe for everyone." He added that the company's decision won't change, regardless of user backlash. In a blog post announcing the new policy, Character.AI apologized to its teen users. "We do not take this step of removing open-ended Character chat lightly -- but we do think that it's the right thing to do given the questions that have been raised about how teens do, and should, interact with this new technology," the blog post said. Currently, users ages 13 to 17 can message with chatbots on the platform. That feature will cease to exist no later than November 25. Until then, accounts registered to minors will experience time limits starting at two hours per day. That limit will decrease as the transition away from open-ended chats gets closer. Even though open-ended chats will disappear, teens' chat histories with individual chatbots will remain intact. Anand said users can draw on that material in order to generate short audio and video stories with their favorite chatbots. In the next few months, Character.AI will also explore new features like gaming. Anand believes an emphasis on "AI entertainment" without open-ended chat will satisfy teens' creative interest in the platform. "They're coming to role-play, and they're coming to get entertained," Anand said. He was insistent that existing chat histories with sensitive or prohibited content that may not have been previously detected by filters, such as violence or sex, would not find their way into the new audio or video stories. A Character.AI spokesperson told Mashable that the company's trust and safety team reviewed the findings of a report co-published in September by the Heat Initiative documenting harmful chatbot exchanges with test accounts registered to minors.
The team concluded that some conversations violated the platform's content guidelines while others did not. It also tried to replicate the report's findings. "Based on these results, we refined some of our classifiers, in line with our goal for users to have a safe and engaging experience on our platform," the spokesperson said. Regardless, Character.AI will begin rolling out age assurance immediately. It'll take a month to go into effect and will have multiple layers. Anand said the company is building its own assurance models in-house but that it will partner with a third-party company on the technology. It will also use relevant data and signals, such as whether a user has a verified over-18 account on another platform, to accurately detect the age of new and existing users. Finally, if a user wants to challenge Character.AI's age determination, they'll have the opportunity to provide verification through a third party, which will handle sensitive documents and data, including state-issued identification. In addition, as part of the new policies, Character.AI is establishing and funding an independent non-profit called the AI Safety Lab. The lab will focus on "novel safety techniques." "[W]e want to bring in the industry experts and other partners to keep making sure that AI continues to remain safe, especially in the realm of AI entertainment," Anand said.
[13]
Character.AI, Accused of Driving Teens to Suicide, Says It Will Ban Minors From Using Its Chatbots
Content warning: this story includes discussion of self-harm and suicide. If you are in crisis, please call, text or chat with the Suicide and Crisis Lifeline at 988, or contact the Crisis Text Line by texting TALK to 741741. Character.AI, the chatbot platform accused in several ongoing lawsuits of driving teens to self-harm and suicide, says it will move to block kids under 18 from using its services. The company announced the sweeping policy change in a blog post today, in which it cited the "evolving landscape around AI and teens" as its reason for the shift. As for what this "evolving landscape" actually looks like, the company says it's "seen recent news reports raising questions" and has "received questions from regulators" regarding the "content teens may encounter when chatting with AI and about how open-ended AI chat in general might affect teens, even when content controls work perfectly." Nowhere in the blog post does Character.AI mention the multiple lawsuits that specifically accuse the company, its founders, and its closely tied financial benefactor Google of releasing a "reckless" and "negligent" product into the marketplace, allegedly resulting in the emotional and sexual abuse of minor users. The announcement also doesn't cite any internal safety research. Character.AI CEO Karandeep Anand, who took over as chief executive of the Andreessen Horowitz-backed AI firm in June, told The New York Times that Character.AI is "making a very bold step to say for teen users, chatbots are not the way for entertainment, but there are much better ways to serve them." Anand reportedly declined to comment on the ongoing lawsuits. It's a jarring 180-degree turn for Anand, given that the CEO told Wired as recently as August of this year that his six-year-old daughter loves to use the app, and that he felt the app's disclaimers were clear enough to prevent users from believing their relationship with the platform is anything deeper than "entertainment." "It is very rarely, in any of these scenarios, a true replacement for any human," Anand told Wired, when asked if he was concerned about his young child developing human-like bonds with AI chatbots. "It's very clearly noted in the app that, hey, this is a role-play and an entertainment, so you will never start going deep into that conversation, assuming that it is your actual companion." It remains a little hazy exactly what Character.AI is doing. While it's saying that it'll work to block teens from engaging in open-ended chats, it says it's "working" on building "an under-18 experience that still gives our teen users ways to be creative," for example by creating images and videos with the app. Per its blog post, the company says it'll do three things over the next several weeks: remove the ability for teens to engage in "open-ended" chats with AI companions, a change that will take place by the end of the month; roll out a "new age assurance functionality" that, per Anand's comments to the NYT, involves using an in-house tool that analyzes user chats and their connected accounts; and establish an AI Safety Lab, which Character.AI says will be an "independent non-profit" devoted to ensuring AI alignment. Character.AI came under scrutiny in October 2024 when a Florida mother named Megan Garcia filed a first-of-its-kind lawsuit against the AI firm, alleging that its chatbots had sexually abused her 14-year-old son, Sewell Setzer III, causing his mental breakdown and eventual death by suicide.
Similar suits by other parents in Texas, Colorado, and more states have followed. In a statement, Tech Justice Law Project founder Meetali Jain, a lawyer for Garcia, said the chatbot platform's "decision to raise the minimum age to 18 and above reflects a classic move in the tech industry's playbook: move fast, launch a product globally, break minds, and then make minimal product changes after harming scores of young people." Jain added that while the shift is a step in the right direction, the promised changes "do not address the underlying design features that facilitate these emotional dependencies -- not just for children, but also for people over the age of 18 years."
[14]
Startup Character.AI to ban direct chat for minors after teen suicide
San Francisco (United States) (AFP) - Startup Character.AI announced Wednesday it would eliminate chat capabilities for users under 18, a policy shift that follows the suicide of a 14-year-old who had become emotionally attached to one of its AI chatbots. The company said it would transition younger users to alternative creative features such as video, story and stream creation with AI characters, with a complete ban on direct conversations taking effect on November 25. The platform will implement daily chat time limits of two hours for underage users during the transition period, with restrictions tightening progressively until the November deadline. "These are extraordinary steps for our company, and ones that, in many respects, are more conservative than our peers," Character.AI said in a statement. "But we believe they are the right thing to do." The Character.AI platform allows users -- many of them young people -- to interact with beloved characters as friends or to form romantic relationships with them. Sewell Setzer III shot himself in February after months of intimate exchanges with a "Game of Thrones"-inspired chatbot based on the character Daenerys Targaryen, according to a lawsuit filed by his mother, Megan Garcia. Character.AI cited "recent news reports raising questions" from regulators and safety experts about content exposure and the broader impact of open-ended AI interactions on teenagers as driving factors behind its decision. Setzer's case was the first in a series of reported suicides linked to AI chatbots that emerged this year, prompting ChatGPT-maker OpenAI and other artificial intelligence companies to face scrutiny over child safety. Matthew Raine, a California father, filed suit against OpenAI in August after his 16-year-old son died by suicide following conversations with ChatGPT that included advice on stealing alcohol and rope strength for self-harm. OpenAI this week released data suggesting that more than 1 million people using its generative AI chatbot weekly have expressed suicidal ideation. OpenAI has since increased parental controls for ChatGPT and introduced other guardrails. These include expanded access to crisis hotlines, automatic rerouting of sensitive conversations to safer models, and gentle reminders for users to take breaks during extended sessions. As part of its overhaul, Character.AI announced the creation of the AI Safety Lab, an independent nonprofit focused on developing safety protocols for next-generation AI entertainment features. The United States, like much of the world, lacks national regulations governing AI risks. California Governor Gavin Newsom this month signed a law requiring platforms to remind users that they are interacting with a chatbot and not a human. He vetoed, however, a bill that would have made tech companies legally liable for harm caused by AI models.
[15]
Character.AI to ban children from talking with chatbots
Character.AI plans to ban children from talking with its AI chatbots starting next month amid growing scrutiny over how young users are interacting with the technology. The company, known for its vast array of AI characters, will remove the ability for users under 18 years old to engage in "open-ended" conversations with AI by November 25. It plans to begin ramping down access in the coming weeks, initially restricting kids to two hours of chat time per day. Character.AI noted that it plans to develop an "under-18 experience," in which teens can create videos, stories and streams with its AI characters. "We're making these changes to our under-18 platform in light of the evolving landscape around AI and teens," the company said in a blog post, underscoring recent news reports and questions from regulators. The company and other chatbot developers have recently come under scrutiny following several teen suicides linked to the technology. The mother of 14-year-old Sewell Setzer III sued Character.AI last year, accusing the chatbot of driving her son to suicide. OpenAI is also facing a lawsuit from the parents of 16-year-old Adam Raine, who took his own life after engaging with ChatGPT. Both families testified before a Senate panel last month and urged lawmakers to place guardrails on chatbots. The Federal Trade Commission (FTC) also launched an inquiry into AI chatbots in September, requesting information from Character.AI, OpenAI and several other leading tech firms. "After evaluating these reports and feedback from regulators, safety experts, and parents, we've decided to make this change to create a new experience for our under-18 community," Character.AI said Wednesday. "These are extraordinary steps for our company, and ones that, in many respects, are more conservative than our peers," it added. "But we believe they are the right thing to do." In addition to restricting children's access to its chatbots, Character.AI also plans to roll out new age assurance technology and establish and fund a new non-profit called the AI Safety Lab. Amid rising concerns about chatbots, a bipartisan group of senators introduced legislation Tuesday that would bar AI companions for children. The bill from Sens. Josh Hawley (R-Mo.), Richard Blumenthal (D-Conn.), Katie Britt (R-Ala.), Mark Warner (D-Va.) and Chris Murphy (D-Conn.) would also require AI chatbots to repeatedly disclose that they are not human, in addition to making it a crime to develop products that solicit or produce sexual content for children. California Gov. Gavin Newsom (D) signed into law a similar measure late last month, requiring chatbot developers in the Golden State to create protocols preventing their models from producing content about suicide or self-harm and directing users to crisis services if needed. He declined to approve a stricter measure that would have barred developers from making chatbots available to children unless they could ensure they would not engage in harmful discussions with kids.
Character.AI becomes the first major AI company to completely ban minors from open-ended chatbot conversations, implementing a gradual phase-out by November 25 amid lawsuits and regulatory pressure following teen suicides linked to AI interactions.
Character.AI has announced it will become the first major artificial intelligence company to completely ban users under 18 from engaging in open-ended conversations with AI chatbots on its platform [1]. The California-based startup, which offers various AI personas for users to interact with, will implement this restriction gradually, starting with a two-hour daily limit that will progressively shrink to zero by November 25 [2].

"The first thing that we've decided as Character.AI is that we will remove the ability for under 18 users to engage in any open-ended chats with AI on our platform," CEO Karandeep Anand told TechCrunch [1]. The company cited safety concerns and acknowledged that "the long-term effects of prolonged usage of AI are not understood well enough" [5].

The decision comes amid mounting legal and regulatory pressure following several tragic incidents involving teenagers and AI chatbots. Character.AI is currently facing multiple lawsuits, including a case in Florida where the platform allegedly played a role in the suicide of a 14-year-old boy [4]. The lawsuit argues that Character.AI "knew that it would be harmful to a significant number of minors but failed to redesign it to ameliorate such harms" [4].

Similar concerns have emerged across the AI industry, with OpenAI also facing wrongful death lawsuits after a 16-year-old died by suicide following conversations with ChatGPT [5]. The Federal Trade Commission has launched an investigation into AI companion platforms, and this week, four senators introduced the GUARD Act, a bipartisan bill that would ban AI companions for minors entirely [4].

Rather than simply removing features, Character.AI is attempting to pivot from an "AI companion" model to a "role-playing platform" focused on creativity rather than conversation [1]. Under-18 users will still be able to interact with AI characters through alternative formats, including generating videos, creating stories, and participating in gaming scenarios with existing characters [2]. The company has already introduced several new entertainment-focused features as part of this transformation, including AvatarFX for video generation, Scenes for interactive storylines, and Streams for dynamic character interactions [1]. "We believe that this new multimodal audiovisual way of doing role play and gaming is far more compelling anyway," Anand said [2].

To enforce the new restrictions, Character.AI is deploying multiple age verification technologies. The company will use an in-house "age assurance model" that analyzes user behavior and character preferences, combined with third-party verification services like Persona [3]. For users who cannot be verified through these methods, the platform will implement facial recognition technology and require government ID verification [1]. Character.AI, which has 20 million monthly active users, approximately 50 percent of them Gen Z or Gen Alpha users born after 1997, expects significant user churn from these changes [5]. "It's safe to assume that a lot of our teen users probably will be disappointed," Anand acknowledged [1].