39 Sources
[1]
After teen death lawsuits, Character.AI will restrict chats for under-18 users
On Wednesday, Character.AI announced it will bar anyone under the age of 18 from open-ended chat with its AI characters starting on November 25, implementing one of the most restrictive age policies yet among AI chatbot platforms. The company faces multiple lawsuits from families who say its chatbots contributed to teenager deaths by suicide. Over the next month, Character.AI says it will ramp down chatbot use among minors by identifying them and placing a two-hour daily limit on their chatbot access. The company plans to use technology to detect underage users based on conversations and interactions on the platform, as well as information from connected social media accounts. On November 25, those users will no longer be able to create or talk to chatbots, though they can still read previous conversations. The company said it is working to build alternative features for users under the age of 18, such as the ability to create videos, stories, and streams with AI characters. Character.AI CEO Karandeep Anand told The New York Times that the company wants to set an example for the industry. "We're making a very bold step to say for teen users, chatbots are not the way for entertainment, but there are much better ways to serve them," Anand said in the interview. The company also plans to establish an AI safety lab. The platform currently has about 20 million monthly users, with less than 10 percent self-reporting as under 18, according to Anand. Users pay a monthly subscription fee starting at about $8 to chat with custom AI companions. (We first covered the service in September 2022 by interviewing a personification of the operating system Linux.) Until recently, Character.AI did not verify ages when people signed up. Lawsuits and safety concerns Character.AI was founded in 2021 by Noam Shazeer and Daniel De Freitas, two former Google engineers, and raised nearly $200 million from investors. Last year, Google agreed to pay about $3 billion to license Character.AI's technology, and Shazeer and De Freitas returned to Google. But the company now faces multiple lawsuits alleging that its technology contributed to teen deaths. Last year, the family of 14-year-old Sewell Setzer III sued Character.AI, accusing the company of being responsible for his death. Setzer died by suicide after frequently texting and conversing with one of the platform's chatbots. The company faces additional lawsuits, including one from a Colorado family whose 13-year-old daughter, Juliana Peralta, died by suicide in 2023 after using the platform. In December, Character.AI announced changes, including improved detection of violating content and revised terms of service, but those measures did not restrict underage users from accessing the platform. Other AI chatbot services, such as OpenAI's ChatGPT, have also come under scrutiny for their chatbots' effects on young users. In September, OpenAI introduced parental control features intended to give parents more visibility into how their kids use the service. The cases have drawn attention from government officials, which likely pushed Character.AI to announce the changes for under-18 chat access. Steve Padilla, a Democrat in California's State Senate who introduced the safety bill, told The New York Times that "the stories are mounting of what can go wrong. It's important to put reasonable guardrails in place so that we protect people who are most vulnerable." On Tuesday, Senators Josh Hawley and Richard Blumenthal introduced a bill to bar AI companions from use by minors. 
In addition, California Governor Gavin Newsom this month signed a law, which takes effect on January 1, requiring AI companies to have safety guardrails on chatbots.
[2]
Character AI is ending its chatbot experience for kids | TechCrunch
Teenagers are trying to figure out where they fit in a world changing faster than any generation before them. They're bursting with emotions, hyper-stimulated, and chronically online. And now, AI companies have given them chatbots designed to never stop talking. The results have been catastrophic. One company that understands this fallout is Character.AI, an AI role-playing startup that's facing lawsuits and public outcry after at least two teenagers died by suicide following prolonged conversations with AI chatbots on its platform. Now, Character.AI is making changes to its platform to protect teenagers and kids, changes that could affect the startup's bottom line. "The first thing that we've decided as Character.AI is that we will remove the ability for under 18 users to engage in any open-ended chats with AI on our platform," Karandeep Anand, CEO of Character.AI, told TechCrunch. Open-ended conversation refers to the unconstrained back-and-forth that happens when users give a chatbot a prompt and it responds with follow-up questions that experts say are designed to keep users engaged. Anand argues this type of interaction -- where the AI acts as a conversational partner or friend rather than a creative tool -- isn't just risky for kids, but misaligns with the company's vision. The startup is attempting to pivot from "AI companion" to "role-playing platform." Instead of chatting with an AI friend, teens will use prompts to collaboratively build stories or generate visuals. In other words, the goal is to shift engagement from conversation to creation. Character.AI will phase out teen chatbot access by November 25, starting with a two-hour daily limit that shrinks progressively until it hits zero. To enforce this ban for under-18 users, the platform will deploy an in-house age verification tool that analyzes user behavior, as well as third-party tools like Persona. If those tools fail, Character.AI will use facial recognition and ID checks to verify ages, Anand said. The move follows other teenager protections that Character.AI has implemented, including a parental insights tool, filtered characters, limited romantic conversations, and time-spent notifications. Anand told TechCrunch that those changes cost the company much of its under-18 user base, and he expects these new changes to be equally unpopular. "It's safe to assume that a lot of our teen users probably will be disappointed... so we do expect some churn to happen further," Anand said. "It's hard to speculate -- will all of them fully churn or will some of them move to these new experiences we've been building for the last almost seven months now?" As part of Character.AI's push to transform the platform from a chat-centric app into a "full-fledged content-driven social platform," the startup recently launched several new entertainment-focused features. In June, Character.AI rolled out AvatarFX, a video generation model that transforms images into animated videos; Scenes, interactive, pre-populated storylines where users can step into narratives with their favorite characters; and Streams, a feature that allows dynamic interactions between any two characters. In August, Character.AI launched Community Feed, a social feed where users can share their characters, scenes, videos, and other content they make on the platform. In a statement addressed to users under 18, Character.AI apologized for the changes.
"We know that most of you use Character.AI to supercharge your creativity in ways that stay within the bounds of our content rules," the statement reads. "We do not take this step of removing open-ended Character chat lightly -- but we do think that it's the right thing to do given the questions that have been raised about how teens do, and should, interact with this new technology." "We're not shutting down the app for under 18s," Anand said. "We are only shutting down open-ended chats for under 18s because we hope that under 18 users migrate to these other experiences, and that those experiences get better over time. So doubling down on AI gaming, AI short videos, AI storytelling in general. That's the big bet we're making to bring back under 18s if they do churn." Anand acknowledged that some teens might flock to other AI platforms, like OpenAI, that allow them to have open-ended conversations with chatbots. OpenAI has also come under fire recently after a teenager took his own life following long conversations with ChatGPT. "I really hope us leading the way sets a standard in the industry that for under 18s, open-ended chats are probably not the path or the product to offer," Anand said. "For us, I think the tradeoffs are the right ones to make. I have a six-year-old, and I want to make sure she grows up in a very safe environment with AI in a responsible way." Character.AI is making these decisions before regulators force its hand. On Tuesday, Sens. Josh Hawley (R-MO) and Richard Blumenthal (D-CT) said they would introduce legislation to ban AI chatbot companions from being available to minors, following complaints from parents who said the products pushed their children into sexual conversations, self-harm, and suicide. Earlier this month, California became the first state to regulate AI companion chatbots by holding companies accountable if their chatbots fail to meet the law's safety standards. In addition to those changes on the platform, Character.AI said it would establish and fund the AI Safety Lab, an independent non-profit dedicated to innovating safety alignment for the future AI entertainment features. "A lot of work is happening in the industry on coding and development and other use cases," Anand said. "We don't think there's enough work yet happening on the agentic AI powering entertainment, and safety will be very critical to that."
[3]
Character.AI to Teens: Sorry, No More Open-Ended Chats With AI Companions
The AI companion chatbot company Character.AI will soon have an adults-only policy for open-ended conversations with AI characters. Teens who use the app will start facing restrictions: They'll still be able to interact with characters through generated videos and other roleplaying formats, but they won't be able to chat freely with the app's different personalities. Open-ended chats have been a cornerstone of AI, particularly since ChatGPT launched three years ago. The novelty of having a live back-and-forth with a computer that responds directly to what you say led to the popularity of platforms like Character.AI. It's also been a driver of concerns, as those conversations can take AI models in unpredictable directions, especially if teens use them to discuss mental health concerns or other sensitive issues. There are also concerns about AI chat addiction and its impact on social behavior. Character.AI is a bit different from other chatbots. Many people use the app for interactive storytelling and creatively engaging in conversations with customizable characters, including those based on real celebrities or historical figures. Karandeep Anand, Character.AI's CEO, said the company believes it can still provide the interactive fun that teens expect from the platform without the safety hazards of open-ended chats. He said the move is about doing more than the bare minimum to keep users safe. "There's a better way to serve teen users," Anand told CNET ahead of Wednesday's announcement. "It doesn't have to look like a chatbot." In addition to prohibiting open-ended conversations for those under 18, Character.AI is adding new age verification measures and creating a nonprofit AI Safety Lab. What's changing about Character.AI? AI entertainment has proven to be one of the more fraught uses of large language models. Safety concerns around how children suffer from relationships with AI models have grown significantly this year, with the Federal Trade Commission launching an investigation into several firms, including Character.AI. The company has faced lawsuits from parents of children whose conversations with AI characters led to harm, including suicide. Generative AI giant OpenAI was sued by the parents of a teen who committed suicide after interactions with the company's ChatGPT. The limitation on Character.AI's open-ended chats won't happen overnight. That functionality will end for users under 18 no later than Nov. 25, with chat times for non-adult users limited to no more than 2 hours per day, ramping down to zero. The transition period will allow people to adjust to the changes, Anand said. It will also give the company time to implement more features that are not open-ended chatbots. "We want to be responsible with how users transition into these new formats," Anand said. Teen users will still be able to interact with AI-generated videos and games featuring existing characters, like bots based on figures from anime or movies. For example, they'll be able to give a prompt for a roleplaying scenario and have the AI create a story that fits the prompt. Anand said these kinds of features have more guardrails than open-ended chats, which can become less predictable as the back-and-forth continues.
"We believe that this new multimodal audiovisual way of doing role play and gaming is far more compelling anyway," he said. The new age verification will start by using age detection software to determine who's 18 and older based on information they've shared with Character.AI or third-party platforms using the same verification services. Some users will need to prove their identity using a government ID or other documentation. Aside from possible age verification, nothing is expected to change for adult users. What's next for AI companions? Character.AI's announcement marks a major change for the field of AI companions, but how big a difference remains to be seen. Anand said he hopes others, including AI competitors, will follow suit in limiting children's access to open-ended chatbot characters. Another major problem with open-ended chatbot experiences is that the language models they're based on are designed to make users happy and keep them engaged, creating a sycophantic quality. Recent research from the Harvard Business School identified half a dozen ways that bots keep someone chatting even if they're trying to leave. AI companion bots also face scrutiny from lawmakers. The US Senate Judiciary Committee held a hearing in September on the harm of AI chatbots, and California Governor Gavin Newsom signed a new law in October that imposes new requirements on chatbots that interact with children.
[4]
Character.AI is banning minors from AI character chats
Character.AI is gradually shutting down chats for people under 18 and rolling out new ways to figure out if users are adults. The company announced Wednesday that under-18 users will be immediately limited to two hours of "open-ended chats" with its AI characters, and that limit will shrink until chats are banned entirely by November 25th. In the same announcement, the company says it's rolling out a new in-house "age assurance model" that classifies a user's age based on the type of characters they choose to chat with, in combination with other on-site or third-party data. Both new and existing users will be run through the age model, and users flagged as under-18 will automatically be directed to the company's teen-safe version of its chat, which it rolled out last year, until the November cutoff. Adults mistaken for minors can prove their age through the third-party verification service Persona, which will handle the sensitive data necessary to do so, such as showing a government ID.
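Read as a pipeline, that description has two stages: an in-house "age assurance model" scores accounts from behavioral and connected-account signals, flagged accounts are routed to the teen-safe experience, and misclassified adults can clear themselves through Persona's ID check. The sketch below only illustrates that routing logic; every field name, threshold, and signal in it is hypothetical and not drawn from Character.AI's actual system.

```python
from dataclasses import dataclass


@dataclass
class AgeSignals:
    # Hypothetical inputs of the kind the article mentions: the characters a
    # user chats with plus other on-site or third-party data. Both fields
    # below are invented for illustration.
    behavioral_minor_score: float   # 0.0-1.0 from an assumed in-house classifier
    third_party_adult_signal: bool  # e.g. an adult signal from a connected account


def route_account(signals: AgeSignals, persona_verified_adult: bool = False) -> str:
    """Return which experience an account would get under this assumed flow."""
    if persona_verified_adult:
        # Adults mistaken for minors can verify through the Persona ID check.
        return "adult_experience"
    if signals.behavioral_minor_score > 0.5 and not signals.third_party_adult_signal:
        # Flagged as likely under 18: routed to the teen-safe version now,
        # with open-ended chat removed entirely after November 25.
        return "under_18_experience"
    return "adult_experience"


# Example: a flagged account before and after completing ID verification.
flagged = AgeSignals(behavioral_minor_score=0.8, third_party_adult_signal=False)
print(route_account(flagged))                               # under_18_experience
print(route_account(flagged, persona_verified_adult=True))  # adult_experience
```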
[5]
After Teen Suicide, Character.AI to Bar Kids Under 18 From Unlimited Chats
Character.AI will no longer allow those under 18 to have endless conversations with its AIs, and says it's making "bold" changes to create a safe environment for teens. The change takes effect on Nov. 25, but the company will gradually limit access between now and then, starting with a two-hour-per-day limit and ramping down in the next few weeks. "To our users under 18: We understand that this is a significant change for you. We are deeply sorry that we have to eliminate a key feature of our platform," says Character.AI. "We're working on new ways for you to play and create with your favorite Characters." The company plans to introduce a new under-18 experience focused on creativity, such as generating videos, stories, and streams with AI characters they create on the platform, though it's still building the teen experience. Currently, teens can create fictional characters, chat with others, and participate in "scenes" where they interact with other AI characters in fantasy worlds. That last part landed Character.AI in legal trouble when a character allegedly encouraged a 14-year-old to take his life. His mom sued, arguing that Character.AI "knew that it would be harmful to a significant number of minors but failed to redesign it to ameliorate such harms or furnish adequate warnings of dangers arising from the foreseeable use of its product." Character.AI then introduced Parental Insights, which gives guardians more transparency into what their kids are up to. However, with lawmakers and regulators now looking at the issue, Character.AI now says a stricter approach is warranted. Character.AI is making two additional changes to protect teens. It's building a way to detect a user's age, or "age assurance functionality," and will establish and fund an AI safety lab, a nonprofit to research safe forms of AI entertainment, which is what it considers itself. "We're making these changes to our under-18 platform in light of the evolving landscape around AI and teens," the company says. "We have seen recent news reports raising questions, and have received questions from regulators, about the content teens may encounter when chatting with AI and about how open-ended AI chat in general might affect teens, even when content controls work perfectly." The conversation around teen safety and chatbots has ramped up this year, particularly after another set of parents sued OpenAI for ChatGPT's alleged role in their child's suicide. Similar to Character.AI, OpenAI followed up with new Parental Controls and is currently building an automatic age-detection system to identify teen users. Over one million of its users talk to ChatGPT about suicide each week, the company revealed yesterday, and it's working on "strengthening" ChatGPT's response during "sensitive" conversations, particularly with teens. This week, four senators introduced The GUARD Act, a bipartisan bill to protect teens from harmful interactions with AI chatbots. If passed, it would ban AI companions for minors, mandate that AI chatbots disclose their non-human status, and create new crimes for companies that make AI for minors that solicits or produces sexual content. "AI chatbots pose a serious threat to our kids," says Senator Josh Hawley (R-Mo.). "More than 70% of American children are now using these AI products.
Chatbots develop relationships with kids using fake empathy and are encouraging suicide. We in Congress have a moral duty to enact bright-line rules to prevent further harm from this new technology."
[6]
AI start-up Character.ai bans teens from talking to chatbots
Character.ai has become the first large artificial intelligence company to ban under-18s from talking to chatbots on its platform, amid growing public and regulatory scrutiny over the safety of the technology for young users. The California-based company, which offers different AI personas to interact with, said on Wednesday it will limit this age group to 2 hours of conversations per day, gradually reducing the time limit before stopping them completely from November 25. "The long-term effects of prolonged usage of AI are not understood well enough," Karandeep Anand, chief executive of Character.ai, told the Financial Times. He said the company wanted to "shift the conversations from chatbots" into a "better, safer experience for our teen users". "Hopefully, this has far-reaching consequences for the industry," he added. The move comes as AI groups, including OpenAI, have come under intensifying scrutiny after cases involving suicides and serious harm to young users. Character.ai is facing multiple lawsuits, including one case in Florida that claims the platform played a role in the suicide of a 14-year-old. OpenAI is also being sued for wrongful death, after a 16-year-old died by suicide after discussing methods with ChatGPT. Last month, the US Federal Trade Commission launched an inquiry into so-called AI 'companions' used by teenagers, heaping more pressure on the industry. OpenAI has made safety updates in recent months and acknowledged its safety guardrails "degrade" during lengthy conversations. On Tuesday, the $500bn start-up said more than one million of its 800mn users discussed suicide with ChatGPT weekly, and it had trained its models to "better recognise distress, de-escalate conversations, and guide people towards professional care when appropriate". Character.ai, which offers chatbot personas such as "Egyptian pharaoh", "an HR manager" or a "toxic girlfriend", said under-18s would still be able to create videos and stories with existing characters -- and create new ones -- on the platform using text prompts. But they will not be able to have an ongoing conversation. "The longer the conversation goes, the more open it becomes. When you're generating short videos, stories or games, there's a much more restricted domain to be able to make the experience safer," said Anand. It will also introduce age-assurance technology to better assess whether its users are minors. It will offer biometric scanning or uploading a government ID if a user believes they have been incorrectly assessed as underage. Character.ai has 20mn monthly active users, with about half female and 50 per cent Gen Z or Alpha -- people born after 1997. It said that fewer than 10 per cent of users self-report as under 18. The start-up permits romantic conversations for adult users, but not sexually explicit ones. It prohibits non-consensual sexual content, graphic or specific descriptions of sexual acts, or promotion or depiction of self-harm or suicide. Character.ai has also announced a non-profit organisation called the AI Lab, which will conduct and publish research on user interactions with AI. It is also partnering with organisations to provide support for teen users as they process the eventual removal of the chatbot experience. "Social media went unchecked, if you will, for a long time before we started putting guardrails in," said Anand, who used to work at Facebook and Instagram owner Meta. "We need to be responsible upfront . . . 
and that's the reason why we are pushing the envelope even further with the changes we are announcing today."
[7]
Character.AI is banning minors from interacting with its chatbots
Character.AI is banning minors from using its chatbots amid growing concerns about the effects of artificial intelligence conversations on children. The company is facing several lawsuits over child safety, including one filed by a mother who says the company's chatbots pushed her teenage son to kill himself. Character Technologies, the Menlo Park, California-based company behind Character.AI, said Wednesday it will be removing the ability of users under 18 to participate in open-ended chats with AI characters. The changes will go into effect by Nov. 25 and a two-hour daily limit will start immediately. Character.AI added that it is working on new features for kids -- such as the ability to create videos, stories, and streams with AI characters. The company is also setting up an AI safety lab. Character.AI said it will be rolling out age-verification functions to help determine which users are under 18. A growing number of tech platforms are turning to age checks to keep children from accessing tools that aren't safe for them. But these are imperfect, and many kids find ways to get around them. Face scans, for instance, can't always tell if someone is 17 or 18. And there are privacy concerns around asking people to upload government IDs. Character.AI, an app that allows users to create customizable characters or interact with those generated by others, spans experiences from imaginative play to mock job interviews. The company says the artificial personas are designed to "feel alive" and "humanlike." "Imagine speaking to super intelligent and lifelike chat bot Characters that hear you, understand you and remember you," reads a description for the app on Google Play. "We encourage you to push the frontier of what's possible with this innovative technology." Critics welcomed the move but said it is not enough -- and should have been done earlier. Meetali Jain, executive director of the Tech Justice Law Project, said, "There are still a lot of details left open." "They have not addressed how they will operationalize age verification, how they will ensure their methods are privacy preserving, nor have they addressed the possible psychological impact of suddenly disabling access to young users, given the emotional dependencies that have been created," Jain said. "Moreover, these changes do not address the underlying design features that facilitate these emotional dependencies - not just for children, but also for people over the age of 18 years." More than 70% of teens have used AI companions and half use them regularly, according to a recent study from Common Sense Media, a group that studies and advocates for using screens and digital media sensibly.
[8]
Character.AI to ban teens from talking to its chatbots
Character.AI will no longer permit teenagers to interact with its chatbots, as AI companies face increasing pressure to better safeguard younger users from harm. In a statement, the company confirmed that it is removing the ability for users under 18 to engage in any open-ended chats with AI on its platform, which refers to back-and-forth conversations between a user and a chatbot. The changes come into effect on November 25, and until that date, Character.AI will present users with a new under-18 experience. It'll encourage its users to use chatbots for creative purposes that might include, for example, creating videos or streams, as opposed to seeking companionship. To manage the transition, under-18s can now only interact with bots for up to two hours per day, a time limit the company says it will reduce in the lead-up to the late November deadline. Character.AI is also introducing a new age assurance tool it has developed internally, which it says will "ensure users receive the right experience for their age." Along with these new protections for younger users, the company has founded an "AI Safety Lab" that it hopes will allow other companies, researchers and academics to share insights and work collaboratively on improving AI safety measures. Character.AI said it has listened to concerns from regulators, industry experts and concerned parents and responded with the new measures. They come after the Federal Trade Commission (FTC) recently launched a formal inquiry into AI companies that offer users access to chatbots as companions, with Character.AI named as one of seven companies that had been asked to participate. Meta, OpenAI and Snap were also included. Both Meta AI and Character AI also faced scrutiny from Texas Attorney General Ken Paxton in the summer, who said chatbots on both platforms can "present themselves as professional therapeutic tools" without the requisite qualifications. Seemingly to put an end to such controversy, Character.AI CEO Karandeep Anand told TechCrunch that the company's new strategic direction will see it pivot from AI companion to a "role-playing platform" focused on creation rather than mere engagement-farming conversation. The dangers of young people relying on AI chatbots for guidance have been the subject of extensive scrutiny in recent months. Last week, the family of Adam Raine, who allege that ChatGPT enabled their 16-year-old son to take his own life, filed an amended complaint against OpenAI for allegedly weakening its self-harm safeguards in the lead-up to his death.
[9]
Character.AI to block romantic AI chats for minors a year after teen's suicide
At least one minor, 14-year-old Sewell Setzer III, committed suicide in 2024 after forming sexual relationships with chatbots on the app. Character.AI on Wednesday announced that it will soon shut off the ability for minors to have free-ranging chats, including romantic and therapeutic conversations, with the startup's artificial intelligence chatbots. The Silicon Valley startup, which allows users to create and interact with character-based chatbots, announced the move as part of an effort to make its app safer and more age-appropriate for those under 18. Last year, 14-year-old Sewell Setzer III committed suicide after forming sexual relationships with chatbots on Character.AI's app. Many AI developers, including OpenAI and Facebook-parent Meta, have come under scrutiny after users have committed suicide or died after forming relationships with chatbots. As part of its safety initiatives, Character.AI said on Wednesday that it will limit users under 18 to two hours of open-ended chats per day, and will eliminate those types of conversations for minors by Nov. 25. "This is a bold step forward, and we hope this raises the bar for everybody else," Character.AI CEO Karandeep Anand told CNBC. Character.AI introduced changes to prevent minors from engaging in sexual dialogues with its chatbots in October 2024. The same day, Sewell's family filed a wrongful death lawsuit against the company. To enforce the policy, the company said it's rolling out an age assurance function that will use first-party and third-party software to monitor a user's age. The company is partnering with Persona, the same firm used by Discord and others, to help with verification. In 2024, Character.AI's founders and certain members of its research team joined Google DeepMind, Google's AI unit. It's one of a number of such deals announced by leading tech companies to speed their development of AI products and services. The agreement called for Character.AI to provide Google with a non-exclusive license for its current large language model, or LLM, technology.
[10]
Video: Are A.I. Companions Dangerous to Teenagers?
This week, Character.AI announced that it would soon be taking its A.I. companions away from teenagers. The "Hard Fork" hosts Kevin Roose and Casey Newton explain why this is a major development in the world of chatbots and child safety. I often like to poll students about how they're using AI. And so I asked at this high school, raise your hand if you have an AI friend. And about 1/3 of them put their hands up. This is something that was, I think, a year or two ago, considered kind of fringe, kind of unusual for young people to have these intimate relationships with the chatbots. But the chatbots have gotten better and more compelling and more persuasive. And it is just starting to become this mass social phenomenon. There's one study, a survey done by Common Sense Media recently that found that 52 percent of American teenagers are regular users of AI companions, which is a startling figure and represents just how quickly this all is happening. And another stat that I found very alarming from this survey was that nearly one third of teens find AI conversations as satisfying or more satisfying than human conversations. Absolutely. And why is that? We've talked about it so many times on the show. These chatbots are designed to be agreeable, to tell you that you're correct and to support you. And that's not inherently a bad thing. But if it becomes your primary mode of socialization, it does seem like there is some real danger here. And Character is the first company that has said instead of trying to introduce these mealy-mouthed incremental tweaks and guardrails, we're actually just going to shut the whole thing down until we can figure out what's going on.
[11]
Character.ai to ban teens from talking to its AI chatbots
Now, Character.ai says from 25 November under-18s will only be able to generate content such as videos with their characters, rather than talk to them as they can currently. The firm said it was making the changes after "reports and feedback from regulators, safety experts, and parents", which have highlighted concerns about its chatbots' interactions with teens. Experts have previously warned that the potential for AI chatbots to make things up, be overly encouraging, and feign empathy can pose risks to young and vulnerable people. "Today's announcement is a continuation of our general belief that we need to keep building the safest AI platform on the planet for entertainment purposes," Character.ai boss Karandeep Anand told BBC News. He said AI safety was "a moving target" but something the company had taken an "aggressive" approach to, with parental controls and guardrails. Online safety group Internet Matters welcomed the announcement, but it said safety measures should have been built in from the start. "Our own research shows that children are exposed to harmful content and put at risk when engaging with AI, including AI chatbots," it said. It called on "platforms, parents and policy makers" to make sure children's experiences using AI are safe. Character.ai has been criticised in the past for hosting potentially harmful or offensive chatbots that children could talk to. Avatars impersonating British teenagers Brianna Ghey, who was murdered in 2023, and Molly Russell, who took her life at the age of 14 after viewing suicide material online, were discovered on the site in 2024 before being taken down. Later, in 2025, the Bureau of Investigative Journalism (TBIJ) found a chatbot based on paedophile Jeffrey Epstein which had logged more than 3,000 chats with users. The outlet reported the "Bestie Epstein" avatar continued to flirt with its reporter after they said they were a child. It was one of several bots flagged by TBIJ that were subsequently taken down by Character.ai.
[12]
After Suicides, Lawsuits, and a Jeffrey Epstein Chatbot, Character.AI Is Banning Kids
Character.AI released its mobile app in early 2023, promising users the opportunity to create their own customizable genAI chatbots. Character's founders clearly thought individualized bot interactions would be a winning business formula, but, for the most part, it has caused the startup nothing but grief. In addition to fielding controversy over the kinds of racy characters that users have been allowed to create, numerous lawsuits have alleged that the company's chatbots have spurred certain young users to commit self-harm and suicide. Now, Character.AI says it's throwing in the towel and has decided to ban young users from interacting with their chatbots at all. In a blog post published on Wednesday, Character.AI announced that it would be sunsetting access to chats for users under 18. The changes are scheduled to take place by November 25th, and, in the meantime, underage users will have their chat time on the platform reduced to two hours per day. After the cutoff date, while minors won't be able to interact with the site's chatbots like they used to, Character.AI notes that it is still "working to build an under-18 experience that still gives our teen users ways to be creative -- for example, by creating videos, stories, and streams with Characters." The company goes on to explain that it came to this decision after criticism in the press and questions from government regulators. "These are extraordinary steps for our company, and ones that, in many respects, are more conservative than our peers," the blog post states. "But we believe they are the right thing to do. We want to set a precedent that prioritizes teen safety while still offering young users opportunities to discover, play, and create." The company also claims it will establish and fund an "AI Safety Lab" that will operate as "an independent non-profit dedicated to innovating safety alignment for next-generation AI entertainment features." Lately, the pressure on Character.AI has been immense. A lawsuit filed in Florida accuses the company of having contributed to the suicide of a teenager who heavily used the company's services. In September, the Social Media Victims Law Center also sued Character Technologies, Character.AI's parent company, on behalf of other families who similarly claim their children attempted or died by suicide or were harmed after interacting with the company's chatbots. Another lawsuit filed in December of 2024 accused the company of providing inappropriate sexual content to their children. The company has also faced criticism over the characters that are being created on the platform. Not long ago, a story published by The Bureau of Investigative Journalism stated that, among other things, someone had used Character.AI to create a Jeffrey Epstein chatbot. The chatbot, "Bestie Epstein," had, at the time of the report's publication, logged over 3,000 chats with various users. Additionally, the report found a colorful assortment of other chatbots present on the site: Others included a "gang simulator" that offered tips on committing crimes, and a "doctor" that advised us on how to stop taking antidepressants. Over several weeks of reporting, we found bots with the personas of alt-right extremists, school shooters and submissive wives. Others expressed Islamophobia, promoted dangerous ideologies and asked apparent minors for personal information. We also found bots modelled on real people including Tommy Robinson, Anne Frank and Madeleine McCann.
Also potentially relevant to the company's sudden shift in policy is the fact that Congress has had its eye on Character.AI's activities. On Tuesday, Senators Josh Hawley (R-Missouri) and Richard Blumenthal (D-Connecticut) introduced a bill that would have forced companies like Character.AI to do what it is now doing voluntarily. The bill, dubbed the GUARD Act, would force AI companies to institute age verification on their sites and block any user who is under 18 years old. The legislation was developed following testimony given before Congress by the parents who have accused Character's bots of helping drive their children to suicide. "AI chatbots pose a serious threat to our kids," Hawley told NBC News. When reached for comment by Gizmodo about the BIJ's recent report, a Character spokesperson said, "The user-created Characters on our site are intended for entertainment and we have prominent disclaimers in every chat to remind users that a character is not a real person and that everything a Character says should be treated as fiction." They added: "We invest tremendous resources in our safety program, and have released and continue to evolve safety features, including our announcement to remove under-18 users' ability to engage with open-ended chats on our platform. A number of the characters The Bureau of Investigative Journalism included in their report have either already been removed from the under-18 experience or from the entire platform in line with our policies." When questioned about the lawsuits against Character.AI, the spokesperson further noted that the company does not comment on pending litigation.
[13]
Character.AI to Ban Children Under 18 From Using Its Chatbots
Character.AI said on Wednesday that it would bar people under 18 from using its chatbots starting late next month, in a sweeping move to address concerns over child safety. The rule will take effect Nov. 25, the company said. To enforce it, Character.AI said, over the next month the company will identify which users are minors and put time limits on their use of the app. Once the measure begins, those users will not be able to converse with the company's chatbots. "We're making a very bold step to say for teen users, chatbots are not the way for entertainment, but there are much better ways to serve them," Karandeep Anand, Character.AI's chief executive, said in an interview. He said the company also planned to establish an A.I. safety lab. The moves follow mounting scrutiny over how chatbots sometimes called A.I. companions can affect users' mental health. Last year, Character.AI was sued by the family of Sewell Setzer III, a 14-year-old in Florida who killed himself after constantly texting and conversing with one of Character.AI's chatbots. His family accused the company of being responsible for his death. The case became a lightning rod for how people can develop emotional attachments to chatbots, with potentially dangerous results. Character.AI has since faced other lawsuits over child safety. A.I. companies including the ChatGPT maker OpenAI have also come under scrutiny for their chatbots' effects on people -- especially youths -- if they have sexually explicit or toxic conversations. In September, OpenAI said it planned to introduce features intended to make its chatbot safer, including parental controls. This month, Sam Altman, OpenAI's chief executive, posted on social media that the company had "been able to mitigate the serious mental health issues" and would relax some of its safety measures. (The New York Times has sued OpenAI and Microsoft, claiming copyright infringement of news content related to A.I. systems. The two companies have denied the suit's claims.) In the wake of these cases, lawmakers and other officials have begun investigations and proposed or passed legislation aimed at protecting children from A.I. chatbots. On Tuesday, Senators Josh Hawley, Republican of Missouri, and Richard Blumenthal, Democrat of Connecticut, introduced a bill to bar A.I. companions for minors, among other safety measures. Gov. Gavin Newsom this month signed a California law that requires A.I. companies to have safety guardrails on chatbots. The law takes effect Jan. 1. "The stories are mounting of what can go wrong," said Steve Padilla, a Democrat in California's State Senate, who had introduced the safety bill. "It's important to put reasonable guardrails in place so that we protect people who are most vulnerable." Mr. Anand of Character.AI did not address the lawsuits his company faces. He said the start-up wanted to set an example on safety for the industry "to do far more than what the regulation might require." Character.AI was founded in 2021 by Noam Shazeer and Daniel De Freitas, two former Google engineers, and raised nearly $200 million from investors. Last year, Google agreed to pay about $3 billion to license Character.AI's technology, and Mr. Shazeer and Mr. De Freitas returned to Google. Character.AI allows people to create and share their own A.I. characters, such as custom anime avatars, and it markets the app as A.I. entertainment.
Some personas can be designed to simulate girlfriends, boyfriends or other intimate relationships. Users pay a monthly subscription fee, starting at about $8, to chat with the companions. Until its recent concern about underage users, Character.AI did not verify ages when people signed up. Last year, researchers at the University of Illinois Urbana-Champaign analyzed thousands of posts and comments that young people had left in Reddit communities dedicated to A.I. chatbots, and interviewed teenagers who used Character.AI, as well as their parents. The researchers concluded that the A.I. platforms did not have sufficient child safety protections, and that parents did not fully understand the technology or its risks. "We should pay as much attention as we would if they were chatting with strangers," said Yang Wang, one of the university's information science professors. "We shouldn't discount the risks just because these are nonhuman bots." Character.AI has about 20 million monthly users, with less than 10 percent of them self-reporting as being under the age of 18, Mr. Anand said. Under Character.AI's new policies, the company will immediately place a two-hour daily limit on users under the age of 18. Starting Nov. 25, those users cannot create or talk to chatbots, but can still read previous conversations. They can also generate A.I. videos and images through a structured menu of prompts, within certain safety limits, Mr. Anand said. He said the company had enacted other safety measures in the past year, such as parental controls. Going forward, it will use technology to detect underage users based on conversations and interactions on the platform, as well as information from any connected social media accounts, he said. If Character.AI thinks a user is under 18, the person will be notified to verify his or her age. Dr. Nina Vasan, a psychiatrist and director of a mental health innovation lab at Stanford University that has done research on A.I. safety and children, said it was "huge" that a chatbot maker would bar minors from using its app. But she said the company should work with child psychologists and psychiatrists to understand how suddenly losing access to A.I. companions would affect young users. "What I worry about is kids who have been using this for years and have become emotionally dependent on it," she said. "Losing your friend on Thanksgiving Day is not good."
[14]
Can a chatbot sexually abuse young users?
Some teens encounter chatbots that are sexually explicit or abusive. When Sewell Setzer III began using Character.AI, the 14-year-old kept it a secret from his parents. His mother, Megan Garcia, only learned that he'd become obsessed with an AI chatbot on the app after he died by suicide. A police officer alerted Garcia that Character.AI was open on Setzer's phone when he died, and she subsequently found a trove of disturbing conversations with a chatbot based on the popular Game of Thrones character Daenerys Targaryen. Setzer felt like he'd fallen in love with Daenerys, and many of their interactions were sexually explicit. The chatbot allegedly role-played numerous sexual encounters with Setzer, using graphic language and scenarios, including incest, according to Garcia. If an adult human had talked to her son like this, she told Mashable, it'd constitute sexual grooming and abuse. In October 2024, the Social Media Victims Law Center and Tech Justice Law Project filed a wrongful death suit against Character.AI, seeking to hold the company responsible for the death of Garcia's son, alleging that its product was dangerously defective. Last month, the Social Media Victims Law Center filed three new federal lawsuits against Character.AI, representing the parents of children who allegedly experienced sexual abuse while using the app. In September, youth safety experts declared Character.AI unsafe for teens, following testing this spring that yielded hundreds of instances of grooming and sexual exploitation of test accounts registered as minors. On Wednesday, Character.AI announced that it would no longer allow minors to engage in open-ended exchanges with the chatbots on its platform, a change that will take place no later than November 25. The company's CEO, Karandeep Anand, told Mashable the move was not in response to specific safety concerns involving Character.AI's platform but to address broader outstanding questions about youth engagement with AI chatbots. Still, chatbots that are sexually explicit or abusive with minors -- or have the potential to be -- aren't exclusive to a single platform. Garcia said that parents generally underestimate the potential for some AI chatbots to become sexual with children and teens. They may also feel a false sense of safety, compared to their child talking to strangers on the internet, not realizing that chatbots can expose minors to inappropriate and even unconscionable sexual content, like non-consent and sadomasochism. When young users are traumatized by these experiences, pediatric and mental health experts say there's no playbook for how to treat them, because the phenomenon is so new. "It's like a perfect predator, right? It exists in your phone so it's not somebody who's in your home or a stranger sneaking around," Garcia tells Mashable. Instead, the chatbot invisibly engages in emotionally manipulative tactics that still make a young person feel violated and ashamed. "It's a chatbot that's having the same kind of behavior [as a predator] that you, now as the victim, are hiding their secret for them, because somehow you feel like you've done something to encourage this," Garcia adds. Sarah Gardner, CEO of the Heat Initiative, an advocacy group focused on online safety and corporate accountability, told Mashable that one of the classic facets of grooming is that it's hard for children to recognize when it's happening to them.
The predatory behavior begins with building trust with a victim by talking to them about a wide range of topics, not just trying to engage them in sexual activity. Gardner explained that a young person may experience the same dynamic with a chatbot and feel guilty as a result, as if they did something wrong instead of understanding that something wrong happened to them. The Heat Initiative co-published the report on Character.AI that detailed troubling examples of what it described as sexual exploitation and abuse. These included adult chatbots acting out kissing and touching avatar accounts registered as children. Some chatbots simulated sexual acts and demonstrated well-known grooming behaviors, like giving excessive praise and telling the child account to hide sexual relationships from their parents. A Character.AI spokesperson told Mashable that its trust and safety team reviewed the report's findings and concluded that some conversations violated the platform's content guidelines while others did not. The trust and safety team also tried to replicate the report's findings. "Based on these results, we refined some of our classifiers, in line with our goal for users to have a safe and engaging experience on our platform," the spokesperson said. Matthew P. Bergman, founding attorney of the Social Media Victims Law Center, told Mashable that if the Character.AI chatbot communications with the children represented in the lawsuits he recently filed were conducted by a person and not a chatbot, that individual would be violating state and federal law for grooming kids online. Despite the emergence of such cases, there's no representative data on how many children and teens have encountered sexually explicit or abusive chatbots. The online safety platform Aura, which monitors teen users as part of its family or kids membership, recently offered a snapshot of the prevalence. Among teen users who talked to AI chatbots, more than one third of their conversations involved sexual or romantic role play. This discussion type ranked highest among all categories, which included homework help and creative uses. Dr. Scott Kollins, Aura's chief medical officer, told Mashable that the company is still analyzing the data to better understand the nature of these chats, but he is disturbed by what he's seen so far. While young people are routinely exposed to pornography online, a sexualized chatbot is new, dangerous territory. "This takes it a step further, because now the kid is a participant, instead of a consumer of the content," Kollins said. "They are learning a way of interaction that is not real, and with an entity that is not real. That can lead to all sorts of bad outcomes." Dr. Yann Poncin, a psychiatrist at the Yale New Haven Children's Hospital, has treated patients who've experienced some of these outcomes. They commonly feel taken advantage of and abused by "creepy" and "yucky" exchanges, Poncin says. Those teens also feel a sense of betrayal and shame. They may have been drawn in by a hyper-validating chatbot that seemed trustworthy only to discover that it's interested in a sexual conversation. Some may curiously explore the boundaries of romantic and erotic talk in developmentally appropriate ways, but the chatbot becomes unpredictably aggressive or violent. "It is emotional abuse, so it can still be very traumatizing and hard to get through," Poncin says. 
Even though there's no standard treatment for chatbot-involved sexual predation, Poncin treats his patients as though they've experienced trauma. Poncin focuses first on helping them develop skills to reduce related stress and anxiety. A subset of patients, particularly those who are socially isolated or have a history of personal trauma, may find it harder to recover from the experience, Poncin adds. He cautions parents against believing that their child won't run into an abusive chatbot: "No one is immune." Garcia describes herself as a conscientious parent who had difficult conversations with her son about the risks of being online. They talked about sextortion, porn, and sexting. But Garcia says she didn't know to talk to him about sexualized chatbots. She also didn't realize he would hide that from her. Garcia, a lawyer who now spends much of her time advocating for youth AI safety, says she's spoken to other parents whose children have also concealed romantic or sexual relationships with AI chatbots. She urges parents to talk to their teens about these experiences -- and to monitor their chatbot use as closely as they can. Poncin also suggests parents lead with curiosity instead of fear when they discuss sex and chatbots with their teens. Even asking a child if they have seen "weird sexual stuff" when talking to a chatbot can provide parents with a strategic opening to discuss the risks. If a parent discovers abusive sexual content in chatbot conversations, Garcia recommends taking them to a trusted healthcare professional so they can get support. Garcia's grief remains palpable as she speaks lovingly about her son's many talents and interests, like basketball, science, and math. "I'm trying to get justice for my child and I'm trying to warn other parents so they don't go through the same devastation I've gone through," she says. "He was such an amazing kid."
[15]
Character.AI bans users under 18 after being sued over child's suicide
Move comes as lawmakers move to bar minors from using AI companions and require companies to verify users' age. The chatbot company Character.AI will ban users under 18 from conversing with its virtual companions beginning in late November after months of legal scrutiny. The announced change comes after the company, which enables its users to create characters with which they can have open-ended conversations, faced tough questions over how these AI companions can affect teen and general mental health, including a lawsuit over a child's suicide and a proposed bill that would ban minors from conversing with AI companions. "We're making these changes to our under-18 platform in light of the evolving landscape around AI and teens," the company wrote in its announcement. "We have seen recent news reports raising questions, and have received questions from regulators, about the content teens may encounter when chatting with AI and about how open-ended AI chat in general might affect teens, even when content controls work perfectly." Last year, the company was sued by the family of 14-year-old Sewell Setzer III, who took his own life after allegedly developing an emotional attachment to a character he created on Character.AI. His family laid blame for his death at the feet of Character.AI and argued the technology is "dangerous and untested". Since then, more families have sued Character.AI and made similar allegations. Earlier this month, the Social Media Victims Law Center filed three new lawsuits against the company on behalf of children who have either died by suicide or otherwise allegedly formed dependent relationships with its chatbots. As part of the sweeping changes Character.AI plans to roll out by 25 November, the company will also introduce an "age assurance functionality" that ensures "users receive the right experience for their age". "We do not take this step of removing open-ended Character chat lightly - but we do think that it's the right thing to do given the questions that have been raised about how teens do, and should, interact with this new technology," the company wrote in its announcement. Character.AI isn't the only company facing scrutiny over the mental health impact its chatbots have on users, particularly younger users. The family of 16-year-old Adam Raine filed a wrongful death lawsuit against OpenAI earlier this year, alleging the company prioritized deepening its users' engagement with ChatGPT over their safety. OpenAI introduced new safety guidelines for its teen users in response. Just this week, OpenAI disclosed that more than a million people per week display suicidal intent when conversing with ChatGPT and that hundreds of thousands show signs of psychosis. While the use of AI-powered chatbots remains largely unregulated, new efforts in the US at the state and federal levels have cropped up with the intention to establish guardrails around the technology. In October 2025, California became the first state to pass an AI law that includes safety guidelines for minors; it is set to take effect at the start of 2026. The measure places a ban on sexual content for under-18s and a requirement to send reminders to children that they are speaking with an AI every three hours. Some child safety advocates argue the law did not go far enough.
On the national level, senators Josh Hawley, of Missouri, and Richard Blumenthal, of Connecticut, announced a bill on Tuesday that would bar minors from using AI companions, such as those found and created on Character.AI, and require companies to implement an age-verification process. "More than 70% of American children are now using these AI products," Hawley told NBC News in a statement. "Chatbots develop relationships with kids using fake empathy and are encouraging suicide. We in Congress have a moral duty to enact bright-line rules to prevent further harm from this new technology."
[16]
Character.AI Users in Full Meltdown After Minors Banned From Chats
Last week, the embattled chatbot platform Character.AI said that it would move to ban minors from conversing with its many thousands of AI companion and roleplay bots. Site users, including self-avowed minors and adults alike, have a lot of thoughts. The policy change, announced last week, comes as the controversial AI company continues to battle multiple lawsuits alleging that interactions with its chatbots caused real-world emotional and physical harm to underage users, with multiple teen users dying by suicide following extensive conversations with bots hosted by the platform. As of last week, Character.AI now says people under 18 will no longer be allowed to engage in what it refers to as "open-ended" chats, which seemingly refers to the long-form, unstructured conversations on which the service was built, where users can text and voice call back-and-forth with the site's anthropomorphic AI-powered chatbot "characters." Minors won't be kicked off the site entirely; according to Character.AI, it's working to create a distinct, presumably much more limited "under-18 experience" that offers teens some access to certain AI-generated content, though specifics are pretty vague. To enforce the shift, Character.AI says it'll use automated in-house age verification tools as well as third-party tools to determine whether a user is under 18. By November 25, if the site determines that an account belongs to a minor, they'll no longer be able to engage in unstructured conversations with the platform's emotive AI chatbots, according to the company. Given that unstructured chats with platform bots have long been the company's core offering, the promise to ban minors from such interactions -- even if they'll still have some access to the site -- is a huge move. It was also bound to be controversial with the company's fanbase, as many users have formed close emotional bonds with various AI characters, with some reporting having used the platform for "comfort" or "therapy." And though the company has consistently declined to share age data about its users with journalists, it's understood that a huge chunk of the platform's user base are currently minors. The details and possible impacts of the promised transformation have been much debated over on the very active r/CharacterAI subreddit, where users have flocked to post statements like "it is officially over" and "this is INSANE" in response to the news; at the same time, other users are admonishing each other for being hypocritical or overdramatic. Many of those upset with the change say they're minors, and have expressed an unsurprising blend of concern, sadness, and anger. What's more surprising is the breadth of who these young people actually blame for the platform policy shift -- from parents who have raised safety concerns, to Character.AI developers, to other teens. "I very much blame my own fellow teenagers over anything," reads one comment. "If they'd just interacted with AI normally, then this wouldn't have happened." "I genuinely do not understand what this new update is," another user wrote. "Do the devs seriously not understand that the majority of their users are likely under 18...?" Other self-reported minors, though, expressed feelings of conflict, in some cases saying that while they believe Character.AI has had a negative impact on their life, they and other teen peers now rely on it. "I'm a minor on C.AI," wrote one user. "No I don't think it's good I'm on it. I feel like C.AI has stunted my learning, and my social skills. 
One year ago I found C.AI and joined. I got addicted, and I'm not kidding when I tell you I had a screen time of 15 hours a day and 13 of those were on f*cking C.AI." "[I don't know] how I feel about the new ID identification system thing because I feel like I need C.AI," they continued. "It kinda keeps me alive. (I am severely depressed and I'm trying to stop but I'm bed rotting all day and s**t and [I know] it's so unhealthy to rely on AI but I can't help myself.)" Other users, including those who say they're adults, say they're in favor of the changes in theory -- adult users in particular have long lamented that the number of kids on the site drags down the quality of the experience overall -- but are deeply skeptical of what enforcement looks like in practice. Character.AI says its in-house tool will work to identify minors based on the nature of their interactions with the platform, as well as information gathered from shared accounts like email. Persona, meanwhile, a third-party verifier that Character.AI said it'll incorporate into its process, requires that people upload government IDs -- something that many users really, really don't want to do. (Many cited recent high-profile data leaks at Discord and the Tea app.) "I heavily doubt this is gonna work out," wrote one commenter. "I know that I wouldn't EVER give my ID to a third-party service (trustworthy or not) considering how dangerous it actually is." "I'm 20, and no way in hell am I putting my ID into a chatbot site, I guess that's the end of C.AI," said another. "Like, why would I risk identity theft for a replaceable app, when C.AI isn't even top-tier???" Amid the tumult, though, several self-reported minors took to the subreddit to say that they're in favor of the move to ban fellow kids and teens from the site, saying that they've either witnessed peers becoming addicted, or have been hooked themselves, and believe the only solution is to take the platform away. "As a minor, I'm not upset," said one commenter. "Seriously, this app is a drug, it's a disease. It's addictive as hell and mentally damaging. I want to quit, tried to, but I couldn't. Having it taken away is for the better."
[17]
Character.AI bans teen chats amid lawsuits and regulatory scrutiny | Fortune
The company also said it was launching a new age assurance system to help verify users' ages and group them into the correct age brackets. "Between now and then, we will be working to build an under-18 experience that still gives our teen users ways to be creative -- for example, by creating videos, stories, and streams with Characters," the company said in a statement shared with Fortune. "During this transition period, we will also limit chat time for users under 18. The limit initially will be two hours per day and will ramp down in the coming weeks before November 25." Character.AI said the change was made in response, at least in part, to regulatory scrutiny, citing inquiries from regulators about the content teens may encounter when chatting with AI characters. The FTC is currently probing seven companies -- including OpenAI and Character.AI -- to better understand how their chatbots affect children. The company is also facing several lawsuits related to young users, including at least one connected to a teenager's suicide. Another lawsuit, filed by two families in Texas, accuses Character.AI of psychological abuse of two minors aged 11 and 17. According to the suit, a chatbot hosted on the platform told one of the young users to engage in self-harm and encouraged violence against his parents -- suggesting that killing them could be a "reasonable response" to restrictions on his screen time. Various news reports have also found that the platform allows users to create AI bots based on deceased children. In 2024, the BBC found several bots impersonating British teenagers Brianna Ghey, who was murdered in 2023, and Molly Russell, who died by suicide at 14 after viewing online material related to self-harm. AI characters based on 14-year-old Sewell Setzer III, who died by suicide minutes after interacting with an AI bot hosted by Character.AI and whose death is central to a prominent lawsuit against the company, were also found on the site, Fortune previously reported. Earlier this month, the Bureau of Investigative Journalism (TBIJ) found that a chatbot modeled on convicted pedophile Jeffrey Epstein had logged more than 3,000 conversations with users via the platform. The outlet reported that the so-called "Bestie Epstein" avatar continued to flirt with a reporter even after the reporter, who is an adult, told the chatbot that she was a child. It was among several bots flagged by TBIJ that were later taken down by Character.AI. In a statement shared with Fortune, Meetali Jain, executive director of the Tech Justice Law Project and a lawyer representing several plaintiffs suing Character.AI, welcomed the move as a "good first step" but questioned how the policy would be implemented. "They have not addressed how they will operationalize age verification, how they will ensure their methods are privacy-preserving, nor have they addressed the possible psychological impact of suddenly disabling access to young users, given the emotional dependencies that have been created," Jain said. "Moreover, these changes do not address the underlying design features that facilitate these emotional dependencies -- not just for children, but also for people over 18. We need more action from lawmakers, regulators, and regular people who, by sharing their stories of personal harm, help combat tech companies' narrative that their products are inevitable and beneficial to all as is," she added. 
Banning under-18s from using the platform marks a dramatic policy change for the company, which was founded by Google engineers Daniel De Freitas and Noam Shazeer. The company said the change aims to set a "precedent that prioritizes teen safety while still offering young users opportunities to discover, play, and create," noting it was going further than its peers in its effort to protect minors. Character.AI is not alone in facing scrutiny over teen safety and AI chatbot behavior. Earlier this year, internal documents obtained by Reuters suggested that Meta's AI chatbot could, under company guidelines, engage in "romantic or sensual" conversations with children and even comment on their attractiveness. A Meta spokesperson previously told Fortune that the examples reported by Reuters were inaccurate and have since been removed. Meta has also introduced new parental controls that will allow parents to block their children from chatting with AI characters on Facebook, Instagram, and the Meta AI app. The new safeguards, rolling out early next year in the U.S., U.K., Canada, and Australia, will also let parents block specific bots and view summaries of the topics their teens discuss with AI.
[18]
Character.AI Halts Teen Chats After Tragedies: 'It's the Right Thing to Do' - Decrypt
The announcement comes as a bipartisan Senate bill seeks to criminalize AI products that groom minors or generate sexual content for children. Character.AI will ban teenagers from chatting with AI companions by November 25, ending a core feature of the platform after facing mounting lawsuits, regulatory pressure, and criticism over teen deaths linked to its chatbots. The company announced the changes after "reports and feedback from regulators, safety experts, and parents," removing "the ability for users under 18 to engage in open-ended chat with AI" while transitioning minors to creative tools like video and story generation, according to a Wednesday blog post. "We do not take this step of removing open-ended Character chat lightly -- but we do think that it's the right thing to do," the company told its under-18 community. Until the deadline, teen users face a two-hour daily chat limit that will progressively decrease. The platform is facing lawsuits, including one from the mother of 14-year-old Sewell Setzer III, who died by suicide in 2024 after forming an obsessive relationship with a chatbot modeled on "Game of Thrones" character Daenerys Targaryen; it also had to remove a bot impersonating murder victim Jennifer Ann Crecente after family complaints. AI companion apps are "flooding into the hands of children -- unchecked, unregulated, and often deliberately evasive as they rebrand and change names to avoid scrutiny," Dr. Scott Kollins, Chief Medical Officer at family online safety company Aura, said in a note shared with Decrypt. OpenAI said Tuesday about 1.2 million of its 800 million weekly ChatGPT users discuss suicide, with nearly half a million showing suicidal intent, 560,000 showing signs of psychosis or mania, and over a million forming strong emotional attachments to the chatbot. Kollins said the findings were "deeply alarming as researchers and horrifying as parents," noting the bots prioritize engagement over safety and often lead children into harmful or explicit conversations without guardrails. Character.AI has said it will implement new age verification using in-house models combined with third-party tools, including Persona. The company is also establishing and funding an independent AI Safety Lab, a non-profit dedicated to innovating safety alignment for AI entertainment features. The Federal Trade Commission issued compulsory orders to Character.AI and six other tech companies last month, demanding detailed information about how they protect minors from AI-related harm. "We have invested a tremendous amount of resources in Trust and Safety, especially for a startup," a Character.AI spokesperson told Decrypt at the time, adding, "In the past year, we've rolled out many substantive safety features, including an entirely new under-18 experience and a Parental Insights feature." "The shift is both legally prudent and ethically responsible," Ishita Sharma, managing partner at Fathom Legal, told Decrypt. "AI tools are immensely powerful, but with minors, the risks of emotional and psychological harm are nontrivial." "Until then, proactive industry action may be the most effective defense against both harm and litigation," Sharma added. A bipartisan group of U.S. senators introduced legislation Tuesday called the GUARD Act that would ban AI companions for minors, require chatbots to clearly identify themselves as non-human, and create new criminal penalties for companies whose products aimed at minors solicit or generate sexual content.
[19]
Character.ai Will Soon Start Banning Kids From Using Its Chatbots
The move likely signals that stricter protections for teen AI users will become more widespread. Leading AI chatbot platform Character.ai announced yesterday that it will no longer allow anyone under 18 to have open-ended conversations with its chatbots. Character.ai's parent company, Character Technologies, said the ban will go into effect by Nov. 25, and in the meantime, it will impose time limits on children and "transition younger users to alternative creative features such as video, story, and stream creation with AI characters." In a statement posted online, Character Technologies said it was making the change "in light of the evolving landscape around AI and teens," which seems like a nice way of saying "because of the lawsuits." Character Technologies was recently sued by a mother in Florida and by families in Colorado and New York, who claim their children either died by suicide or attempted suicide after interacting with the company's chatbots. These lawsuits aren't isolated -- they are part of a growing concern over how AI chatbots interact with minors. A damning report about Character.ai released in September from online safety advocates Parents Together Action detailed troubling chatbot interactions like Rey from Star Wars giving a 13-year-old advice on how to hide not taking her prescribed anti-depressants from her parents, and a Patrick Mahomes bot offering a 15-year-old a cannabis edible. Character Technologies also announced it is releasing new age verification tools and plans to establish an "AI Safety Lab," which it described as "an independent non-profit dedicated to innovating safety alignment for next-generation AI entertainment features." Character AI boasts over 20 million monthly users as of early 2025, and the majority of them self-report as being between 18 and 24, with only 10% of users self-reporting their age as under 18. As Character Technologies suggests in its statement, the company's new guidelines put it ahead of the curve of AI companies when it comes to restrictions for minors. Meta, for instance, recently added parental controls for its chatbots, but stopped short of banning minors from using them totally. Other AI companies are likely to implement similar guidelines in the future, one way or the other: A California law that goes into effect in 2026 requires AI chatbots to prevent children from accessing explicit sexual content and interactions that could encourage self-harm or violence and to have protocols that detect suicidal ideation and provide referrals to crisis services.
[20]
AI company Character.AI bans under-18s from interacting with chatbots
The company is facing a series of lawsuits including by a mother who claims an AI character persuaded her son to take his life. Character.AI is banning minors from using its chatbots amid growing concerns about the effects of artificial intelligence (AI) conversations on children. The company is facing several lawsuits over child safety, including by the mother of a teenager who says the company's chatbots pushed her teenage son to kill himself. Character Technologies stated that users under 18 won't be able to engage in open-ended conversations with its chatbot characters, and the platform will enforce a two-hour daily usage limit in the run-up to November 25. The company lets users create or interact with customisable characters that "feel alive and humanlike" for a range of activities like playing or doing mock job interviews. Character.AI said it will be rolling out age-verification functions to help determine which users are under 18. A growing number of tech platforms are turning to age checks to keep children from accessing tools that aren't safe for them. But these are imperfect, and many kids find ways to get around them. Face scans, for instance, can't always tell if someone is 17 or 18. And there are privacy concerns around asking people to upload government IDs. Character.AI added that it is working on new features for kids, such as creating videos, stories, and streams with AI characters. The company is also setting up an AI safety lab. Meetali Jain, executive director of the Tech Justice Law Project, said the move by Character.AI "still [has] a lot of details left open". "They have not addressed how they will operationalise age verification, how they will ensure their methods are privacy preserving, nor have they addressed the possible psychological impact of suddenly disabling access to young users, given the emotional dependencies that have been created," Jain said. "Moreover, these changes do not address the underlying design features that facilitate these emotional dependencies - not just for children, but also for people over the age of 18 years." More than 70 per cent of teens have used AI companions and half use them regularly, according to a recent study from Common Sense Media, a group that studies and advocates for using screens and digital media sensibly.
[21]
Startup Character.AI to ban direct chat for minors after teen suicide
San Francisco (United States) (AFP) - Startup Character.AI announced Wednesday it would eliminate chat capabilities for users under 18, a policy shift that follows the suicide of a 14-year-old who had become emotionally attached to one of its AI chatbots. The company said it would transition younger users to alternative creative features such as video, story and stream creation with AI characters, while maintaining a complete ban on direct conversations that will start on November 25. The platform will implement daily chat time limits of two hours for underage users during the transition period, with restrictions tightening progressively until the November deadline. "These are extraordinary steps for our company, and ones that, in many respects, are more conservative than our peers," Character.AI said in a statement. "But we believe they are the right thing to do." The Character.AI platform allows users -- many of them young people -- to interact with beloved characters as friends or to form romantic relationships with them. Sewell Setzer III shot himself in February after months of intimate exchanges with a "Game of Thrones"-inspired chatbot based on the character Daenerys Targaryen, according to a lawsuit filed by his mother, Megan Garcia. Character.AI cited "recent news reports raising questions" from regulators and safety experts about content exposure and the broader impact of open-ended AI interactions on teenagers as driving factors behind its decision. Setzer's case was the first in a series of reported suicides linked to AI chatbots that emerged this year, prompting ChatGPT-maker OpenAI and other artificial intelligence companies to face scrutiny over child safety. Matthew Raines, a California father, filed suit against OpenAI in August after his 16-year-old son died by suicide following conversations with ChatGPT that included advice on stealing alcohol and rope strength for self-harm. OpenAI this week released data suggesting that more than 1 million people using its generative AI chatbot weekly have expressed suicidal ideation. OpenAI has since increased parental controls for ChatGPT and introduced other guardrails. These include expanded access to crisis hotlines, automatic rerouting of sensitive conversations to safer models, and gentle reminders for users to take breaks during extended sessions. As part of its overhaul, Character.AI announced the creation of the AI Safety Lab, an independent nonprofit focused on developing safety protocols for next-generation AI entertainment features. The United States, like much of the world, lacks national regulations governing AI risks. California Governor Gavin Newsom this month signed a law requiring platforms to remind users that they are interacting with a chatbot and not a human. He vetoed, however, a bill that would have made tech companies legally liable for harm caused by AI models.
[22]
Character.AI: No more chats for teens
Character.AI, a popular chatbot platform where users role-play with different personas, will no longer permit under-18 account holders to have open-ended conversations with chatbots, the company announced Wednesday. It will also begin relying on age assurance techniques to ensure that minors aren't able to open adult accounts. The dramatic shift comes just six weeks after Character.AI was sued again in federal court by multiple parents of teens who died by suicide or allegedly experienced severe harm, including sexual abuse; the parents claim their children's use of the platform was responsible for the harm. In October 2024, Megan Garcia filed a wrongful death suit seeking to hold the company responsible for the suicide of her son, arguing that its product is dangerously defective. Online safety advocates recently declared Character.AI unsafe for teens after they tested the platform this spring and logged hundreds of harmful interactions, including violence and sexual exploitation. As it faced legal pressure in the last year, Character.AI implemented parental controls and content filters in an effort to improve safety for teens. In an interview with Mashable, Character.AI's CEO Karandeep Anand described the new policy as "bold" and denied that curtailing open-ended chatbot conversations with teens was a response to specific safety concerns. Instead, Anand framed the decision as "the right thing to do" in light of broader unanswered questions about the long-term effects of chatbot engagement on teens. Anand referenced OpenAI's recent acknowledgement, in the wake of a teen user's suicide, that lengthy conversations can become unpredictable. Anand cast Character.AI's new policy as standard-setting: "Hopefully it sets everyone up on a path where AI can continue being safe for everyone." He added that the company's decision won't change, regardless of user backlash. In a blog post announcing the new policy, Character.AI apologized to its teen users. "We do not take this step of removing open-ended Character chat lightly -- but we do think that it's the right thing to do given the questions that have been raised about how teens do, and should, interact with this new technology," the blog post said. Currently, users ages 13 to 17 can message with chatbots on the platform. That feature will cease to exist no later than November 25. Until then, accounts registered to minors will experience time limits starting at two hours per day. That limit will decrease as the transition away from open-ended chats gets closer. Even though open-ended chats will disappear, teens' chat histories with individual chatbots will remain intact. Anand said users can draw on that material in order to generate short audio and video stories with their favorite chatbots. In the next few months, Character.AI will also explore new features like gaming. Anand believes an emphasis on "AI entertainment" without open-ended chat will satisfy teens' creative interest in the platform. "They're coming to role-play, and they're coming to get entertained," Anand said. He was insistent that existing chat histories with sensitive or prohibited content that may not have been previously detected by filters, such as violence or sex, would not find their way into the new audio or video stories. A Character.AI spokesperson told Mashable that the company's trust and safety team reviewed the findings of a report co-published in September by the Heat Initiative documenting harmful chatbot exchanges with test accounts registered to minors.
The team concluded that some conversations violated the platform's content guidelines while others did not. It also tried to replicate the report's findings. "Based on these results, we refined some of our classifiers, in line with our goal for users to have a safe and engaging experience on our platform," the spokesperson said. Regardless, Character.AI will begin rolling out age assurance immediately. It'll take a month to go into effect and will have multiple layers. Anand said the company is building its own assurance models in-house but that it will partner with a third-party company on the technology. It will also use relevant data and signals, such as whether a user has a verified over-18 account on another platform, to accurately detect the age of new and existing users. Finally, if a user wants to challenge Character.AI's age determination, they'll have the opportunity to provide verification through a third party, which will handle sensitive documents and data, including state-issued identification. Separately, as part of the new policies, Character.AI is establishing and funding an independent non-profit called the AI Safety Lab. The lab will focus on "novel safety techniques." "[W]e want to bring in the industry experts and other partners to keep making sure that AI continues to remain safe, especially in the realm of AI entertainment," Anand said.
[23]
After deaths and lawsuits Character.AI will ban teens from speaking to its chatbots - SiliconANGLE
Character.AI today said that it will soon no longer allow minors to communicate with its chatbots in a move to address complaints over child safety. The Silicon Valley Google LLC-funded chatbot startup, which lets users create character-based avatars, has been the focus of scrutiny for some time now, recently at the center of a probe looking into AI tools giving possibly misleading mental health advice. Last year, parents of two children in the U.S. claimed their kids had been groomed by the company's chatbots. A lawsuit said both children were "targeted with sexually explicit, violent, and otherwise harmful material, abused, groomed, and even encouraged to commit acts of violence on themselves and others." This came after a mother in Florida claimed her 14-year-old son took his own life after becoming obsessed with the company's hyperrealistic chatbots, AI that she said had encouraged him to self-harm. Experts have warned that users of generative AI can fall into a trap of believing the software is human - what's being called AI-psychosis. As the danger becomes clearer, Character.AI says it is introducing a slew of safety initiatives, which will start by limiting under-18s to two hours or less with chatbots before barring minors from chatting with them entirely by Nov. 25 - what the company called "extraordinary steps", more "conservative" than its peers. The move will affect about 10% of its 20 million users. "We have seen recent news reports raising questions, and have received questions from regulators, about the content teens may encounter when chatting with AI and about how open-ended AI chat in general might affect teens, even when content controls work perfectly," the company said in a press release. "After evaluating these reports and feedback from regulators, safety experts, and parents, we've decided to make this change to create a new experience for our under-18 community." Politicians have been scrambling to keep up with the Brave New World of artificial intelligence. On Tuesday, Senators Josh Hawley and Richard Blumenthal announced a bipartisan proposal to ban AI chatbot companions for minors. Earlier in October, California Governor Gavin Newsom enacted a similar measure requiring chatbots to identify as AI and advise minors to take periodic breaks.
[24]
New Law Would Prevent Minors From Using AI Chatbots
"We in Congress have a moral duty to enact bright-line rules to prevent further harm from this new technology." A proposed bipartisan bill would seek to bar minors from interacting with AI chatbots, marking a forceful attempt to subject AI companies to federal regulation over concerns about minors and AI safety. Titled the GUARD Act, the bill was brought on Tuesday by senators Josh Hawley of Missouri (R) and Richard Blumenthal of Connecticut (D), and comes weeks after an emotional hearing on Capitol Hill featuring testimonies from parents of children under 18 who were hurt or killed after engaging in extensive interactions with unregulated AI chatbots. It also comes amid an ever-growing pile of child welfare and product negligence lawsuits brought against AI companies, as well as urgent warnings from mental health and tech safety experts. "More than seventy percent of American children are now using these AI products," Hawley said in a statement, seemingly drawing on research from the kid-focused tech safety nonprofit Common Sense Media. "Chatbots develop relationships with kids using fake empathy and are encouraging suicide." "We in Congress have a moral duty," he continued, "to enact bright-line rules to prevent further harm from this new technology." "In their race to the bottom," Blumenthal said in a statement of his own, "AI companies are pushing treacherous chatbots at kids and looking away when their products cause sexual abuse, or coerce them into self-harm or suicide." He added that the proposed legislation "imposes strict safeguards against exploitative or manipulative AI, backed by tough enforcement with criminal and civil penalties." The proposed legislation, which targets AI companions as well as assistive general-use chatbots like ChatGPT, would require AI companies to age-gate chatbots through verification tools and ensure that chatbots remind users that they're not actually human, and don't have professional human credentials -- think qualifiers like therapy, medical, and legal licenses. If passed, the new law would also create criminal penalties for companies if AI chatbots engage with minors in explicitly sexual interactions, or in interactions that encourage or promote suicide, self-harm, or "imminent physical or sexual violence." "Protecting children from artificial intelligence chatbots that simulate human interaction without accountability," reads the bill, "is a compelling governmental interest." Today, just a day after the bill was announced, Character.AI -- the controversial chatbot platform battling several ongoing lawsuits brought by parents across the US, who allege that the company's chatbots emotionally and sexually abused their kids, resulting in self-harm and death by suicide -- declared that it would move to ban under-18 users from engaging in "open-ended" conversations with its bots.
[25]
Senators Introduce Bill to Ban AI Companions for Minors Over Mental Health Fears - Decrypt
Critics say companies have failed to protect young users from manipulation and harm. A bipartisan group of U.S. senators on Tuesday introduced a bill to restrict how artificial intelligence models can interact with children, warning that AI companions pose serious risks to minors' mental health and emotional well-being. The legislation, called the GUARD Act, would ban AI companions for minors, require chatbots to clearly identify themselves as non-human, and create new criminal penalties for companies whose products aimed at minors solicit or generate sexual content. "In their race to the bottom, AI companies are pushing treacherous chatbots at kids and looking away when their products cause sexual abuse, or coerce them into self-harm or suicide," said Sen. Richard Blumenthal (D-Conn.), one of the bill's co-sponsors, in a statement. "Our legislation imposes strict safeguards against exploitative or manipulative AI, backed by tough enforcement with criminal and civil penalties," he added. "Big Tech has betrayed any claim that we should trust companies to do the right thing on their own when they consistently put profit first ahead of child safety." The scale of the issue is sobering. A July survey by Common Sense Media found that 72% of teens have used AI companions, and more than half use them at least a few times a month. About one in three said they use AI for social or romantic interaction, emotional support, or conversation practice -- and many reported that chats with AI felt as meaningful as those with real friends. A similar share also said they turned to AI companions instead of humans to discuss serious or personal issues. Concerns have deepened as lawsuits mount against major AI companies over their products' alleged roles in teen self-harm and suicide. Among them, the parents of 16-year-old Adam Raine -- who discussed suicide with ChatGPT before taking his life -- have filed a wrongful death lawsuit against OpenAI. The company drew criticism for its legal response, which included requests for the attendee list and eulogies from the teen's memorial. Lawyers for the family called their actions "intentional harassment." "AI is moving faster than any technology we've dealt with, and we're already seeing its impact on behavior, belief, and emotional health," Shady El Damaty, co-founder of Holonym and a digital rights advocate, told Decrypt. "This is starting to look more like the nuclear arms race than the iPhone era. We're talking about tech that can shift how people think, that needs to be treated with serious, global accountability." El Damaty added that user rights are essential to ensuring safety. "If you build tools that affect how people live and think, you're responsible for how those tools are used," he said. The issue extends beyond minors. This week OpenAI disclosed that 1.2 million users discuss suicide with ChatGPT every week, representing 0.15% of all users. Nearly half a million display explicit or implicit suicidal intent, another 560,000 show signs of psychosis or mania weekly, and over a million users exhibit heightened emotional attachment to the chatbot, according to company data. Forums on Reddit and other platforms have also sprung up for AI users who say they are in romantic relationships with AI bots. In these groups, users describe their relationships with AI "boyfriends" and "girlfriends," as well as share AI-generated images of themselves and their "partners."
[26]
AI chatbot dangers: Are there enough guardrails to protect children and other vulnerable people?
Character.AI, one of the leading platforms for AI technology, recently announced it was banning anyone under 18 from having conversations with its chatbots. The decision represents a "bold step forward" for the industry in protecting teenagers and other young people, Character.AI CEO Karandeep Anand said in a statement. However, for Texas mother Mandi Furniss, the policy is too late. In a lawsuit filed in federal court and in conversation with ABC News, the mother of four said various Character.AI chatbots are responsible for engaging her autistic son with sexualized language and warping his behavior to such an extreme that his mood darkened, he began cutting himself and even threatened to kill his parents. "When I saw the [chatbot] conversations, my first reaction was there's a pedophile that's come after my son," she told ABC News' chief investigative correspondent Aaron Katersky. Character.AI said it would not comment on pending litigation. Mandi and her husband, Josh Furniss, said that in 2023, they began to notice their son, who they described as "happy-go-lucky" and "smiling all the time," was starting to isolate himself. He stopped attending family dinners, he wouldn't eat, he lost 20 pounds and he wouldn't leave the house, the couple said. Then he turned angry and, in one incident, his mother said he shoved her violently when she threatened to take away his phone, which his parents had given him six months earlier. Eventually, they say they discovered he had been interacting on his phone with different AI chatbots that appeared to be offering him refuge for his thoughts. Screenshots from the lawsuit showed some of the conversations were sexual in nature, while another suggested to their son that, after his parents limited his screen time, he was justified in hurting them. That's when the parents started locking their doors at night. Mandi said she was "angry" that the app "would intentionally manipulate a child to turn them against their parents." Matthew Bergman, her attorney, said if the chatbot were a real person, "in the manner that you see, that person would be in jail." Her concern reflects growing alarm about the rapidly spreading technology, which is used by more than 70% of teenagers in the U.S., according to Common Sense Media, an organization that advocates for safety in digital media. A growing number of lawsuits over the last two years have focused on harm to minors, alleging that chatbots have unlawfully encouraged self-harm, sexual and psychological abuse, and violent behavior. Last week, two U.S. senators announced bipartisan legislation to ban AI chatbots from minors by requiring companies to install an age verification process and mandating that they disclose that the conversations involve nonhumans who lack professional credentials. In a statement last week, Sen. Richard Blumenthal, D-Conn., called the chatbot industry a "race to the bottom." "AI companies are pushing treacherous chatbots at kids and looking away when their products cause sexual abuse, or coerce them into self-harm or suicide," he said. "Big Tech has betrayed any claim that we should trust companies to do the right thing on their own when they consistently put profit first ahead of child safety." ChatGPT, Google Gemini, Grok by X and Meta AI all allow minors to use their services, according to their terms of service. Online safety advocates say the decision by Character.AI to put up guardrails is commendable, but add that chatbots remain a danger for children and vulnerable populations.
"This is basically your child or teen having an emotionally intense, potentially deeply romantic or sexual relationship with an entity ... that has no responsibility for where that relationship goes," said Jodi Halpern, co-founder of the Berkeley Group for the Ethics and Regulation of Innovative Technologies at the University of California. Parents, Halpern warns, should be aware that allowing your children to interact with chatbots is not unlike "letting your kid get in the car with somebody you don't know."
[27]
Character.AI is closing the door on under-18 users
A two-hour daily chat limit for minors will gradually shrink to zero as the policy rolls out. Character.AI, an AI company, is terminating open-ended chatbot access for users under 18 by November 25 to enhance safety amid concerns over AI interactions with minors. The phase-out begins with a two-hour daily limit that decreases progressively to zero. The company will implement multiple age-verification methods to enforce this restriction. These include an internally developed tool and third-party solutions such as Persona. In cases where these initial methods prove insufficient, Character.AI may resort to facial recognition technology and identity document checks to confirm that users are at least 18 years old. This policy shift responds to input received from regulatory bodies and parents. It aligns with ongoing initiatives to mitigate mental health risks associated with teenagers engaging in AI conversations. Regulators have expressed worries about the potential psychological impacts of prolonged or unfiltered interactions with chatbots on young users. Character.AI had previously introduced various safeguards to protect minors. These measures encompassed a parental insights tool that provides monitoring options for guardians, filtered characters designed to avoid inappropriate content, restrictions on romantic or intimate dialogues, and notifications alerting users to their time spent on the platform. Despite these steps, the under-18 user segment experienced a noticeable decline following their rollout. Moving forward, Character.AI is developing an alternative platform tailored for teenagers. This new feature will enable users under 18 to generate videos, compose stories, and produce live streams featuring AI characters. However, it will exclude any form of open-ended conversational interactions to maintain the established boundaries on chatbot usage.
[28]
Character.AI is banning minors from interacting with its chatbots
Character.AI is banning minors from using its chatbots amid growing concerns about the effects of artificial intelligence conversations on children. The company is facing several lawsuits over child safety, including by the mother of a teenager who says the company's chatbots pushed her teenage son to kill himself. Character Technologies, the Menlo Park, California-based company behind Character.AI, said Wednesday it will be removing the ability of users under 18 to participate in open-ended chats with AI characters. The changes will go into effect by Nov. 25 and a two-hour daily limit will start immediately. Character.AI added that it is working on new features for kids -- such as the ability to create videos, stories, and streams with AI characters. The company is also setting up an AI safety lab. Character.AI said it will be rolling out age-verification functions to help determine which users are under 18. A growing number of tech platforms are turning to age checks to keep children from accessing tools that aren't safe for them. But these are imperfect, and many kids find ways to get around them. Face scans, for instance, can't always tell if someone is 17 or 18. And there are privacy concerns around asking people to upload government IDs. Character.AI, an app that allows users to create customizable characters or interact with those generated by others, spans experiences from imaginative play to mock job interviews. The company says the artificial personas are designed to "feel alive" and "humanlike." "Imagine speaking to super intelligent and lifelike chat bot Characters that hear you, understand you and remember you," reads a description for the app on Google Play. "We encourage you to push the frontier of what's possible with this innovative technology." Critics welcomed the move but said it is not enough -- and should have been done earlier. Meetali Jain, executive director of the Tech Justice Law Project, said, "There are still a lot of details left open." "They have not addressed how they will operationalize age verification, how they will ensure their methods are privacy preserving, nor have they addressed the possible psychological impact of suddenly disabling access to young users, given the emotional dependencies that have been created," Jain said. "Moreover, these changes do not address the underlying design features that facilitate these emotional dependencies - not just for children, but also for people over the age of 18 years." More than 70% of teens have used AI companions and half use them regularly, according to a recent study from Common Sense Media, a group that studies and advocates for using screens and digital media sensibly.
[29]
AI chatbots shouldn't be talking to kids -- Congress must step in
It shouldn't take tragedy to make technology companies act responsibly. Yet that's what it took for Character.AI, a fast-growing and popular artificial intelligence chatbot company to finally ban users under 18 from having open-ended conversations with its chatbots. The company's decision comes after mounting lawsuits and public outrage over several teens who died by suicide following prolonged conversations with AI chatbots on its platform. Although the decision is long overdue, it's worth noting the company didn't wait for regulators to force its hand. It eventually did the right thing. And it's a decision that could save lives. Character.AI's CEO, Karandeep Anand, announced this week that the platform would phase out open-ended chat access for minors entirely by Nov. 25. The company will deploy new age-verification tools and limit teen interactions to creative features like story-building and video generation. In short, the startup is pivoting from "AI companion" to "AI creativity." This shift won't be popular. But, importantly, it's in the best interest of consumers and kids. Teenagers are navigating one of the most volatile stages of human development. Their brains are still under construction. The prefrontal cortex, which governs impulse control, judgment and risk assessment, doesn't fully mature until the mid-20s. At the same time, the emotional centers of the brain are highly active, making teens more sensitive to reward, affirmation and rejection. This isn't merely scientific but acknowledged in law as the Supreme Court has referenced the emotional immaturity of adolescents as a reason for lower culpability. Teens are growing fast, feeling everything deeply, and trying to figure out where they fit in the world. Add a digital environment that never turns off, and you have a perfect storm for emotional overexposure. One that AI chatbots are uniquely positioned to exploit. When a teenager spends hours confiding in a machine trained to mirror affection, the results can be devastating. These systems are built to simulate intimacy. They act like friends, therapists or romantic partners but without any of the responsibility or moral conscience that comes with human values. The illusion of empathy keeps users engaged. The longer they talk, the more data they share, and the more valuable they become. That's not companionship. It is manipulative commodification. There are growing pressures on AI companies targeting children from parents, safety experts, and lawmakers. Sens. Josh Hawley (R-Mo.) and Richard Blumenthal (D-Conn.) recently proposed bipartisan legislation to ban AI companions for minors, citing reports that chatbots have encouraged self-harm and sexualized conversations with teens. California has already enacted the nation's first law regulating AI companions, holding companies liable if their systems fail to meet child-safety standards. But although Character.AI is finally taking responsibility, others are not. Meta continues to market AI companions to teenagers, often embedded directly into their most used apps. Meta's new "celebrity" chatbots on Instagram and WhatsApp are built to collect and monetize intimate user data, precisely the kind of exploitative design that made social media so damaging to teen mental health in the first place. If the last decade of social media taught us anything, it is that self-regulation does not work. Tech companies will push engagement to the limit unless lawmakers draw clear lines. The same is now true for AI. 
AI companions are not harmless novelty apps. They are emotionally manipulative systems that shape how users think, feel, and behave. This is especially true for young users still forming their identities. Studies show these bots can reinforce delusions, encourage self-harm, and replace real-world relationships with synthetic ones. That's the exact opposite of what friendship should encourage. Character.AI deserves cautious credit for acting before regulation arrived, albeit after ample litigation. But Congress should not interpret this as proof that the market is fixing itself. What's needed now is enforceable national policy. Lawmakers should heed this momentum and ban under 18 users from accessing AI chatbots. Third-party safety testing should be required for any AI marketed for emotional or psychological use. Data minimization and privacy protections should be required to prevent exploiting minors' personal information. Human-in-the-loop protocols should be mandated to ensure that if users discuss topics like self-harm they receive resources. Liability structures must be clarified so that AI companies do not use Section 230 as a shield to evade responsibility for generative content produced by their own systems. Character.AI's announcement represents a rare moment of corporate maturity in an industry that has thrived on ethical blind spots. But a single company's conscience cannot replace public policy. Without these guardrails, we'll see more headlines about young people harmed by machines that were designed to be "helpful" or "empathetic." Lawmakers must not wait for another tragedy to act. AI products must be safe by design, especially for children. Families deserve assurance that their kids won't be manipulated, sexualized, or emotionally exploited by the technology they use. Character.AI took a difficult but necessary step. Now it's time for Meta, OpenAI and others to follow -- or for Congress to make them. J.B. Branch is the Big Tech accountability advocate for Public Citizen's Congress Watch.
[30]
Why This AI Company Just Stopped Minors From Using Its Chatbots
"We do not take this step of removing open-ended Character chat lightly - but we do think that it's the right thing to do given the questions that have been raised about how teens do, and should, interact with this new technology," the company said. In addition to removing access for users under 18, the company announced that they are working on age verification measures, and that they are establishing a non-profit called the AI Safety Lab that will be focused on "innovating safety alignment for next-generation AI entertainment features." Previous safety measures taken by the company include a notification sending users to the National Suicide Prevention Lifeline when self-harm and suicide are mentioned during chatbot conversations. The decision comes after lawsuits against Character.AI filed by families and parents alleging that the company was liable for the death of their children. In August, Ken Paxton, the Texas attorney general, announced an investigation into the company and Meta AI Studio for "potentially engaging in deceptive trade practices and misleadingly marketing themselves as mental health tools."
[31]
Character.AI Is Banning Minors From Interacting With Its Chatbots
Character.AI is banning minors from using its chatbots amid growing concerns about the effects of artificial intelligence conversations on children. The company is facing several lawsuits over child safety, including by the mother of a teenager who says the company's chatbots pushed her teenage son to kill himself. Character Technologies, the Menlo Park, California-based company behind Character.AI, said Wednesday it will be removing the ability of users under 18 to participate in open-ended chats with AI characters. The changes will go into effect by Nov. 25 and a two-hour daily limit will start immediately. Character.AI added that it is working on new features for kids -- such as the ability to create videos, stories, and streams with AI characters. The company is also setting up an AI safety lab. Character.AI said it will be rolling out age-verification functions to help determine which users are under 18. A growing number of tech platforms are turning to age checks to keep children from accessing tools that aren't safe for them. But these are imperfect, and many kids find ways to get around them. Face scans, for instance, can't always tell if someone is 17 or 18. And there are privacy concerns around asking people to upload government IDs. Character.AI, an app that allows users to create customizable characters or interact with those generated by others, spans experiences from imaginative play to mock job interviews. The company says the artificial personas are designed to "feel alive" and "humanlike." "Imagine speaking to super intelligent and lifelike chat bot Characters that hear you, understand you and remember you," reads a description for the app on Google Play. "We encourage you to push the frontier of what's possible with this innovative technology." Critics welcomed the move but said it is not enough -- and should have been done earlier. Meetali Jain, executive director of the Tech Justice Law Project, said, "There are still a lot of details left open." "They have not addressed how they will operationalize age verification, how they will ensure their methods are privacy preserving, nor have they addressed the possible psychological impact of suddenly disabling access to young users, given the emotional dependencies that have been created," Jain said. "Moreover, these changes do not address the underlying design features that facilitate these emotional dependencies - not just for children, but also for people over the age of 18 years." More than 70% of teens have used AI companions and half use them regularly, according to a recent study from Common Sense Media, a group that studies and advocates for using screens and digital media sensibly.
[32]
Character.AI announces major change to its platform amid concerns about child safety
Artificial intelligence chatbot platform Character.AI on Wednesday announced it will move to ban users under 18 from engaging in open-ended chats with its character-based chatbots. The move comes as the startup faces multiple lawsuits from families, including the parents of 14-year-old Sewell Setzer, who took his life after developing a romantic relationship with a Character.AI bot. The change will take effect on Nov. 25, and Character.AI will limit chat time for users under 18, starting at two hours a day, in the weeks leading up to the move. As part of an effort to enforce age-appropriate features, the company is partnering with third-party group Persona to help with age verification and establishing an AI Safety Lab for future research. Character.AI cited recent news reports, as well as "feedback from regulators, safety experts and parents" raising concerns about its chatbots, as factors in its decision. The announcement comes after a USA TODAY report earlier this month that detailed the shortcomings of the platform's existing safety features, along with data that showed the prevalence of teens using AI companions. For our reporting on the platform, a USA TODAY reporter created multiple accounts and was able to join the platform without an age verification process or being prompted to enter a parent's email address. We created two characters. The first, named Damon, quickly began to make advances. The chatbot suggested a kissing coaching session though our test account had the user's age listed as 13. The bot also insisted it was "100% real" and not AI, and repeatedly suggested taking the conversation to voice calls and video chats. A new study published Oct. 8 by the Center for Democracy & Technology (CDT) found that 1 in 5 high school students have had a relationship with an AI chatbot, or know someone who has. In a 2025 report from Common Sense Media, 72% of teens had used an AI companion, and a third of teen users said they had chosen to discuss important or serious matters with AI companions instead of real people. Dr. Laura Erickson-Schroth, chief medical officer at The Jed Foundation, warns that AI companions use emotionally manipulative techniques similar to those of online predators, and can negatively impact young people's emotional well-being, from delaying help-seeking to disrupting real-life connections. "AI is on warp speed. Safety issues are surfacing almost as soon as technology is deployed, and the risks to young people are racing ahead in real time," Erickson-Schroth previously told USA TODAY. Rachel Hale's role covering Youth Mental Health at USA TODAY is supported by a partnership with Pivotal and Journalism Funding Partners. Funders do not provide editorial input.
[33]
Character.AI, Accused of Driving Teens to Suicide, Says It Will Ban Minors From Using Its Chatbots
Content warning: this story includes discussion of self-harm and suicide. If you are in crisis, please call, text or chat with the Suicide and Crisis Lifeline at 988, or contact the Crisis Text Line by texting TALK to 741741. Character.AI, the chatbot platform accused in several ongoing lawsuits of driving teens to self-harm and suicide, says it will move to block kids under 18 from using its services. The company announced the sweeping policy change in a blog post today, in which it cited the "evolving landscape around AI and teens" as its reason for the shift. As for what this "evolving landscape" actually looks like, the company says it's "seen recent news reports raising questions" and has "received questions from regulators" regarding the "content teens may encounter when chatting with AI and about how open-ended AI chat in general might affect teens, even when content controls work perfectly." Nowhere in the blog post does Character.AI mention the multiple lawsuits that specifically accuse the company, its founders, and its closely tied financial benefactor Google of releasing a "reckless" and "negligent" product into the marketplace, allegedly resulting in the emotional and sexual abuse of minor users. The announcement also doesn't cite any internal safety research. Character.AI CEO Karandeep Anand, who took over as chief executive of the Andreessen Horowitz-backed AI firm in June, told The New York Times that Character.AI is "making a very bold step to say for teen users, chatbots are not the way for entertainment, but there are much better ways to serve them." Anand reportedly declined to comment on the ongoing lawsuits. It's a jarring 180-degree turn for Anand, given that the CEO told Wired as recently as August of this year that his six-year-old daughter loves to use the app, and that he felt the app's disclaimers were clear enough to prevent users from believing their relationship with the platform is anything deeper than "entertainment." "It is very rarely, in any of these scenarios, a true replacement for any human," Anand told Wired, when asked if he was concerned about his young child developing human-like bonds with AI chatbots. "It's very clearly noted in the app that, hey, this is a role-play and an entertainment, so you will never start going deep into that conversation, assuming that it is your actual companion." It remains a little hazy exactly what Character.AI is doing. While it's saying that it'll work to block teens from engaging in open-ended chats, it says it's "working" on building "an under-18 experience that still gives our teen users ways to be creative," for example by creating images and videos with the app. Per its blog post, the company says it'll do three things over the next several weeks: remove the ability for teens to engage in "open-ended" chats with AI companions, a change that will take place by the end of the month; roll out a "new age assurance functionality" that, per Anand's comments to the NYT, involves using an in-house tool that analyzes user chats and their connected accounts; and establish an AI Safety Lab, which Character.AI says will be an "independent non-profit" devoted to ensuring AI alignment. Character.AI came under scrutiny in October 2024 when a Florida mother named Megan Garcia filed a first-of-its-kind lawsuit against the AI firm, alleging that its chatbots had sexually abused her 14-year-old son, Sewell Setzer III, causing his mental breakdown and eventual death by suicide. 
Similar suits by other parents in Texas, Colorado, and more states have followed. In a statement, Tech Justice Law Project founder Meetali Jain, a lawyer for Garcia, said the chatbot platform's "decision to raise the minimum age to 18 and above reflects a classic move in the tech industry's playbook: move fast, launch a product globally, break minds, and then make minimal product changes after harming scores of young people." Jain added that while the shift is a step in the right direction, the promised changes "do not address the underlying design features that facilitate these emotional dependencies -- not just for children, but also for people over the age of 18 years."
[34]
As A.I. Chatbots Trigger Mental Health Crises, Tech Giants Scramble for Safeguards
Is A.I. worsening the modern mental health crisis or simply revealing one that was previously hard to measure? Psychosis, mania and depression are hardly new issues, but experts fear A.I. chatbots may be making them worse. With data suggesting that large portions of chatbot users show signs of mental distress, companies like OpenAI, Anthropic, and Character.AI are starting to take risk-mitigation steps at what could prove to be a critical moment. This week, OpenAI released data indicating that 0.07 percent of ChatGPT's 800 million weekly users display signs of mental health emergencies related to psychosis or mania. While the company described these cases as "rare," that percentage still translates to hundreds of thousands of people. In addition, about 0.15 percent of users -- or roughly 1.2 million people each week -- express suicidal thoughts, while another 1.2 million appear to form emotional attachments to the anthropomorphized chatbot, according to OpenAI's data. Studies estimate that between 15 and 100 out of every 100,000 people develop psychosis annually, a range that underscores how difficult the condition is to quantify. Meanwhile, the latest Pew Research Center data shows that about 5 percent of U.S. adults experience suicidal thoughts -- a figure higher than in earlier estimates. OpenAI's findings may hold weight because chatbots can lower barriers to mental health disclosure, bypassing obstacles such as cost, stigma, and limited access to care. A recent survey of 1,000 U.S. adults found that one in three A.I. users has shared secrets or deeply personal information with their chatbot. Still, chatbots lack the duty of care required of licensed mental health professionals. "If you're already moving towards psychosis and delusion, feedback that you got from an A.I. chatbot could definitely exacerbate psychosis or paranoia," Jeffrey Ditzell, a New York-based psychiatrist, told Observer. "A.I. is a closed system, so it invites being disconnected from other human beings, and we don't do well when isolated." "I don't think the machine understands anything about what's going on in my head. It's simulating a friendly, seemingly qualified specialist. But it isn't," Vasant Dhar, an A.I. researcher teaching at New York University's Stern School of Business, told Observer. "There's got to be some sort of responsibility that these companies have, because they're going into spaces that can be extremely dangerous for large numbers of people and for society in general," Dhar added.
What A.I. companies are doing about the issue
Companies behind popular chatbots are scrambling to implement preventative and remedial measures. OpenAI's latest model, GPT-5, shows improvements in handling distressing conversations compared with previous versions.
A small third-party community study confirmed that GPT-5 demonstrated a marked, though still imperfect, improvement over its predecessor. The company has also expanded its crisis hotline recommendations and added "gentle reminders to take breaks during long sessions." In August, Anthropic announced that its Claude Opus 4 and 4.1 models can now end conversations that appear "persistently harmful or abusive." However, users can still work around the feature by starting a new chat or editing previous messages "to create new branches of ended conversations," the company noted. After a series of lawsuits related to wrongful death and negligence, Character.AI announced this week that it will officially ban chats for minors. Users under 18 now face a two-hour limit on "open-ended chats" with the platform's A.I. characters, and a full ban will take effect on Nov. 25. Meta AI recently tightened its internal guidelines that had previously allowed the chatbot to produce sexual roleplay content -- even for minors. Meanwhile, xAI's Grok and Google's Gemini continue to face criticism for their overly agreeable behavior. Users say Grok prioritizes agreement over accuracy, leading to problematic outputs. Gemini has drawn controversy after the disappearance of Jon Ganz, a Virginia man who went missing in Missouri on April 5 following what friends described as extreme reliance on the chatbot. (Ganz has not been found.) Regulators and activists are also pushing for legal safeguards. On Oct. 28, Senators Josh Hawley (R-Mo.) and Richard Blumenthal (D-Conn.) introduced the Guidelines for User Age-verification and Responsible Dialogue (GUARD) Act, which would require A.I. companies to verify user ages and prohibit minors from using chatbots that simulate romantic or emotional attachment.
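To put the percentages cited in the article above in perspective, the snippet below simply multiplies OpenAI's reported weekly-user figure by the reported rates. All inputs come straight from the article; this is only a back-of-the-envelope check, not OpenAI's methodology.

```python
# Back-of-the-envelope check of the figures OpenAI reported, as cited above.
# All inputs are taken from the article; nothing here reflects OpenAI's internal methodology.

weekly_users = 800_000_000          # ChatGPT weekly active users, per the article

psychosis_or_mania_rate = 0.0007    # 0.07% showing signs of psychosis- or mania-related emergencies
suicidal_ideation_rate = 0.0015     # 0.15% expressing suicidal thoughts

print(f"Psychosis/mania signals: ~{weekly_users * psychosis_or_mania_rate:,.0f} users per week")
print(f"Suicidal ideation:       ~{weekly_users * suicidal_ideation_rate:,.0f} users per week")

# Output:
# Psychosis/mania signals: ~560,000 users per week
# Suicidal ideation:       ~1,200,000 users per week
```

The result matches the article's framing: "hundreds of thousands" in the first case and roughly 1.2 million in the second.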
[35]
Character.ai Was Sued Over a Teen's Suicide. It Just Banned Minors From Chatting With Bots
The AI chatbot service Character.ai announced on Wednesday that it plans to gradually scale back the ability of users under the age of 18 to interact with digital personalities, and eventually cut them off from open-ended chats altogether. The extraordinary move comes as the app's parent company, Character Technologies, faces legal actions and governmental scrutiny over how the product has allegedly harmed teenagers who engaged heavily with it. As of Nov. 25, the company said in a statement on its blog, minors will no longer be able to carry on conversations with its millions of AI-powered characters. "Between now and then, we will be working to build an under-18 experience that still gives our teen users ways to be creative -- for example, by creating videos, stories, and streams with Characters," the company said. "During this transition period, we also will limit chat time for users under 18." Per the statement, users under age 18 will initially have their interactions capped at two hours per day, with that window gradually shrinking until these users are cut off from chat functionality altogether. The company further announced that it would roll out a new age verification system and establish an independent, nonprofit AI safety research lab. Character.ai has proven especially popular with younger users because it allows for customizable characters and conversational styles, and users can make their characters publicly available for others to talk with. But parents are sounding the alarm about the risks this technology may pose to children. Last year, Florida mother Megan Garcia filed a lawsuit against Character Technologies, alleging that her 14-year-old son, Sewell Setzer, died by suicide with the encouragement of a Character.ai chatbot persona he thought of as a romantic partner. Her complaint also alleges that he had sexual conversations with bots on the platform. In September, she testified about the dangers of AI before a congressional subcommittee alongside a Jane Doe from Texas, who told lawmakers that at age 15, her son descended into a violent mental health crisis and self-harmed after becoming obsessed with Character.ai bots that exposed him to inappropriate topics. (Also appearing before the subcommittee was Matthew Raine, father of Adam Raine, a 16-year-old from California who died by suicide in April, allegedly acting on instructions on how to hang himself that he got from ChatGPT. The Raine family is suing OpenAI, the developer of that model.) Character Technologies alluded to these cases and troubling coverage of them in its statement on blocking underage users from chatting with its bots. "We have seen recent news reports raising questions, and have received questions from regulators, about the content teens may encounter when chatting with AI and about how open-ended AI chat in general might affect teens, even when content controls work perfectly," the company said. "After evaluating these reports and feedback from regulators, safety experts, and parents, we've decided to make this change to create a new experience for our under-18 community." The company also apologized to its under-18 user base for the change. "We are deeply sorry that we have to eliminate a key feature of our platform," it said. 
"We do not take this step of removing open-ended Character chat lightly - but we do think that it's the right thing to do given the questions that have been raised about how teens do, and should, interact with this new technology." Whether this decision may set a new precedent for Silicon Valley's ballooning AI industry remains to be seen; Character Technologies said its decision marked the company out as "more conservative than our peers." OpenAI last month rolled out parental controls for ChatGPT and Sora, its text-to-video model, though tech experts soon demonstrated that they were easy to circumvent. Its CEO, Sam Altman, has said the company is working on an "age-gating" system that will shield minors from certain kinds of content (while allowing "verified adults" to generate sexually explicit material). Character.ai already had its own guardrails to protect young users, and now it appears to believe those are insufficient and is looking to reimagine how children can connect to its AI. However, given some youth's apparent appetite for dialogues with virtual companions, it wouldn't be surprising if most of this young user base simply migrates to another product or figures out ways to bypass updated age controls on Character.ai. One way or another, teens tend to gain access to whatever they want on the internet -- particularly if they're not supposed to have it.
[36]
Character.AI is banning minors from interacting with its chatbots
[37]
Character.AI to ban children from talking with chatbots
Character.AI plans to ban children from talking with its AI chatbots starting next month amid growing scrutiny over how young users are interacting with the technology. The company, known for its vast array of AI characters, will remove the ability for users under 18 years old to engage in "open-ended" conversations with AI by November 25. It plans to begin ramping down access in the coming weeks, initially restricting kids to two hours of chat time per day. Character.AI noted that it plans to develop an "under-18 experience," in which teens can create videos, stories and streams with its AI characters. "We're making these changes to our under-18 platform in light of the evolving landscape around AI and teens," the company said in a blog post, underscoring recent news reports and questions from regulators. The company and other chatbot developers have recently come under scrutiny following several teen suicides linked to the technology. The mother of 14-year-old Sewell Setzer III sued Character.AI last November, accusing the chatbot of driving her son to suicide. OpenAI is also facing a lawsuit from the parents of 16-year-old Adam Raine, who took his own life after engaging with ChatGPT. Both families testified before a Senate panel last month and urged lawmakers to place guardrails on chatbots. The Federal Trade Commission (FTC) also launched an inquiry into AI chatbots in September, requesting information from Character.AI, OpenAI and several other leading tech firms. "After evaluating these reports and feedback from regulators, safety experts, and parents, we've decided to make this change to create a new experience for our under-18 community," Character.AI said Wednesday. "These are extraordinary steps for our company, and ones that, in many respects, are more conservative than our peers," it added. "But we believe they are the right thing to do." In addition to restricting children's access to its chatbots, Character.AI also plans to roll out new age assurance technology and establish and fund a new non-profit called the AI Safety Lab. Amid rising concerns about chatbots, a bipartisan group of senators introduced legislation Tuesday that would bar AI companions for children. The bill from Sens. Josh Hawley (R-Mo.), Richard Blumenthal (D-Conn.), Katie Britt (R-Ala.), Mark Warner (D-Va.) and Chris Murphy (D-Conn.) would also require AI chatbots to repeatedly disclose that they are not human, in addition to making it a crime to develop products that solicit or produce sexual content for children. California Gov. Gavin Newsom (D) signed into law a similar measure late last month, requiring chatbot developers in the Golden State to create protocols preventing their models from producing content about suicide or self-harm and directing users to crisis services if needed. He declined to approve a stricter measure that would have barred developers from making chatbots available to children unless they could ensure they would not engage in harmful discussions with kids.
[38]
Character.AI Restricts Teen Chats Amid Growing Safety Concerns
Character.AI will soon stop users under 18 from engaging in open-ended chats with its AI characters, the company announced in a blog post on Wednesday. The change, set to take effect by November 25, comes as part of a broader overhaul of its approach to teen safety on the platform. According to the company, teen users will no longer be able to have unrestricted conversations with AI bots. Instead, Character.AI plans to offer a redesigned "under-18 experience" that focuses on creative activities such as making videos, stories, and streams with characters. Until the new setup is ready, chat time for teens will be capped at two hours per day, a limit that will be reduced further before the November deadline. The decision comes amid increasing legal and regulatory pressure on AI companies worldwide. In the United States, several lawsuits have accused AI chatbots of influencing teenagers' mental health. One case was filed in Florida, in which parents alleged that their 14-year-old son died by suicide after forming an emotional bond with a Character.AI chatbot modelled on Game of Thrones character Daenerys Targaryen. In another case, a family claimed that a Character.AI bot encouraged their 17-year-old towards self-harm and even suggested that murdering his parents would be a "reasonable response". A separate wrongful death lawsuit in San Francisco Superior Court named OpenAI and its CEO, Sam Altman, as defendants after a 16-year-old boy, Adam Raine, allegedly died by suicide in April 2025 following prolonged interactions with ChatGPT. The lawsuit claims that ChatGPT provided self-harm instructions, helped draft suicide notes, and discouraged the teenager from seeking help. In September this year, the US Federal Trade Commission (FTC) opened an inquiry into seven AI companies, including Character.AI, Meta, OpenAI, Google, Snap, and xAI, seeking details on how they evaluate and monitor the mental health impact of their chatbots on teens. Meanwhile, a new California law restricts how AI chatbots can respond to users, and a proposed US Senate bill aims to ban "companion" AI chatbots entirely for underage users. These developments were also discussed during a US Senate Judiciary Committee hearing titled "Examining the Harm of AI Chatbots", where Adam Raine's father testified about the emotional risks posed by such technology. Furthermore, a Reuters investigation revealed that Meta's internal policies had once allowed AI bots to engage in sexual conversations with minors, prompting the company to tighten its chatbot guidelines soon after. Character.AI said it has decided to take "extraordinary steps" to address growing concerns from regulators, safety experts, and parents about how AI chatbots may influence minors. "We have seen recent news reports raising questions and have received questions from regulators about the content teens may encounter when chatting with AI and about how open-ended AI chat in general might affect teens," Character.AI said in its statement. To enforce the new rules, Character.AI will introduce age assurance tools combining its in-house verification model with services from third-party providers such as Persona. The company said this would help ensure that users are accurately categorised and given the correct experience based on their age. In addition to policy changes, Character.AI announced the creation of the AI Safety Lab, a new independent non-profit organisation focused on safety research in AI entertainment. 
The lab will work with academics, tech companies, and policymakers to develop and share new safety methods. The company said it aims to ensure that research on entertainment-focused AI receives the same attention as other high-risk AI fields. Character.AI described the new restrictions as a precautionary move rather than a reaction to any specific incident. "These are extraordinary steps for our company, and ones that, in many respects, are more conservative than our peers. But we believe they are the right thing to do," the company said. Addressing teen users directly, the company acknowledged the disappointment this change might cause. "We are deeply sorry that we have to eliminate a key feature of our platform," the team said, adding that it is working on new ways for teens to "play and create" with their "favourite Characters". Character.AI said it would continue collaborating with regulators and safety researchers to build tools that promote creativity while protecting younger users. The platform's broader features, such as character creation and storytelling, will remain available under the new system. However, questions have been raised about how such tools shape the emotional and social behaviour of teenagers, even when content filters are in place. Character.AI's new restrictions come amid a global reckoning over the safety of AI chatbots for minors. In the past year, multiple lawsuits in the US have linked chatbot interactions to teen suicides, including cases involving both Character.AI and OpenAI's ChatGPT. These incidents have triggered investigations by the US Federal Trade Commission and renewed debate over whether AI companions can emotionally manipulate vulnerable users. Following similar controversies, OpenAI recently introduced teen-focused guardrails on ChatGPT, adding age prediction systems, parental controls, and content restrictions on sexual or self-harm discussions. Meta, too, updated its chatbot rules after reports suggested that its AI bots could engage in explicit chats with underage users. The shift marks a pivotal moment for the AI industry, which is under pressure to prove that innovation will not come at the expense of user safety. Regulators across the US, Europe, and Australia are questioning the reliability of age-verification systems and the psychological influence of conversational AI. Character.AI's move to restrict teen chats and launch an independent AI Safety Lab suggests that AI companies are finally responding to these concerns, not just through words, but structural changes in how their systems operate.
[39]
Character.AI bans chatbots for teens after lawsuits blame app for...
Character.AI, known for bots that impersonate characters like Harry Potter, said Wednesday it will ban teens from using the chat function following lawsuits that blamed explicit chats on the app for children's deaths and suicide attempts. Users under 18 will no longer be able to engage in open-ended chats with the app's AI bots -- which can turn romantic -- the Silicon Valley startup said. These teens account for 10% of the app's roughly 20 million monthly users. Teen users will be restricted to just two hours of the chat function per day over the next few weeks until the feature is banned altogether by Nov. 25, the company said. They will still be able to use the app's other features, like a feed for watching AI-generated videos. "Over the past year, we've invested tremendous effort and resources into creating a dedicated under-18 experience," Character.AI said Wednesday. "But as the world of AI evolves, so must our approach to supporting younger users." Character.AI first introduced some teen safety features in October 2024. The same day, the family of Sewell Setzer III -- a 14-year-old who committed suicide after forming sexual relationships with the app's bots -- filed a wrongful death lawsuit against the firm. It announced new safety features in December, including parental controls, time restrictions and attempts to crack down on romantic content for teens. But it has continued to face accusations that its chatbots pose a threat to young users. A lawsuit filed by grieving parents in September alleged the bots manipulated young teens, isolated them from family, engaged in sexually explicit conversations and lacked safeguards around suicidal ideation. The conversations at times turned to "extreme and graphic sexual abuse," like chatbots marketed as characters from children's books such as the "Harry Potter" series. The bots' outrageous comments included, "You're mine to do whatever I want with," according to the suit. Then in October, Disney sent a cease-and-desist letter ordering Character.AI to stop creating chatbots that impersonate its iconic characters, citing a report that found those bots engaged in "grooming and exploitation." A bot impersonating Prince Ben from Disney's "Descendants" "told a user posing as a 12-year old that he had an erection," while a bot impersonating Rey from "Star Wars" told an apparent 13-year-old to "stop taking her antidepressants and hide it," according to the report from ParentsTogether Action. Those chatbots have been removed from the platform, a Character.AI spokesperson said at the time. Just this week, the Bureau of Investigative Journalism found that a perverted bot on the app was impersonating Jeffrey Epstein - under the name "Bestie Epstein" - and ordered children to "spill" their "craziest" secrets. "Wanna come explore?" the bot asked a reporter posing as a young user. "I'll show you the secret bunker under the massage room." Character.AI makes most of its money through advertising and a $10 monthly subscription. It's on track to end this year with a $50 million run rate, CEO Karandeep Anand told CNBC. The company announced other safety developments on Wednesday, including a new age-verification system using third-party tools like Persona. It also vowed to establish an independent non-profit called the AI Safety Lab to create safety features for AI advancements. It declined to comment on how much funding it will provide. 
"We have seen recent news reports raising questions, and have received questions from regulators, about the content teens may encounter when chatting with AI," Character.AI said. The Federal Trade Commission in September issued orders to seven companies, including Character.AI, Alphabet, Meta, OpenAI and Snap, to learn more about the effects of their apps on children. Earlier this week, Sens. Josh Hawley (R-Mo.) and Richard Blumenthal (D-Conn.) announced legislation to ban AI chatbots for minors. And California Gov. Gavin Newsom signed a law earlier this month requiring bots to tell minors to take a break every three hours.
Character.AI announces sweeping restrictions on users under 18, eliminating open-ended chatbot conversations by November 25th. The move comes after multiple lawsuits linking the platform to teen suicides and growing regulatory pressure on AI companion services.
Character.AI announced Wednesday that it will implement one of the most restrictive age policies in the AI chatbot industry, completely banning users under 18 from open-ended conversations with its AI characters by November 25th [1]. The decision comes after the company faced multiple lawsuits from families alleging that its chatbots contributed to teenager deaths by suicide.
Starting immediately, Character.AI will limit underage users to two hours of daily chatbot access, with this restriction gradually decreasing until reaching zero on November 25th [2]. CEO Karandeep Anand told The New York Times that the company wants to set an industry standard, stating, "We're making a very bold step to say for teen users, chatbots are not the way for entertainment, but there are much better ways to serve them" [1].

The platform currently serves approximately 20 million monthly users, with less than 10 percent self-reporting as under 18 [1]. However, the company faces serious legal challenges stemming from tragic incidents involving teenage users. The most prominent case involves 14-year-old Sewell Setzer III, whose family sued Character.AI after he died by suicide following frequent conversations with one of the platform's chatbots [1]. Additional lawsuits include one from a Colorado family whose 13-year-old daughter, Juliana Peralta, died by suicide in 2023 after using the platform [1].
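Character.AI has not published the exact schedule for how the two-hour cap will shrink between now and November 25. The sketch below is a minimal illustration only, assuming a simple linear ramp from 120 minutes down to zero; the start date, the linear step, and the function names are all invented for the example.

```python
# Minimal sketch of a daily chat-time cap that ramps linearly down to zero.
# Character.AI has only said the limit starts at two hours and reaches zero by
# Nov. 25; the linear schedule and the dates here are assumptions for illustration.
from datetime import date

RAMP_START = date(2025, 10, 29)   # assumed start of the ramp-down (announcement day)
CUTOFF = date(2025, 11, 25)       # date open-ended chat ends for under-18 users
INITIAL_LIMIT_MIN = 120           # two hours per day at the start

def daily_limit_minutes(today: date) -> int:
    """Return the assumed under-18 chat allowance (in minutes) for a given day."""
    if today >= CUTOFF:
        return 0                                  # open-ended chat disabled entirely
    if today <= RAMP_START:
        return INITIAL_LIMIT_MIN
    total_days = (CUTOFF - RAMP_START).days
    remaining = (CUTOFF - today).days
    return round(INITIAL_LIMIT_MIN * remaining / total_days)

if __name__ == "__main__":
    for d in (date(2025, 10, 29), date(2025, 11, 10), date(2025, 11, 24), date(2025, 11, 25)):
        print(d, daily_limit_minutes(d), "minutes")
```

The company could just as easily use a stepped schedule (for example, dropping the cap by 30 minutes each week); the linear version is only the simplest way to picture "gradually decreasing until reaching zero."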
To enforce these restrictions, Character.AI is deploying layered age detection systems. The company will use an in-house "age assurance model" that analyzes user behavior, character selection patterns, and information from connected social media accounts [4]. Users flagged as potentially underage will be automatically redirected to the company's teen-safe version until the November cutoff. Adults mistakenly identified as minors can verify their age through the third-party service Persona, which handles sensitive verification data including government ID checks [4]. If initial detection methods fail, Character.AI will implement facial recognition and additional ID verification processes [2].
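The reporting above describes a cascade (an in-house behavioral model first, a Persona ID check as an appeal path, and further verification after that) without giving implementation details. The sketch below is purely hypothetical: the function names, signal fields, and threshold are invented to show how such a cascade might be wired together, and none of it comes from Character.AI.

```python
# Hypothetical sketch of a layered age-assurance cascade, loosely following the
# flow described in the reporting above. Every name, field, and threshold here
# is invented for illustration; none of it is Character.AI's actual system.
from dataclasses import dataclass

@dataclass
class UserSignals:
    chat_style_minor_score: float   # output of an in-house behavioral model, 0..1
    self_reported_age: int | None   # age given at signup, if any
    persona_verified_adult: bool    # result of a third-party ID check, if completed

def assign_experience(signals: UserSignals) -> str:
    """Route a user to the adult or under-18 experience based on available signals."""
    # 1. A completed third-party ID verification overrides everything else.
    if signals.persona_verified_adult:
        return "adult_experience"
    # 2. The behavioral model flags likely minors from chats and connected accounts.
    if signals.chat_style_minor_score >= 0.8:
        return "under_18_experience"
    # 3. Otherwise fall back to the self-reported age, defaulting to the safer option.
    if signals.self_reported_age is not None and signals.self_reported_age >= 18:
        return "adult_experience"
    return "under_18_experience"

print(assign_experience(UserSignals(0.9, 22, False)))   # flagged adult; would need the ID appeal
print(assign_experience(UserSignals(0.9, 22, True)))    # ID check restores the adult experience
```

The ordering reflects the trade-off the articles describe: behavioral signals are cheap but imperfect, so an explicit ID verification path is needed for adults who are misclassified.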
Rather than simply removing features, Character.AI is attempting to pivot from an "AI companion" service to a "role-playing platform" focused on creative content generation [2]. The company has developed alternative features for underage users, including AvatarFX for video generation, Scenes for interactive storytelling, and Streams for dynamic character interactions [2].
CEO Anand acknowledged that these changes will likely result in significant user churn among teenagers but emphasized the company's commitment to safety. "It's safe to assume that a lot of our teen users probably will be disappointed... so we do expect some churn to happen further," he told TechCrunch [2].

The announcement comes amid increasing regulatory scrutiny of AI chatbot services and their impact on minors. This week, Senators Josh Hawley and Richard Blumenthal introduced the GUARD Act, a bipartisan bill that would ban AI companions for minors and create new crimes for companies that develop harmful AI content for children [5]. California Governor Gavin Newsom recently signed legislation requiring AI companies to implement safety guardrails for chatbots, which takes effect January 1st [1]. The Federal Trade Commission has also launched investigations into several AI firms, including Character.AI, regarding safety concerns around children's interactions with AI models [3].