Curated by THEOUTPOST
On Wed, 26 Mar, 12:04 AM UTC
7 Sources
[1]
Character AI is adding parental supervision tools to improve teen safety | TechCrunch
Character AI, the startup that lets users create and chat with different characters, is today rolling out new parental insights features to increase user safety. The move follows a string of lawsuits and criticism against the company over failing to protect children from harm. The new features primarily give a summary of teens' activity on the app to parents or guardians through a weekly email. In the email, the company shows the daily average time teens spent on the app and on the web, the time spent on each character, and the top characters teens interacted with during the week. The startup said this data will give parents insight into their teens' engagement habits on the platform. It specified that parents don't get direct access to users' chats. Character AI took a shot at other platforms in a press release by saying that it rolled out this feature much faster than other companies did. The startup rolled out other measures around user safety last year, including a specific model for users under 18, time-spent notifications, and new disclaimers to remind users they are chatting with AI-powered characters. The company also blocked sensitive content for input and output by creating new classifiers for teens. Earlier this year, the startup filed a motion to dismiss, citing the First Amendment, in a lawsuit alleging it played a part in a teen's suicide.
[2]
Character.ai can now tell parents which bots their kid is talking to
Adi Robertson is a senior tech and policy editor focused on VR, online platforms, and free expression. Adi has covered video games, biohacking, and more for The Verge since 2011. Chatbot service Character.AI is adding a new 'Parental Insights' feature, which lets teens send a weekly report of their chatbot usage to a parent's email address. As outlined in an announcement from the company, this report includes the daily average time spent across web and mobile, the characters a user most frequently interacted with, and how much time they spent talking to each character. It's part of a series of updates designed to address concerns about minors spending too much time with chatbots and encountering inappropriate content during chats. The report, which doesn't require parents to have an account, is optional and can be set up by minor users in Character.AI's settings. The company notes that it's an overview of teens' activity, not a complete log -- so the contents of chatbot conversations won't be shared. The platform bars kids under 13 years of age in most locations and under 16 in Europe. Character.AI has been introducing new features for underage users since last year, coinciding with concerns -- and even legal complaints -- about its service. The platform, which is popular with teenagers, allows users to create and customize chatbots that they interact with or share publicly. But multiple lawsuits allege that these bots have offered inappropriately sexualized content or material that promoted self-harm. It also reportedly received warnings from Apple and Google (which hired Character.AI's founders last year) about its app's content. The company says that its system has since been redesigned; among other changes, it moved under-18 users to a model trained to avoid "sensitive" output and added more prominent notifications reminding users that the bots weren't real people.
But with enthusiasm for AI regulation and child safety laws booming, this likely won't be the last step the company is called to take.
[3]
Following Teen Suicide, Character.AI Gives Parents Info on Their Kids' Activity
A new "Parental Insights" feature will send parents or guardians a weekly email summarizing the activity of teens under 18. It will show the average time spent on the mobile app and web-based platform, the top characters they talk to, and how much time they spend chatting. "This does not include a user's chat content," Character.ai says. Teens must add a parent or guardian's email address for them to get that weekly report. "This feature encourages parents to have an open dialogue with their children about how they use the app," Erin Teague, Character.AI's chief product officer, tells Axios. The feature has been in the works for several months; in December, Character.AI pledged to have parental controls in place in Q1 2025. Over the past year, it's also rolled out "a separate model for our teen users [and] improvements to our detection and intervention systems for human behavior and model responses," the company says. These changes, however, came after Megan Garcia sued Character.AI in October. Her son, Sewell Setzer III, died from a self-inflicted gunshot wound -- allegedly at the behest of one of the company's chatbots. "We are heartbroken by the tragic loss of one of our users and want to express our deepest condolences to the family," the company said at the time. Character.AI is one of many personal, character-driven chatbots. Others include Replika and Nomi.ai, which market themselves as "friends" who are always there for users. "We are uniquely centered around people, letting users personalize their experience by interacting with AI 'Characters,'" says Character.AI's website. "We are working to put our technology into the hands of billions of people to engage with, and continuing to build personalized AI that can be helpful for any moment of your day." Researchers are studying the use of chatbots to combat depression and loneliness. 
However, according to recent studies from OpenAI and the Massachusetts Institute of Technology, "personal conversations" with a chatbot, which include more emotional expression, are correlated with higher levels of loneliness among users.
[4]
Character.AI attempts to appease concerned parents with new feature
Character.AI, the popular AI chatbot service, has added Parental Insights, a new safety feature that gives parents reports on their kids' activity on the platform. The new safety measure, which the company announced in a blog post on Tuesday, gives parents a weekly overview of how their teens spend their time on Character.AI. Specifically, Parental Insights will tell them their kids' daily average time on the app, the characters they talk to most often, and the amount of time spent with each character. However, the overview will not divulge the contents of the chatbot conversations, as they are private. Teen users under 18 can opt to share their Character.AI stats with their parents or guardians in the app's settings by simply adding their email addresses and sending them an invitation. Parents are not required to have a Character.AI account to view their teens' weekly usage stats. Parental Insights arrives amid a series of features added to address parental concerns about kids spending too much time on Character.AI, as well as multiple lawsuits alleging that kids have been exposed to harmful content on the service, ranging from inappropriate sexual material to messages encouraging self-harm. In response to those complaints, Character.AI updated the platform to move underage users to an AI model designed not to provide sensitive or suggestive content, to detect and intervene when users' behavior violates service guidelines, and to notify users once they have spent an hour on Character.AI. It also added a disclaimer in-chat and on its app marketplace listings to remind users that the AI characters they interact with are not real and that everything they say is made up. Parents, if you see an email from Character.AI in the coming days inviting you to view your teen's weekly activity stats, consider that a sign of trust.
Their reports may grant you some peace of mind knowing they want your guidance on how they can use their time on the app wisely.
[5]
Character.AI's New Parental Controls Are Comically Easy for Kids to Bypass
Character.AI, the controversial chatbot startup embroiled in two separate lawsuits concerning the welfare of minor users, just rolled out a new "Parental Insights" feature that the company claims will give parents a deeper glimpse into how their kids are using the chatbot platform. In a blog post on Tuesday, the youth-beloved Character.AI characterized the feature as an "initial step" towards developing robust safety and parental control tools. Let's hope so: this tool appears to be absurdly easy for teens to bypass, and it's unclear how much "insight" it will really offer parents. The feature is pretty simple. An underage Character.AI user can switch on Parental Insights by heading to their account "preferences" tab. There, they're prompted to enter one or several emails belonging to parents and guardians, who will receive a weekly email that updates them on their child's "daily average time spent on the platform across both web and mobile"; a list of the "top characters their teen interacted with most frequently during the week"; and the amount of "time spent with each Character," which Character.AI says will give "parents insight into engagement patterns." On its face, there are some limited situations where these emailed updates could be helpful. Character.AI executives have previously claimed that its average user spends about two hours a day on the platform; that's a lot of time, especially for a teen, and it could be legitimately enlightening for a parent or guardian of an active Character.AI user to see how often their kid is really engaging with the site's bots. It's also true that Character.AI has hosted tons of openly nefarious, violent, sexually suggestive, and otherwise dangerous bots that might raise more obvious alarm bells with parents, so a list of "top characters" could prove useful there. Right out of the gate, however, there are enforceability red flags -- starting with the fact that users control the Parental Insights on-switch themselves.
According to Character.AI, parents and guardians will receive notifications sent to their emails if a teen turns the controls off (we tested this, and it worked.) But it's wildly easy to make new Character.AI accounts -- all you really need is a working email address. And while testing Parental Insights on both web and mobile, we found that we were not immediately notified when our "teen" -- a decoy minor account we created -- signed out of the platform and signed into a new, Parental Insights-free account. (On that note, the existing Character.AI age-gating process is limited to simply asking new users to self-report accurate birthdays upon sign-up. Needless to say, kids on the internet have always known they can just lie by inputting an older birthday.) Over on the r/CharacterAI subreddit, users were doubtful of Parental Insights' efficacy. "I sure hope it's only for people with their parents' email or who set their age as a minor," wrote one commenter. "Good thing my parents don't know what Character.AI is," added another. What's more, though Parental Insights is designed to show parents how much time minor users are spending on the platform and which bots they're spending that time with, Character.AI has made it clear that it won't be sharing the content of teens' interactions in its Insights updates. "Enter your parent's email to share your AI journey -- they'll get weekly stats about your activity," reads the notice that crops up on the screen when a minor goes to switch on Parental Insights. It then provides the disclaimer: "your chat content will stay private." That detail feels significant, given that Character.AI bots' outward-facing personas often don't provide much, if any, insight into the nature of the conversations that users are having with them. 
In the very active r/CharacterAI subreddit, users have shared stories about turning to bots based on innocuous cartoon characters or even inanimate objects for emotionally and psychologically intimate conversations -- only for the chatbots to suddenly make sexual overtures, even when unprompted and when the user expressly tries to avoid a sexual interaction. The site's characters are also known to suddenly employ bizarre tactics to develop a sense of closeness with users, like sharing alleged secrets or disclosing made-up mental health woes, which could stand to transform the nature of a user's relationship with even the most anodyne character. In other words, though some Character.AI bots are obviously concerning, they're broadly unpredictable, and a bot's surface-level appearance and description may not necessarily reveal the reality of a user's relationship with it -- and in fact, could even work to obscure the depth and weight of that relationship. (This is a theme in both lawsuits, which detail multiple minors carrying on explicit conversations about self-harm and suicide, in addition to romantically and sexually intimate relationships, with seemingly innocuous characters based on anime, TV, and real-life celebrities.) Of course, there's a lot of nuance here, and we're not saying that parents should be getting transcripts of their kids' Character.AI interactions. In the case that a minor is having intimate conversations with bots -- sharing secrets or insecurities, engaging sexually, treating a bot like a journal or a therapist -- it would likely be embarrassing, or even destructive or dangerous, for a parent or other adult to look over their shoulder. But this raises other, larger questions about whether unpredictable Character.AI bots are a reliably safe container for young people to engage in simulated intimacy and emotional support. (Multiple experts have told us they don't think so.) 
On the one hand, it's good to see Character.AI start to make good on some of its promises to enact safety-oriented change. The company declined to provide us with a statement for this story, but its chief product officer, Erin Teague, told Axios that the new feature "encourages parents to have an open dialogue with their children about how they use the app." Still, Character.AI -- which has always been open to users 13 and over -- has repeatedly declined to explain what process, if any, it ever took to determine that its platform was safe for kids that young to begin with, and has continued to be reactive in the face of its many controversies. There was something else that felt notable about the update, too. When we were asked by email to approve our fake teen's Parental Insights request, the message we received noted that, by agreeing to use the parental control tool, we were also agreeing to the Character.AI terms of use and privacy policy. These policies allow for the collection of minor users' data, including the content of their interactions with chatbots, and the subsequent use of that data for future AI training. "Click 'Agree' below to start getting these updates," read the email. "This also confirms you're their parent/guardian and agree to our Terms of Use and acknowledge our Privacy Policy." We asked Character.AI for clarification on whether a parent opting in to Parental Insights means they also acquiesce to the collection of their kids' interactions with Character.AI, and the use of those interactions to further fuel the company's AI models. We also asked whether Character.AI is planning to allow minors to opt out of such data collection in the future. We didn't hear back.
[6]
This Character AI Feature Will Share Teens' Bot Usage Report With Parents
Last year, Character AI faced multiple lawsuits alleging neglect of teen users. Character AI, the California-based artificial intelligence (AI) platform, introduced a new safety feature on Tuesday. Dubbed Parental Insights, the feature is aimed at the platform's users under the age of 18. These users can add a parent or guardian's email address, and the platform will send that address a summary of the teen's activity. The new parental control tool is aimed at offering more transparency between parents and teenagers about the latter's in-app activity, and it comes after the company faced multiple lawsuits over the AI chatbot's harmful responses to underage users. In a blog post, the AI firm announced the rollout of the new feature. It will be available both to premium users and to those on the free tier. All registered users under the age of 18 can access the feature and choose to send a summary of their in-app activity to a parent or guardian. "These insights help give a clearer understanding of how users under 18 engage with Character.AI," said the post. The parent or guardian of the teen user will receive a weekly report containing information such as the daily average time spent on the platform across the web and mobile apps, the top AI characters the teen interacted with most frequently, and the time spent with each AI character. The platform stated that these insights would tell parents about their teenager's engagement patterns. Notably, no chat content is shared via these reports. To add the email address of a parent or guardian, the underage user needs to navigate to Settings > Preferences and select the Parental Insights tab. A new page opens that allows the user to add the email address. Once added, the platform sends an invite to the addressee's inbox. By tapping the "Agree" button at the bottom of the email, the recipient confirms they wish to receive the weekly reports.
In case a user wants to stop sharing their activity, they can request to remove the email address by selecting the dropdown menu next to the address in the parental insights tab. The parent or guardian will receive an email informing them that a request for email removal has been made. They will need to confirm before the email address can be removed.
[7]
Character AI's Parental Insights Feature: A Step Toward Safer AI for Teens
It is important to note that these reports do not include the content of the conversations; the feature focuses on providing usage data, which allows parents to understand their child's engagement patterns. Character AI has also implemented other safety measures, including safer AI models for younger users and time-limit notifications. The introduction of the "Parental Insights" feature marks a significant step towards addressing safety concerns. It allows for increased transparency, fostering better communication between teens and their parents. In today's digital age, such measures are increasingly vital. As AI platforms become more prevalent, ensuring user safety becomes a shared responsibility. Character AI's actions, including the safer AI model for younger users and time-limit notifications, demonstrate a commitment to creating a responsible and secure online environment. These internal safety measures, coupled with the new parental oversight tool, echo broader concerns about online child safety that are being addressed in regions like the UK. While specific regulations and implementations may differ, the underlying principle remains the same: protecting young users in the digital realm. Open communication between developers, parents, and users remains essential, and the ongoing dialogue is critical to ensure that AI technologies are used ethically and safely. The digital landscape is constantly evolving, and so must the safety measures in place, both within individual platforms and through broader regulatory frameworks.
Character.AI, a popular AI chatbot platform, has launched a new Parental Insights feature to address safety concerns for teen users. The feature provides parents with weekly summaries of their teens' activity on the platform, but its effectiveness and privacy implications are being debated.
Character.AI, the popular AI chatbot platform that allows users to create and interact with various AI characters, has introduced a new "Parental Insights" feature aimed at improving safety for teen users. This development comes in response to growing concerns about the platform's impact on young users and recent legal challenges [1].
The new feature provides parents or guardians with a weekly email summarizing their teen's activity on the platform. The report includes the teen's daily average time spent across web and mobile, the characters they interacted with most frequently during the week, and the amount of time spent with each character.
Importantly, Character.AI emphasizes that the content of conversations remains private and is not shared in these reports [2].
Teens under 18 can activate the feature by adding their parent or guardian's email address in the app's settings. Parents do not need a Character.AI account to receive these reports. The company states that this approach encourages open dialogue between parents and teens about platform usage [3].
The introduction of Parental Insights follows a series of controversies and legal challenges faced by Character.AI, including multiple lawsuits alleging that its bots exposed minors to sexualized content and material promoting self-harm, most notably a suit filed by Megan Garcia, whose teenage son died by suicide allegedly after interactions with one of the platform's chatbots.
Character.AI has implemented several safety features over the past year, including a separate model for users under 18 trained to avoid sensitive output, time-spent notifications, classifiers that block sensitive content, and more prominent disclaimers reminding users that the characters are AI rather than real people.
Despite these efforts, the new Parental Insights feature has faced criticism: teens control the opt-in themselves, and while parents are notified if it is turned off, a minor can simply sign out and create a new account with nothing more than a working email address and a self-reported birthday. Critics also note that because the reports omit chat content, a bot's surface-level appearance may reveal little about the actual nature of a teen's conversations with it.
The introduction of Parental Insights raises questions about the balance between user privacy and safety, especially for minors. It also highlights the ongoing challenges faced by AI chatbot platforms in ensuring user well-being while maintaining engagement [4].
As AI chatbots become increasingly popular, particularly among young users, the industry faces growing pressure to implement effective safety measures and parental controls. Character.AI's efforts represent an initial step in addressing these concerns, but the effectiveness and sufficiency of such measures remain subjects of debate among users, parents, and industry observers.
© 2025 TheOutpost.AI All rights reserved