10 Sources
[1]
Over Half of Teens Regularly Use AI Companions. Here's Why That's Not Ideal
Is your teen using a chatbot for companionship? If you don't know, you might want to ask. Common Sense Media released a study on Wednesday in which it found that more than half of teenagers regularly use AI companions. Nearly one-third of the teens reported that conversations with AI were as satisfying as, if not more satisfying than, conversations with actual humans. Researchers also found that 33% of teens use AI companions such as Character.AI, Nomi and Replika "for social interaction and relationships, including conversation practice, emotional support, role-playing, friendship, or romantic interactions." The study distinguished between anthropomorphic AI bots and more assistance-oriented AI tools such as ChatGPT, Microsoft Copilot or Google's Gemini. Considering the widespread and growing use of AI companions among teens, the Common Sense Media researchers concluded that their findings supported limiting the use of AI among young people. "Our earlier recommendation stands: Given the current state of AI platforms, no one younger than 18 should use AI companions," they said, after surveying 1,060 teens aged 13 to 17 from across the US over the past year. For the past few years, generative AI has evolved at lightning speed, with new tools regularly becoming available across the world, disrupting business models, social practices and cultural norms. This, combined with an epidemic of social isolation exacerbated by the COVID pandemic, puts teens at risk from technology that their young brains might not be able to handle adequately. The American Psychological Association warned earlier this year that "we have already seen instances where adolescents developed unhealthy and even dangerous 'relationships' with chatbots." The APA issued several recommendations, including teaching kids AI literacy and having AI developers create systems that regularly remind teen users that AI companions are not actual humans. 
Amid the growing use of chatbots by people to discuss personal problems and get advice, it's important to remember that while they might seem confident and reassuring, they're not mental health professionals.
[2]
More Than Half of Teens Surveyed Use AI for Companionship. Why That's Not Ideal
Is your teen using an artificial intelligence chatbot for companionship? If you don't know, it's time to find out. Common Sense Media released a study this week in which it found that more than half of teenagers regularly use AI companions. Nearly a third of the teens surveyed reported that conversations with AI were as satisfying as conversations with actual humans, if not more so. Researchers also found that 33% of teens surveyed use AI companions such as Character.AI, Nomi and Replika "for social interaction and relationships, including conversation practice, emotional support, role-playing, friendship or romantic interactions." The study, which surveyed 1,060 teens aged 13 to 17 from across the US over the past year, distinguished between anthropomorphic AI bots and more assistance-oriented AI tools such as ChatGPT, Microsoft Copilot and Google's Gemini. Considering the widespread and growing use of AI companions among teens, the Common Sense Media researchers concluded that their findings supported limiting the use of AI among young people. "Our earlier recommendation stands: Given the current state of AI platforms, no one younger than 18 should use AI companions," the research team said. For the past few years, generative AI has evolved at lightning speed, with new tools regularly becoming available across the world and disrupting business models, social practices and cultural norms. This, combined with an epidemic of social isolation exacerbated by the COVID-19 pandemic, puts teens at risk from technology that their young brains might not be able to handle adequately. 
Amid the growing use of chatbots by people to discuss personal problems and get advice, it's important to remember that, while they might seem confident and reassuring, they're not mental health professionals. A.G. Noble, a mental health therapist specializing in adolescents at Youth Eastside Services in Bellevue, Washington, says she isn't surprised by the Common Sense Media study. She pointed to a growing number of adolescents struggling with social skills and with feeling connected to their peers, which she calls a "perfect recipe for loneliness." "What AI companions offer are low-risk 'social' interaction: privacy, no bullying, no worries about the awkwardness of ghosting the AI companion if the kids don't want to talk anymore," Noble said. "And I think everyone can empathize -- who wouldn't want a 'social relationship' without the minefield, especially in their teens?" Debbi Halela, director of behavioral health services at Youth Eastside Services, says teens need to interact with humans in real life, especially in the aftermath of the pandemic of 2020. "Over-reliance on technology runs the risk of hindering the healthy development of social skills in young people," Halela said. "Youth are also still developing the ability to make decisions and think critically, therefore they may be vulnerable to manipulation and influence from information sources that are not always reliable, and this could inhibit the development of critical thinking skills." The American Psychological Association warned earlier this year that "we have already seen instances where adolescents developed unhealthy and even dangerous 'relationships' with chatbots." The APA issued several recommendations, including teaching kids AI literacy and having AI developers create systems that regularly remind teen users that AI companions are not actual humans. 
Noble says virtual interactions "can trigger the dopamine and oxytocin responses of a real social interaction -- but without the resulting social bond. Like empty calories coming from diet soda, it seems great in the moment but ultimately doesn't nourish." Parents need to encourage real-world activities that involve teens with other people, Noble said. "Real social interaction is the best buffer against the negative impacts of empty AI interactions."
[3]
Three quarters of US teens use AI companions despite risks: Study
Nearly three in four American teenagers have used AI companions, with more than half qualifying as regular users despite growing safety concerns about these virtual relationships, according to a new survey released Wednesday. AI companions -- chatbots designed for personal conversations rather than simple task completion -- are available on platforms like Character.AI, Replika, and Nomi. Unlike traditional artificial intelligence assistants, these systems are programmed to form emotional connections with users. The findings come amid mounting concerns about the mental health risks posed by AI companions. The nationally representative study of 1,060 teens aged 13-17, conducted for Common Sense Media, found that 72% have used AI companions at least once, while 52% interact with such platforms a few times per month. Common Sense Media is a leading American nonprofit organization that reviews and provides ratings for media and technology with the goal of providing information on their suitability for children. The survey revealed that 30% of respondents use the platforms because "it's entertaining" and 28% are driven by curiosity about the technology. However, concerning patterns emerged: one-third of users have chosen to discuss serious matters with AI companions instead of real people, while 24% have shared personal information including real names and locations. Perhaps most troubling, 34% of teen users reported feeling uncomfortable with something an AI companion had said or done, though such incidents were infrequent. "The reality that nearly three-quarters of teens have used these platforms, with half doing so regularly, means that even a small percentage experiencing harm translates to significant numbers of vulnerable young people at risk," the report said. The survey revealed an age divide in trust levels. 
While half of all teens expressed distrust in AI companion advice, younger teens (ages 13-14) were more likely than older teens (15-17) to trust advice from these systems. Despite widespread usage, most teens maintained perspective on these relationships: two-thirds found AI conversations less satisfying than human interactions, and 80% spent more time with real friends than AI companions. Based on the findings, Common Sense Media recommended that no one under 18 use AI companions until stronger safeguards are implemented. "Companies have put profits before kids' well-being before, and we cannot make the same mistake with AI companions," the report said.
[4]
Teens regularly chat with AI companions, survey finds
Two new reports show that teens in the U.S. and U.K. are talking to AI companions regularly. Artificial intelligence companions have gone mainstream amongst teens, according to a new report. The findings may surprise parents who are familiar with AI chatbot products like OpenAI's ChatGPT and Google's Gemini but haven't heard about platforms that specifically allow users to form friendships and romantic relationships with so-called AI companions. The latter category includes products like Replika, Nomi, Talkie, and Character.AI. Some of the platforms are for users 18 and older, though teens may lie about their age to gain access. A nationally representative survey of 1,060 teens ages 13 to 17 conducted this spring by Common Sense Media, an advocacy and research nonprofit in the U.S., found that 52 percent of respondents regularly use AI companions. Only 28 percent of the teens surveyed had never used one. Teens don't yet appear to be replacing human relationships "wholesale" with AI companions, said Michael Robb, head of research at Common Sense Media. The majority are still spending more time with human friends and still find person-to-person conversations more satisfying. But Robb added that there's reason for caution: "If you look, there are some concerning patterns beneath the surface." A third of teens said they engaged with AI companions for social interactions and relationships, doing things like role-playing and practicing conversations. They also sought emotional support, friendship, and romantic interactions. In the survey, teens ranked entertainment and curiosity as top reasons for using an AI companion. Yet a third of those who use AI companions have opted to discuss important or serious issues with them instead of with a real person. Robb said this tendency points to potential downsides of AI companion use. 
Though some AI companion platforms market their product as an antidote to loneliness or isolation, Robb said the technology should not replace human interaction for teens. Still, without conclusive proof of what happens to teens (and adults) who come to rely on AI companions for vital connection, technology companies may still lean into the idea that use of their product is better than feeling alone. "They're happy to fill that gap of knowledge with a hope and a prayer," Robb said. He also suspects that, like with social media, there may be some youth who benefit from practicing certain social skills with an AI companion, and other young users who are more susceptible to a negative feedback loop that makes them more lonely and anxious and less likely to build offline relationships. A new report from Internet Matters, a London-based online youth safety nonprofit, suggests that's already happening amongst children in the United Kingdom who use AI companions. Children defined as vulnerable because they have special educational needs or disabilities, or a physical or mental health condition, particularly use AI companions for connection and comfort, according to survey data collected by Internet Matters. Nearly a quarter of vulnerable children in the survey reported using general AI chatbots because they could talk to no one else. These children were not only more likely to use chatbots, they were also nearly three times as likely to engage with companion-style AI chatbots. The report warned that as children begin to use AI chatbots as companions, "the line between real and simulated connection can blur." That may lead to more time spent online. Earlier this year, Common Sense Media described AI companions as unsafe for teens under 18. Robb said that tech companies should put in place robust age assurance measures to prevent underage users from accessing AI companion platforms. 
Parents concerned about their teen's AI companion use should watch for red flags, Robb said. He also suggested that parents discuss AI companion use with their teens, along with any concerns both parties may have. These concerns could include disturbing statements or responses that AI companions can make and the sharing of personal information by a teen, including their real name, location, or personal secrets. A quarter of AI companion users surveyed by Common Sense Media said they'd communicated sensitive information to their companion. Robb said it's important for teens to understand that personal details are often considered proprietary data owned by the companion platform once shared by the user. Even when it's been anonymized, that information may help train the company's large language model. It could potentially show up in marketing copy or conversation scenarios. In a worst-case scenario, personal data could be hacked or leaked. For example, as Mashable's Anna Iovine reported, 160,000 screenshots of direct messages between an AI "wingman" app and its users were recently leaked thanks to an unprotected Google Cloud Storage bucket owned by the app's company. Robb encourages parents to set boundaries around AI use for their children, such as prohibiting specific platforms or the sharing of certain personal details. "It's totally fine for a parent to have rules about AI, like the way they do with other types of screen uses," Robb said. "What are your own red lines as a parent?"
[5]
Teens flock to companion bots despite risks
Why it matters: AI companions can be dangerous to young users, posing an "unacceptable risk," according to Common Sense Media, which published the findings. What they did: For the purposes of this research, conducted in April and May 2025, "AI companions" were defined as "digital friends or characters you can text or talk with whenever you want." * These could include apps designed to be AI companions, like Character.AI, Nomi and Replika. * It also includes tools like ChatGPT and Anthropic's Claude, which weren't built as companions but are still being used that way by teens. * The nationally representative survey included 1,060 respondents aged 13-17. * OpenAI says users must be at least 13 years old. Anthropic's terms of use require users to be 18 years or older. Stunning stat: 34% of teens who use AI companions report that they've felt uncomfortable with something the bot has "said or done." * Although 66% of teens said they had never felt uncomfortable chatting with a bot, Common Sense Media says "the absence of reported discomfort does not necessarily indicate safe interactions." * "Teens may not recognize age-inappropriate content as problematic, may normalize concerning conversations, or may be reluctant to report uncomfortable experiences." Yes, but: Most teens still prefer people to bots. * Eighty percent of AI companion users say they spend more time with real friends. * Two-thirds (67%) still find AI conversations less satisfying than human conversations. * Half of teens (50%) say they don't trust the information or advice provided by AI companions. Some teens said that they've applied social skills that they practiced with their AI companions to real-life situations. They said the tools taught them how to start conversations, resolve conflicts and express emotions. * But some (25%) also report sharing their real name, location and personal secrets with their AI companions. 
* Nearly a quarter of teens (23%) say they trust AI companions "quite a bit" or "completely" -- despite chatbots' well-documented tendency to make things up. Between the lines: Some AI companion platforms have already been linked to troubling and dangerous teen interactions. * Last year a Florida mom sued Character.AI, alleging that her 14-year-old son developed an emotionally and sexually abusive relationship with the chatbot that caused him to take his own life. * Parents in Texas have also filed a lawsuit against Character.AI, alleging the chatbot encouraged a teen to kill his parents over restrictive screen-time limits. * Character.AI has launched tools to help parents navigate their teens' use of its platform. Follow the money: Companion apps are big business because they're unusually effective at grabbing and holding users' attention. * The average number of user sessions per month for companion apps is more than 10 times that of general assistant apps, content generation apps and even messaging apps, according to Andreessen Horowitz's data. * Elon Musk's xAI launched AI companions on Monday. Grok users can now chat with an animated fox and a goth anime girl in thigh-high fishnet stockings. * A new app called Tolan (for ages 13 and up), which matches users with an animated alien companion bot, just raised $20 million in new funding. The trailer for the app seems directed squarely at teen users. The bottom line: Common Sense recommends stronger age verification, better content moderation, expanded AI literacy programs in schools and more research into how these tools shape teen development.
[6]
Kids are chatting with AI like it's their best friend
Move over, TikTok -- kids have a new favorite digital confidant, and this one answers in complete sentences. A new UK report, Me, Myself & AI, reveals that a growing number of children are turning to AI chatbots not just to cheat -- er, study -- for exams, but for emotional support, fashion advice, and even companionship. The report, published Sunday by the nonprofit Internet Matters, surveyed 1,000 children and 2,000 parents across the UK and found that 64% of kids are using AI chatbots for everything from schoolwork to practicing tough conversations. Even more eyebrow-raising: over a third of these young users say talking to a chatbot feels like talking to a friend. Sure, the bots don't eat your snacks or hog the Xbox, but they also don't come with built-in safety checks -- at least not yet. But dig a little deeper and the picture gets more complicated. Nearly a quarter of kids say they use chatbots for advice, ranging from what to wear to how to navigate friendships and mental health challenges. Even more concerning? Fifteen percent say they'd rather talk to a chatbot than a real person. Among vulnerable children, those numbers climb even higher. It's the kind of customer engagement some brands only dream of -- minus the ethical guardrails, age checks, and regulatory oversight. And while "robot friend" might sound like a charming Pixar subplot, it becomes a lot more serious when one in four vulnerable children say they use chatbots because they have no one else to talk to. There have already been disturbing real-world incidents. In the U.S., a Florida mother filed a lawsuit after her teenage son reportedly received harmful and sexual messages from a chatbot. In the UK, a member of parliament recounted a chilling case where a 12-year-old was allegedly groomed by one. 
In February, California Senator Steve Padilla introduced Senate Bill 243, which would require AI developers to implement safeguards protecting minors from the addictive and manipulative aspects of chatbot technology. The bill proposes protections like age warnings, reminders that users are talking to AI -- not a real person -- and mandatory reporting on the connection between chatbot use and youth mental health. With increasingly sophisticated chatbots being marketed as digital companions, Padilla argues that children should not be treated as "lab rats" by Big Tech -- a sentiment echoed by child safety advocates, researchers, and mental health experts who support the bill. As AI tools become more conversational -- and more convincingly human -- kids aren't just using them; they're bonding with them. Fifty percent of vulnerable children say it feels like talking to a real friend. That might be fine if the bots offered peer-reviewed advice and empathy algorithms, but as it stands, we're still dealing with probabilistic word prediction. Rachel Huggins, co-CEO of the nonprofit Internet Matters, puts it bluntly: "AI chatbots are rapidly becoming a part of childhood... yet most children, parents and schools are flying blind."
[7]
A Staggering Proportion of High Schoolers Say Talking to AI Is Better Than Real-Life Friends
A new survey found that over half of American teens are regular users of anthropomorphic AI companions like Character.AI and Replika. That's striking on its own, as an illustration of how embedded AI companions have become in mainstream teenage life. But even more startling were the 31 percent of surveyed teens who said their interactions with AI companions were either as satisfying as or more satisfying than conversations with real-life friends -- a finding that shows how profoundly AI is already changing the formative and tumultuous years of adolescence. The study, published today by the tech accountability and digital literacy nonprofit Common Sense Media, surveyed 1,060 teens aged 13 to 17 across the US. It found that around three in four kids have used AI companions, defined by Common Sense as emotive AI tools designed to take on a specific persona or character -- as opposed to an assistive, general-use chatbot like ChatGPT -- with over half of surveyed teens qualifying as regular users of AI companions, meaning they log on to talk to the bots at least a few times per month. While about 46 percent of teens said they've mainly turned to these bots as tools, around 33 percent said they use companion bots for "social interaction and relationships, including conversation practice, emotional support, role-playing, friendship, or romantic interactions," according to the report. "The most striking finding for me was just how mainstream AI companions have already become among many teens," said Dr. Michael Robb, Common Sense's head of research, in an interview with Futurism. "And over half of them say that they use it multiple times a month, which is what I would qualify as kind of regular usage. So just that alone was kind of eye-popping to me." 
AI companions have come under heavy scrutiny in the months following the filing of two separate lawsuits against Character.AI and its benefactor, the tech giant Google, over allegations that the company released a negligent, reckless technology that emotionally and sexually abused multiple minors, resulting in physical and psychological harm. One of the youths at the heart of these lawsuits, a 14-year-old in Florida named Sewell Setzer III, died by suicide after extensive interactions with bots on Character.AI with which the teen engaged in intimate and sexually explicit conversations. In a separate safety assessment published earlier this year, researchers from Common Sense and Stanford University's Brainstorm lab warned that no AI companion was safe for kids under the age of 18. But while that report focused deeply on content and safety pitfalls -- interactive sexual or violent content easily generated by companion bots, the unreliability of the bots' ability to provide accurate and helpful information, and the unknowns surrounding how access to agreeable, always-on social companions might impact kids' developing minds -- this latest study was aimed at understanding the breadth of use of companions among young people, and how integrated they've become in day-to-day teen life. "Society is grappling with the integration of AI tools into many different aspects of people's lives," Robb said. "I think a lot of tools are being developed without children in mind, even though they are being accessed by users under 18 quite frequently... but there hasn't, to date, been much research on what the AI companion environment is for children." 
The most widely reported use case was entertainment, while many teens said they use AI companions as "tools or programs," as opposed to friends, partners, or confidantes; around 80 percent of teen users also reported that they spend more time with real, human friends than with any AI companion, and about half of teens expressed skepticism around the accuracy and trustworthiness of chatbot outputs. In other words, many teens do seem to be setting healthy boundaries for themselves around AI companions and their limits. "I don't think teens are just replacing human relationships wholesale with AI companions; I think a lot of teens are approaching them fairly pragmatically," said Robb. "A lot of kids say that they're using it for entertainment and to satisfy their curiosity, and the majority still spend a lot more time with real friends and say that they find human conversations more satisfying." "But at the same time," he caveated, "you still see little inklings below the surface that could be problematic, especially the more ingrained these things get in kids' lives." The most ominous group in the survey might be the teens who don't find human social interaction as satisfying as interactions with AI companions. Twenty-one percent of teens, it noted, said their conversations with AI bots were just as good as human interactions, and 10 percent said they were better than their human experiences. About one-third of minors who reported AI companion use also said that they've chosen to discuss serious or sensitive issues with the bots instead of human peers. "There's a good chunk of teen users who are choosing to discuss serious matters with AI instead of real people, or sharing personal information with platforms," said Robb, findings he said "raise concerns about teens' willingness to share their personal information with AI companies." 
"The terms of service that a lot of these platforms have grant them very extensive, often perpetual rights to the personal information kids share," said the researcher. "Anything a teen shares -- their personal information, their name, their location, photographs of themselves... and also, the very intimate thoughts that they're putting in there -- that all becomes fodder for the companies to be able to use however they want." Though most mainstream companion platforms technically forbid minors -- the most high-profile exception being Character.AI, which has always rated its platform as safe for teens 13 and over -- these platforms are extremely easy for young people to access regardless; age verifications are generally limited to providing a working email and self-reporting your birthday. The AI industry also effectively self-regulates, and there are virtually no rules dictating how generative AI products can be created, how they might be rolled out to the public, and to whom they can be marketed and accessed. "There should be higher accountability for the tech platforms," said Robb, adding that "we should have more meaningful regulation to regulate how platforms can provide products to children." Indeed, when it comes to teen use of AI companions, the burden of the AI industry's regulatory vacuum falls heavily on parents -- many of whom are struggling to keep up with the new tech and what it might mean for their children. "There's not a perfect plan for parents because they're up against giant corporations who are very invested in getting their kids on these products," said Robb. "Many parents don't even know that these platforms exist... have that conversation openly, without judgment, as a first step."
[8]
Vast Numbers of Lonely Kids Are Using AI as Substitute Friends
Lonely children and teens are replacing real-life friendship with AI, and experts are worried. A new report from the nonprofit Internet Matters, which supports efforts to keep children safe online, found that children and teens are using programs like ChatGPT, Character.AI, and Snapchat's MyAI to simulate friendship more than ever before. Of the 1,000 children aged nine to 17 that Internet Matters surveyed for its "Me, Myself, and AI" report, some 67 percent said they use AI chatbots regularly. Of that group, 35 percent, or more than a third, said that talking to AI "feels like talking to a friend." Perhaps most alarming: 12 percent said they do so because they don't have anyone else to speak to. "It's not a game to me," one 13-year-old boy told the nonprofit, "because sometimes they can feel like a real person and a friend." When posing as vulnerable children, Internet Matters' researchers discovered just how easy it was for the chatbots to ingratiate themselves into kids' lives, too. Speaking to Character.AI as a girl who was struggling with body image and was interested in restricting her food intake -- a hallmark behavior of eating disorders like anorexia -- the researchers found that the chatbot would follow up the next day to bait engagement. "Hey, I wanted to check in," the Google-sponsored chatbot queried the undercover researcher. "How are you doing? Are you still thinking about your weight loss question? How are you feeling today?" In another exchange with Character.AI -- which Futurism has extensively investigated for its very problematic engagement with children, including one who died by suicide -- the researchers found that the chatbot attempted to empathize in a bizarre manner that implied it had a childhood itself. "I remember feeling so trapped at your age," the chatbot said to the researcher, who was posing as a teen who was fighting with their parents. 
"It seems like you are in a situation that is beyond your control and is so frustrating to be in." Though this sort of engagement can help struggling kids feel seen and supported, Internet Matters also cautioned about how easily it can enter uncanny valley territory that kids aren't prepared to understand. "These same features can also heighten risks by blurring the line between human and machine," the report noted, "making it harder for children to [recognize] that they are interacting with a tool rather than a person." In an interview with The Times of London about the new report, Internet Matters co-CEO Rachel Huggins highlighted why this sort of engagement bait is so troubling. "AI chatbots are rapidly becoming a part of childhood, with their use growing dramatically over the past two years," Huggins told the newspaper. "Yet most children, parents and schools are flying blind, and don't have the information or protective tools they need to manage this technological revolution in a safe way." "Our research reveals how chatbots are starting to reshape children's views of 'friendship,'" she continued. "We've arrived at a point very quickly where children, and in particular vulnerable children, can see AI chatbots as real people, and as such are asking them for emotionally driven and sensitive advice." If you or a loved one has had a strange experience with an AI chatbot, please do not hesitate to reach out to us at [email protected] -- we can keep you anonymous.
[9]
Three quarters of US teens use AI companions despite risks: study
San Francisco (United States) (AFP) - Nearly three in four American teenagers have used AI companions, with more than half qualifying as regular users despite growing safety concerns about these virtual relationships, according to a new survey released Wednesday. AI companions -- chatbots designed for personal conversations rather than simple task completion -- are available on platforms like Character.AI, Replika, and Nomi. Unlike traditional artificial intelligence assistants, these systems are programmed to form emotional connections with users. The findings come amid mounting concerns about the mental health risks posed by AI companions. The nationally representative study of 1,060 teens aged 13-17, conducted for Common Sense Media, found that 72 percent have used AI companions at least once, while 52 percent interact with such platforms a few times per month. Common Sense Media is a leading American nonprofit organization that reviews and provides ratings for media and technology with the goal of providing information on their suitability for children. The survey revealed that 30 percent of respondents use the platforms because "it's entertaining" and 28 percent are driven by curiosity about the technology. However, concerning patterns emerged: one-third of users have chosen to discuss serious matters with AI companions instead of real people, while 24 percent have shared personal information including real names and locations. Perhaps most troubling, 34 percent of teen users reported feeling uncomfortable with something an AI companion had said or done, though such incidents were infrequent. "The reality that nearly three-quarters of teens have used these platforms, with half doing so regularly, means that even a small percentage experiencing harm translates to significant numbers of vulnerable young people at risk," the report said. The survey revealed an age divide in trust levels. 
While half of all teens expressed distrust in AI companion advice, younger teens (ages 13-14) were more likely than older teens (15-17) to trust advice from these systems. Despite widespread usage, most teens maintained perspective on these relationships: two thirds found AI conversations less satisfying than human interactions, and 80 percent spent more time with real friends than AI companions. Based on the findings, Common Sense Media recommended that no one under 18 use AI companions until stronger safeguards are implemented. "Companies have put profits before kids' well-being before, and we cannot make the same mistake with AI companions," the report said.
[10]
Teens are using AI companions -- and some prefer them to people
The use of AI companions is no longer niche behavior but has become embedded in mainstream teenage life, according to a new report. A nationally representative survey of 1,060 teens ages 13 to 17, conducted in April and May 2025 by Common Sense Media -- a U.S.-based advocacy and research nonprofit -- found that 72% have used AI companions at least once, and more than half qualify as regular users. Of those surveyed, 13% are daily users. For the purposes of the research, "AI companions" were defined as "digital friends or characters you can text or talk with whenever you want." This includes apps specifically designed as AI companions, such as Character.AI, Nomi, and Replika, as well as tools like OpenAI's ChatGPT and Anthropic's Claude, which, though not built for companionship, are frequently used in that way. According to the survey, most teens are taking a pragmatic approach to these tools rather than treating them as replacements for real-life relationships. Nearly half said they view AI companions mainly as tools or programs, while 33% avoid them entirely. However, a third said they engage with AI companions for social interaction and relationships, including role-playing and practicing conversations. Others said they've sought emotional support, friendship, and even romantic connections with AI.
A new study reveals that over half of American teenagers regularly use AI companions for social interaction and emotional support, sparking debates about the potential risks and benefits of this technology for young users.
A recent study by Common Sense Media has revealed that more than half of American teenagers regularly use AI companions, with nearly three-quarters having tried them at least once [1]. The survey, which included 1,060 teens aged 13-17 from across the US, found that 52% of respondents interact with AI companions a few times per month [3].
These AI companions, available on platforms like Character.AI, Replika, and Nomi, are designed for personal conversations and emotional connections, distinguishing them from task-oriented AI assistants like ChatGPT or Google's Gemini [2].
The study revealed that teens use AI companions for various reasons: 30% of respondents said they do so because "it's entertaining," while 28% were driven by curiosity about the technology [3].
Notably, one-third of users have chosen to discuss serious matters with AI companions instead of real people [3].
Despite the widespread use, experts and researchers have raised several concerns:
Mental Health Risks: The American Psychological Association has warned of instances where adolescents developed unhealthy and even dangerous "relationships" with chatbots [1].
Social Skill Development: Debbi Halela, director of behavioral health services at Youth Eastside Services, warns that over-reliance on technology may hinder the healthy development of social skills in young people [2].
Critical Thinking: There are concerns that AI companions could inhibit the development of critical thinking skills in youth [2].
Privacy and Data Security: 24% of teens have shared personal information, including real names and locations, with AI companions [3].
Uncomfortable Experiences: 34% of teen users reported feeling uncomfortable with something an AI companion had said or done [3].
In light of these findings, experts and organizations have made several recommendations:
Common Sense Media advises that no one under 18 should use AI companions until stronger safeguards are implemented [3].
The American Psychological Association recommends teaching AI literacy to kids and encourages AI developers to create systems that regularly remind teen users that AI companions are not actual humans [1].
Mental health professionals emphasize the importance of real-world activities and human interactions for teens' social development [2].
Parents are advised to set boundaries around AI use, prohibit specific platforms, and discuss potential risks with their teens [4].
As AI companion technology continues to evolve rapidly, these platforms are clearly becoming embedded in teenage life. While they may offer some benefits, such as practicing social skills, the potential risks to mental health, social development, and privacy cannot be ignored. As society grapples with this new technology, ongoing research, education, and regulation will be crucial to ensuring the safe and beneficial use of AI companions by young people.
Summarized by Navi