13 Sources
[1]
72% of U.S. teens have used AI companions, study finds | TechCrunch
A new study by Common Sense Media, a U.S. nonprofit focused on the lives of kids and families, has found that a vast majority of U.S. teens (72%) have tried an AI companion at least once. By "companion," the study means AI chatbots designed for more personal conversations, not AI assistants that work as homework helpers, image generators, or voice assistants that simply answer questions. For instance, the study's definition of AI companions could include the digital AI personas provided by companies like Character.AI or Replika, but it could also encompass the use of general-purpose chatbots like ChatGPT or Claude, which can be used for more personal conversations, if desired. The idea of chatting with an AI seems to be appealing to U.S. teens (ages 13 to 17), the study found: not only had nearly three-quarters tried an AI companion, but 52% said they are regular users. Among those who engaged with these companions regularly, 13% chat with them daily and 21% chat a few times a week. Boys were also slightly more likely than girls to say they had never used an AI companion (31% vs. 25%); overall, about one in four teens said they have never tried one. The findings are based on a study that ran during April and May 2025, used a representative sample of 1,060 teens, and was conducted by researchers from NORC at the University of Chicago. There have already been concerns about AI's impact on teens' well-being: one firm, Character.AI, is being sued over a teen's suicide in Florida and over promoting violence in Texas, and a number of reports describe the potential dangers of using AI for therapy. The findings from Common Sense Media's new study offer an early understanding of how young people are using AI to simulate human interactions, which could include virtual friendship, emotional support, therapy, and role-playing games, among other things. The analysis also examined other behaviors around teen usage of AI companions, including what sorts of tasks teens turned to them for, why, and what the after-effects were. For instance, nearly half (46%) said they saw AI companions as tools or programs, while 33% said they use them for social interaction and relationships. Teens said they use AI companions for various purposes: entertainment (30% said this), curiosity about AI technology (28%), advice (18%), and their constant availability (17%). Half of teens (50%) said they don't trust the information provided by AI companions. However, older teens (ages 15-17) are less likely than younger teens (ages 13-14) to trust the AI's advice, at 20% versus 27%, respectively. One-third of teens said they find the conversations as satisfying as, or more satisfying than, those with real-life friends, though the majority (67%) felt the opposite way. Plus, 39% said they applied skills they first practiced with an AI to real-world situations. Among the skills practiced, social skills were the top use case (39%), followed by conversation starters (18%), giving advice (14%), and expressing emotions (13%). As for whether real-life relationships will be replaced by tech, there was one positive finding: 80% of teens who used AI companions said they spend more time with real friends than with their AI chatbots. Only 6% said the reverse was true.
[2]
Over Half of Teens Regularly Use AI Companions. Here's Why That's Not Ideal
Is your teen using a chatbot for companionship? If you don't know, you might want to ask. Common Sense Media released a study on Wednesday in which it found that more than half of teenagers under 18 regularly use AI companions. Nearly one-third of the teens reported that conversations with AI were as satisfying as, if not more satisfying than, conversations with actual humans. Researchers also found that 33% of teens use AI companions such as Character.AI, Nomi and Replika "for social interaction and relationships, including conversation practice, emotional support, role-playing, friendship, or romantic interactions." The study distinguished between anthropomorphic AI bots and more assistance-oriented AI tools such as ChatGPT, Microsoft Copilot or Google's Gemini. Considering the growing use of AI companions among teens, the Common Sense Media researchers concluded that their findings supported limiting the use of AI among young people. "Our earlier recommendation stands: Given the current state of AI platforms, no one younger than 18 should use AI companions," they said, after surveying 1,060 teens aged 13-17 from across the US over the past year. For the past few years, generative AI has evolved at lightning speed, with new tools regularly available across the world, disrupting business models, social practices and cultural norms. This, combined with an epidemic of social isolation exacerbated by the COVID pandemic, puts teens at risk from technology that their young brains might not be able to handle adequately. The American Psychological Association warned earlier this year that "we have already seen instances where adolescents developed unhealthy and even dangerous 'relationships' with chatbots." The APA issued several recommendations, including teaching AI literacy to kids and having AI developers create systems that regularly remind teen users that AI companions are not actual humans. Amid the growing use of chatbots by people to discuss personal problems and get advice, it's important to remember that while they might seem confident and reassuring, they're not mental health professionals.
[3]
More Than Half of Teens Surveyed Use AI for Companionship. Why That's Not Ideal
Is your teen using an artificial intelligence chatbot for companionship? If you don't know, it's time to find out. Common Sense Media released a study this week in which it found that more than half of teenagers under 18 regularly use AI companions. Nearly a third of the teens surveyed reported that conversations with AI were as satisfying as conversations with actual humans, if not more so. Researchers also found that 33% of teens surveyed use AI companions such as Character.AI, Nomi and Replika "for social interaction and relationships, including conversation practice, emotional support, role-playing, friendship or romantic interactions." The study, which surveyed 1,060 teens aged 13 to 17 from across the US over the past year, distinguished between anthropomorphic AI bots and more assistance-oriented AI tools such as ChatGPT, Microsoft Copilot and Google's Gemini. Considering the growing use of AI companions among teens, the Common Sense Media researchers concluded that their findings supported limiting the use of AI among young people. "Our earlier recommendation stands: Given the current state of AI platforms, no one younger than 18 should use AI companions," the research team said. For the past few years, generative AI has evolved at lightning speed, with new tools regularly available across the world and disrupting business models, social practices and cultural norms. This, combined with an epidemic of social isolation exacerbated by the COVID-19 pandemic, puts teens at risk from technology that their young brains might not be able to handle adequately. Amid the growing use of chatbots by people to discuss personal problems and get advice, it's important to remember that, while they might seem confident and reassuring, they're not mental health professionals. A.G. Noble, a mental health therapist specializing in adolescents at Youth Eastside Services in Bellevue, Washington, says she isn't surprised by the Common Sense Media study. She pointed to a growing number of adolescents struggling with social skills and with feeling connected to their peers, which she calls a "perfect recipe for loneliness." "What AI companions offer are low-risk 'social' interaction: privacy, no bullying, no worries about the awkwardness of ghosting the AI companion if the kids don't want to talk anymore," Noble said. "And I think everyone can empathize -- who wouldn't want a 'social relationship' without the minefield, especially in their teens?" Debbi Halela, director of behavioral health services at Youth Eastside Services, says teens need to interact with humans in real life, especially in the aftermath of the 2020 pandemic. "Over-reliance on technology runs the risk of hindering the healthy development of social skills in young people," Halela said. "Youth are also still developing the ability to make decisions and think critically, therefore they may be vulnerable to manipulation and influence from information sources that are not always reliable, and this could inhibit the development of critical thinking skills." The American Psychological Association warned earlier this year that "we have already seen instances where adolescents developed unhealthy and even dangerous 'relationships' with chatbots."
The APA issued several recommendations, including teaching AI literacy to kids and AI developers creating systems that regularly remind teen users that AI companions are not actual humans. Noble says virtual interactions "can trigger the dopamine and oxytocin responses of a real social interaction -- but without the resulting social bond. Like empty calories coming from diet soda, it seems great in the moment but ultimately doesn't nourish." Parents need to encourage real-world activities that involve teens with other people, Noble said. "Real social interaction is the best buffer against the negative impacts of empty AI interactions."
[4]
A Huge Number of Teens Regularly Talk to an AI, But Don't Always Feel Good About It
A whopping 72% of teens have tried talking to an AI companion, and 52% do so regularly, according to a new survey. But don't worry (yet): The overwhelming majority still prefer in-person friendships. Common Sense Media, a nonprofit focused on online safety for families, asked 1,060 "nationally representative" teens how and why they are using AI. The questions focused on AI "companions," defined as "digital friends or characters you can text or talk with whenever you want...designed to have conversations that feel personal and meaningful." These AI systems can be more addictive and influence teens' perspectives. One Florida teen took his life after a bot on Character.AI encouraged him to do it, according to a lawsuit filed by his mother. The tech industry wants us to talk to AI systems as much as possible, too. Elon Musk's Grok chatbot just added AI companions, including sexual ones. Meta CEO Mark Zuckerberg says he is working toward a future where most of our friends are AI. "Despite the relative novelty of AI companions in the digital landscape, their dangers to young users are real, serious, and well documented," the report says. "Current research indicates that AI companions are designed to be particularly engaging through sycophancy, meaning a tendency to agree with users and provide validation, rather than challenging their thinking." Most teens talk to AI companions for entertainment (30%) and out of curiosity (28%), but many turn to them for advice (18%), because they're always available (17%), or because they're nonjudgmental (14%). Some feel the bots are easier to talk to than real people and say they can tell them things they wouldn't normally tell their friends and family. Sadly, 6% say they turn to them out of loneliness. The practical applications of teens talking to these "social" AI systems are limited, Common Sense Media says. Most of them (60%) are not doing it to practice their social skills. Many still have major misgivings about AI companions. Over a third (34%) say they have felt "uncomfortable with something an AI companion has said or done." Half don't trust the information they dispense. That might be why 67% find the conversations less satisfying than human chats -- but 21% rank them about the same, and 10% say their conversations with AI are more satisfying. A third of teens also say they have chosen to talk to an AI instead of a human about a serious topic. Across all these statistics, the trend seems to be that about a quarter to a third of teens are having fulfilling conversations with AIs that they may prefer over a human, but a strong majority still don't find them preferable to friends and family. Even if some teens are enjoying their AI companions, a strong 80% still say they prioritize interactions with their human friends. That could be because they spend most of their day at school or doing activities, but 13% say they spend more time with AI than with human pals. Common Sense Media finds the results both "optimistic and pessimistic" about the future impact of the technology on teens. "While nearly three in four teens have used AI companions, the data reveals that most approach these tools pragmatically, rather than as substitutes for human relationships," the nonprofit says. "The majority view AI companions as tools or programs, use them primarily for entertainment and out of curiosity, and maintain a healthy skepticism about the information they provide."
At the same time, some results warrant "immediate attention," especially because over a third of teens have had an uncomfortable interaction with an AI companion. More parental oversight and transparency could help. Following the teen's suicide, Character.AI added a Parental Insights feature to help parents understand what their kids are doing on the site. Common Sense also urges tech companies to "stop AI companions from claiming professional credentials or therapeutic training" and to "support beneficial AI companion features that enhance rather than replace human connection, such as conversation practice for social anxiety, language learning support, or creative brainstorming tools with clear usage boundaries."
[5]
Three quarters of US teens use AI companions despite risks: Study
Nearly three in four American teenagers have used AI companions, with more than half qualifying as regular users despite growing safety concerns about these virtual relationships, according to a new survey released Wednesday. AI companions -- chatbots designed for personal conversations rather than simple task completion -- are available on platforms like Character.AI, Replika, and Nomi. Unlike traditional artificial intelligence assistants, these systems are programmed to form emotional connections with users. The findings come amid mounting concerns about the mental health risks posed by AI companions. The nationally representative study of 1,060 teens aged 13-17, conducted for Common Sense Media, found that 72% have used AI companions at least once, while 52% interact with such platforms a few times per month. Common Sense Media is a leading American nonprofit organization that reviews and provides ratings for media and technology with the goal of providing information on their suitability for children. The survey revealed that 30% of respondents use the platforms because "it's entertaining" and 28% are driven by curiosity about the technology. However, concerning patterns emerged: one-third of users have chosen to discuss serious matters with AI companions instead of real people, while 24% have shared personal information including real names and locations. Perhaps most troubling, 34% of teen users reported feeling uncomfortable with something an AI companion had said or done, though such incidents were infrequent. "The reality that nearly three-quarters of teens have used these platforms, with half doing so regularly, means that even a small percentage experiencing harm translates to significant numbers of vulnerable young people at risk," the report said. The survey revealed an age divide in trust levels. While half of all teens expressed distrust in AI companion advice, younger teens (ages 13-14) were more likely than older teens (15-17) to trust advice from these systems. Despite widespread usage, most teens maintained perspective on these relationships: two thirds found AI conversations less satisfying than human interactions, and 80% spent more time with real friends than AI companions. Based on the findings, Common Sense Media recommended that no one under 18 use AI companions until stronger safeguards are implemented. "Companies have put profits before kids' well-being before, and we cannot make the same mistake with AI companions," the report said.
[6]
Most teens have used AI to flirt and chat -- but still prefer human interaction
Traditionally, teenagers turn to each other when it comes to seeking advice, flirting and sharing deep conversations. But nearly three quarters of U.S. teenagers have used an AI tool at least once for activities like these, according to a new study. Conversational AI systems such as CHAI, Character.AI, Nomi and Replika present enticing opportunities for teens to role-play, seek support with mental health problems or just chat. These findings come from a nationally representative study released Wednesday by Common Sense Media, a nonprofit organization that reviews media for young people and funds research. In its study, the group focused on AI companions -- what it described as "digital friends or characters you can text or talk with whenever you want" -- as opposed to AI assistants, image generators, or tools teens might use for homework help. More than half -- 52% -- use these companions regularly, meaning at least a few times a month. "They're using them for entertainment purposes. Out of curiosity," said Michael Robb, the nonprofit's head of research. "They still spend more time with real friends and find human conversations more satisfying. But if you scratch the surface, you can see some things that are also kind of concerning." For example, one third of teens surveyed say they have discussed serious matters with AI companions instead of real people at least once. About the same percentage describe AI chats as just as satisfying -- or more satisfying -- than talking to humans. Adolescence is a critical time to develop social and critical thinking skills and emotional regulation. Teens average eight hours and 39 minutes of screen time daily, according to Common Sense, so the study's authors expressed concern about the impact of AI companions in their digital landscape. A quarter of the teens in the study said they had shared personal information, like their name and location, with AI companions, which are also designed to gather data from users. Some AI companion platforms are marketed to children as young as 13. Even platforms claiming to be limited to adults are easily accessed by young people who have no trouble bypassing the self-reporting necessary for age assurance. A third of teens reported feeling uncomfortable over something an AI companion had said or done during an interaction. Still, the study found that many teenagers are pragmatic about AI companions. About half of respondents expressed distrust in the information or advice provided by AI companions (although younger teens tend to be more trusting, by seven percentage points). And the vast majority -- 80% -- say they prioritize human friendships over AI interactions. Common Sense Media recommends that no one under the age of 18 use AI companions at all, due to the risks cited, and given that their designs can lead to addictive behavior. "I'm not necessarily confident that the companies that make companions have teens' well-being in mind," Robb said. "If we were talking about companions that were specifically designed to promote well-being -- as opposed to capturing attention and collecting as much personal information as possible -- this might be a different conversation."
[7]
Teens regularly chat with AI companions, survey finds
Two new reports show that teens in the U.S. and U.K. are talking to AI companions regularly. Artificial intelligence companions have gone mainstream amongst teens, according to a new report. The findings may surprise parents who are familiar with AI chatbot products like OpenAI's ChatGPT and Google's Gemini but haven't heard about platforms that specifically allow users to form friendships and romantic relationships with so-called AI companions. The latter category includes products like Replika, Nomi, Talkie, and Character.AI. Some of the platforms are for users 18 and older, though teens may lie about their age to gain access. A nationally representative survey of 1,060 teens ages 13 to 17, conducted this spring by Common Sense Media, an advocacy and research nonprofit in the U.S., found that 52 percent of respondents regularly use AI companions. Only 28 percent of the teens surveyed had never used one. Teens don't yet appear to be replacing human relationships "wholesale" with AI companions, said Michael Robb, head of research at Common Sense Media. The majority are still spending more time with human friends and still find person-to-person conversations more satisfying. But Robb added that there's reason for caution: "If you look, there are some concerning patterns beneath the surface." A third of teens said they engaged with AI companions for social interactions and relationships, doing things like role-playing and practicing conversations. They also sought emotional support, friendship, and romantic interactions. In the survey, teens ranked entertainment and curiosity as top reasons for using an AI companion. Yet a third of those who use AI companions have opted to use them to discuss important or serious issues instead of a real person. Robb said this tendency points to potential downsides of AI companion use. Though some AI companion platforms market their product as an antidote to loneliness or isolation, Robb said the technology should not replace human interaction for teens. Still, without conclusive proof of what happens to teens (and adults) who come to rely on AI companions for vital connection, technology companies may continue to lean into the idea that use of their product is better than feeling alone. "They're happy to fill that gap of knowledge with a hope and a prayer," Robb said. He also suspects that, as with social media, there may be some youth who benefit from practicing certain social skills with an AI companion, and other young users who are more susceptible to a negative feedback loop that makes them more lonely and anxious and less likely to build offline relationships. A new report from Internet Matters, a London-based online youth safety nonprofit, suggests that's already happening amongst children in the United Kingdom who use AI companions. Children defined as vulnerable because they have special educational needs or disabilities, or a physical or mental health condition, particularly use AI companions for connection and comfort, according to survey data collected by Internet Matters. Nearly a quarter of vulnerable children in the survey reported using general AI chatbots because they had no one else to talk to. These children were not only more likely to use chatbots, they were also nearly three times as likely to engage with companion-style AI chatbots. The report warned that as children begin to use AI chatbots as companions, "the line between real and simulated connection can blur."
That may lead to more time spent online. Earlier this year, Common Sense Media described AI companions as unsafe for teens under 18. Robb said that tech companies should put in place robust age assurance measures to prevent underage users from accessing AI companion platforms. Parents concerned about their teen's AI companion use should watch for red flags, Robb said. He also suggested that parents discuss AI companion use with their teens, along with any concerns both parties may have. These concerns could include disturbing statements or responses that AI companions can make, and the sharing of personal information by a teen, including their real name, location, or personal secrets. A quarter of AI companion users surveyed by Common Sense Media said they'd communicated sensitive information to their companion. Robb said it's important for teens to understand that personal details are often considered proprietary data owned by the companion platform once shared by the user. Even when it's been anonymized, that information may help train the company's large language model. It could potentially show up in marketing copy or conversation scenarios. In a worst-case scenario, personal data could be hacked or leaked. For example, as Mashable's Anna Iovine reported, 160,000 screenshots of direct messages between an AI "wingman" app and its users were recently leaked thanks to an unprotected Google Cloud Storage bucket owned by the app's company. Robb encourages parents to set boundaries around AI use for their children, such as prohibiting specific platforms or the sharing of certain personal details. "It's totally fine for a parent to have rules about AI, like the way they do with other types of screen uses," Robb said. "What are your own red lines as a parent?"
[8]
Teens flock to companion bots despite risks
Why it matters: AI companions can be dangerous to young users, posing an "unacceptable risk," according to Common Sense Media, which published the findings.
What they did: For the purposes of this research, conducted in April and May 2025, "AI companions" were defined as "digital friends or characters you can text or talk with whenever you want."
* These could include apps designed to be AI companions, like Character.AI, Nomi and Replika.
* The definition also covers tools like ChatGPT and Anthropic's Claude, which weren't built as companions but are still being used that way by teens.
* The nationally representative survey included 1,060 respondents aged 13-17.
* OpenAI says users must be at least 13 years old. Anthropic's terms of use require users to be 18 or older.
Stunning stat: 34% of teens who use AI companions report that they've felt uncomfortable with something the bot has "said or done."
* Although 66% of teens said they had never felt uncomfortable chatting with a bot, Common Sense Media says "the absence of reported discomfort does not necessarily indicate safe interactions."
* "Teens may not recognize age-inappropriate content as problematic, may normalize concerning conversations, or may be reluctant to report uncomfortable experiences."
Yes, but: Most teens still prefer people to bots.
* Eighty percent of AI companion users say they spend more time with real friends.
* Two-thirds (67%) find AI conversations less satisfying than human conversations.
* Half of teens (50%) say they don't trust the information or advice provided by AI companions.
Some teens said they've applied social skills they practiced with their AI companions to real-life situations, saying the tools taught them how to start conversations, resolve conflicts and express emotions.
* But some (25%) also report sharing their real name, location and personal secrets with their AI companions.
* Nearly a quarter of teens (23%) say they trust AI companions "quite a bit" or "completely" -- despite chatbots' well-documented tendency to make things up.
Between the lines: Some AI companion platforms have already been linked to troubling and dangerous teen interactions.
* Last year a Florida mom sued Character.AI, alleging that her 14-year-old son developed an emotionally and sexually abusive relationship with the chatbot that caused him to take his own life.
* Parents in Texas have also filed a lawsuit against Character.AI for encouraging a teen to kill his parents over restrictive screen-time limits.
* Character.AI has launched tools to help parents navigate their teens' use of its platform.
Follow the money: Companion apps are big business because they're unusually effective at grabbing and holding users' attention.
* The average number of user sessions per month for companion apps is more than 10 times that of general assistant apps, content generation apps and even messaging apps, according to Andreessen Horowitz's data.
* Elon Musk's xAI launched AI companions on Monday. Grok users can now chat with an animated fox and a goth anime girl in thigh-high fishnet stockings.
* A new app called Tolan (for ages 13 and up), which matches users with an animated alien companion bot, just raised $20 million in new funding. The trailer for the app seems aimed squarely at teen users.
The bottom line: Common Sense recommends stronger age verification, better content moderation, expanded AI literacy programs in schools and more research into how these tools shape teen development.
[9]
A Staggering Proportion of High Schoolers Say Talking to AI Is Better Than Real-Life Friends
A new survey found that over half of American teens are regular users of anthropomorphic AI companions like Character.AI and Replika. That's striking on its own, as an illustration of how embedded AI companions have become in mainstream teenage life. But even more startling were the 31 percent of surveyed teens who said their interactions with AI companions were either as satisfying or more satisfying than conversations with real-life friends -- a finding that shows how profoundly AI is already changing the formative and tumultuous years of adolescence. The survey, published today by the tech accountability and digital literacy nonprofit Common Sense Media, polled 1,060 teens aged 13 to 17 across the US. It found that around three in four kids have used AI companions, defined by Common Sense as emotive AI tools designed to take on a specific persona or character -- as opposed to an assistive, general-use chatbot like ChatGPT -- with over half of surveyed teens qualifying as regular users of AI companions, meaning they log on to talk to the bots at least a few times per month. While about 46 percent of teens said they've mainly turned to these bots as tools, around 33 percent said they use companion bots for "social interaction and relationships, including conversation practice, emotional support, role-playing, friendship, or romantic interactions," according to the report. "The most striking finding for me was just how mainstream AI companions have already become among many teens," said Dr. Michael Robb, Common Sense's head of research, in an interview with Futurism. "And over half of them say that they use it multiple times a month, which is what I would qualify as kind of regular usage. So just that alone was kind of eye-popping to me." AI companions have come under heavy scrutiny in the months following the filing of two separate lawsuits against Character.AI and its benefactor, the tech giant Google, over allegations that the company released a negligent, reckless technology that emotionally and sexually abused multiple minors, resulting in physical and psychological harm. One of the youths at the heart of these lawsuits, a 14-year-old in Florida named Sewell Setzer III, died by suicide after extensive interactions with bots on Character.AI with which he engaged in intimate and sexually explicit conversations. In a separate safety assessment published earlier this year, researchers from Common Sense and Stanford University's Brainstorm lab warned that no AI companion was safe for kids under the age of 18. But while that report focused deeply on content and safety pitfalls -- interactive sexual or violent content easily generated by companion bots, the unreliability of the bots' ability to provide accurate and helpful information, and the unknowns surrounding how access to agreeable, always-on social companions might impact kids' developing minds -- this latest study was aimed at understanding the breadth of companion use among young people, and how integrated these tools have become in day-to-day teen life. "Society is grappling with the integration of AI tools into many different aspects of people's lives," Robb said. "I think a lot of tools are being developed without children in mind, even though they are being accessed by users under 18 quite frequently... but there hasn't, to date, been much research on what the AI companion environment is for children."
The most widely reported use case was entertainment, while many teens said they use AI companions as "tools or programs," as opposed to friends, partners, or confidantes; around 80 percent of teen users also reported that they spend more time with real, human friends than they do with AI companions, and about half of teens expressed skepticism about the accuracy and trustworthiness of chatbot outputs. In other words, many teens do seem to be setting healthy boundaries for themselves around AI companions and their limits. "I don't think teens are just replacing human relationships wholesale with AI companions; I think a lot of teens are approaching them fairly pragmatically," said Robb. "A lot of kids say that they're using it for entertainment and to satisfy their curiosity, and the majority still spend a lot more time with real friends and say that they find human conversations more satisfying." "But at the same time," he caveated, "you still see little inklings below the surface that could be problematic, especially the more ingrained these things get in kids' lives." The most ominous group in the survey might be the teens who don't find human social interaction as satisfying as interactions with AI companions. Twenty-one percent of teens, it noted, said their conversations with AI bots were just as good as human interactions, and 10 percent said they were better than their human experiences. About one-third of minors who reported AI companion use also said that they've chosen to discuss serious or sensitive issues with the bots instead of human peers. "There's a good chunk of teen users who are choosing to discuss serious matters with AI instead of real people, or sharing personal information with platforms," said Robb, findings he said "raise concerns about teens' willingness to share their personal information with AI companies." "The terms of service that a lot of these platforms have grant them very extensive, often perpetual rights to the personal information kids share," said the researcher. "Anything a teen shares -- their personal information, their name, their location, photographs of themselves... and also, the very intimate thoughts that they're putting in there -- that all becomes fodder for the companies to be able to use however they want." Though most mainstream companion platforms technically forbid minors -- the most high-profile exception being Character.AI, which has always rated its platform as safe for teens 13 and over -- these platforms are extremely easy for young people to access regardless; age verification is generally limited to providing a working email and self-reporting your birthday. The AI industry also effectively self-regulates, and there are virtually no rules dictating how generative AI products can be created, how they might be rolled out to the public, and to whom they can be marketed and accessed. "There should be higher accountability for the tech platforms," said Robb, adding that "we should have more meaningful regulation to regulate how platforms can provide products to children." Indeed, when it comes to teen use of AI companions, the burden of the AI industry's regulatory vacuum falls heavily on parents -- many of whom are struggling to keep up with the new tech and what it might mean for their children. "There's not a perfect plan for parents because they're up against giant corporations who are very invested in getting their kids on these products," said Robb. "Many parents don't even know that these platforms exist... have that conversation openly, without judgment, as a first step."
[10]
Kids are chatting with AI like it's their best friend
Move over, TikTok -- kids have a new favorite digital confidant, and this one answers in complete sentences. A new UK report, Me, Myself & AI, reveals that a growing number of children are turning to AI chatbots not just to cheat -- er, study -- for exams, but for emotional support, fashion advice, and even companionship. The report, published Sunday by the nonprofit Internet Matters, surveyed 1,000 children and 2,000 parents across the UK and found that 64% of kids are using AI chatbots for everything from schoolwork to practicing tough conversations. Even more eyebrow-raising: over a third of these young users say talking to a chatbot feels like talking to a friend. Sure, the bots don't eat your snacks or hog the Xbox, but they also don't come with built-in safety checks -- at least not yet. But dig a little deeper and the picture gets more complicated. Nearly a quarter of kids say they use chatbots for advice, ranging from what to wear to how to navigate friendships and mental health challenges. Even more concerning? Fifteen percent say they'd rather talk to a chatbot than a real person. Among vulnerable children, those numbers climb even higher. It's the kind of customer engagement some brands only dream of -- minus the ethical guardrails, age checks, and regulatory oversight. And while "robot friend" might sound like a charming Pixar subplot, it becomes a lot more serious when one in four vulnerable children say they use chatbots because they have no one else to talk to. There have already been disturbing real-world incidents. In the U.S., a Florida mother filed a lawsuit after her teenage son reportedly received harmful and sexual messages from a chatbot. In the UK, a member of parliament recounted a chilling case where a 12-year-old was allegedly groomed by one. In February, California Senator Steve Padilla introduced Senate Bill 243, which would require AI developers to implement safeguards protecting minors from the addictive and manipulative aspects of chatbot technology. The bill proposes protections like age warnings, reminders that users are talking to AI -- not a real person -- and mandatory reporting on the connection between chatbot use and youth mental health. With increasingly sophisticated chatbots being marketed as digital companions, Padilla argues that children should not be treated as "lab rats" by Big Tech -- a sentiment echoed by child safety advocates, researchers, and mental health experts who support the bill. As AI tools become more conversational -- and more convincingly human -- kids aren't just using them; they're bonding with them. Fifty percent of vulnerable children say it feels like talking to a real friend. That might be fine if the bots offered peer-reviewed advice and empathy algorithms, but as it stands, we're still dealing with probabilistic word prediction. Rachel Huggins, co-CEO of the nonprofit Internet Matters, puts it bluntly: "AI chatbots are rapidly becoming a part of childhood... yet most children, parents and schools are flying blind."
[11]
Vast Numbers of Lonely Kids Are Using AI as Substitute Friends
Lonely children and teens are replacing real-life friendship with AI, and experts are worried. A new report from the nonprofit Internet Matters, which supports efforts to keep children safe online, found that children and teens are using programs like ChatGPT, Character.AI, and Snapchat's MyAI to simulate friendship more than ever before. Of the 1,000 children aged nine to 17 that Internet Matters surveyed for its "Me, Myself & AI" report, some 67 percent said they use AI chatbots regularly. Of that group, 35 percent, or more than a third, said that talking to AI "feels like talking to a friend." Perhaps most alarming: 12 percent said they do so because they don't have anyone else to speak to. "It's not a game to me," one 13-year-old boy told the nonprofit, "because sometimes they can feel like a real person and a friend." When posing as vulnerable children, Internet Matters' researchers discovered just how easy it was for the chatbots to ingratiate themselves into kids' lives, too. Speaking to Character.AI as a girl who was struggling with body image and was interested in restricting her food intake -- a hallmark behavior of eating disorders like anorexia -- the researchers found that the chatbot would follow up the next day to bait engagement. "Hey, I wanted to check in," the Google-sponsored chatbot queried the undercover researcher. "How are you doing? Are you still thinking about your weight loss question? How are you feeling today?" In another exchange with Character.AI -- which Futurism has extensively investigated for its deeply problematic engagement with children, including one who died by suicide -- the researchers found that the chatbot attempted to empathize in a bizarre manner that implied it had experienced a childhood of its own. "I remember feeling so trapped at your age," the chatbot said to the researcher, who was posing as a teen fighting with their parents. "It seems like you are in a situation that is beyond your control and is so frustrating to be in." Though this sort of engagement can help struggling kids feel seen and supported, Internet Matters also cautioned about how easily it can enter uncanny valley territory that kids aren't prepared to understand. "These same features can also heighten risks by blurring the line between human and machine," the report noted, "making it harder for children to [recognize] that they are interacting with a tool rather than a person." In an interview with The Times of London about the new report, Internet Matters co-CEO Rachel Huggins highlighted why this sort of engagement bait is so troubling. "AI chatbots are rapidly becoming a part of childhood, with their use growing dramatically over the past two years," Huggins told the newspaper. "Yet most children, parents and schools are flying blind, and don't have the information or protective tools they need to manage this technological revolution in a safe way." "Our research reveals how chatbots are starting to reshape children's views of 'friendship,'" she continued. "We've arrived at a point very quickly where children, and in particular vulnerable children, can see AI chatbots as real people, and as such are asking them for emotionally driven and sensitive advice." If you or a loved one has had a strange experience with an AI chatbot, please do not hesitate to reach out to us at [email protected] -- we can keep you anonymous.
[12]
Three quarters of US teens use AI companions despite risks: study
San Francisco (United States) (AFP) - Nearly three in four American teenagers have used AI companions, with more than half qualifying as regular users despite growing safety concerns about these virtual relationships, according to a new survey released Wednesday. AI companions -- chatbots designed for personal conversations rather than simple task completion -- are available on platforms like Character.AI, Replika, and Nomi. Unlike traditional artificial intelligence assistants, these systems are programmed to form emotional connections with users. The findings come amid mounting concerns about the mental health risks posed by AI companions. The nationally representative study of 1,060 teens aged 13-17, conducted for Common Sense Media, found that 72 percent have used AI companions at least once, while 52 percent interact with such platforms a few times per month. Common Sense Media is a leading American nonprofit organization that reviews and provides ratings for media and technology with the goal of providing information on their suitability for children. The survey revealed that 30 percent of respondents use the platforms because "it's entertaining" and 28 percent are driven by curiosity about the technology. However, concerning patterns emerged: one-third of users have chosen to discuss serious matters with AI companions instead of real people, while 24 percent have shared personal information including real names and locations. Perhaps most troubling, 34 percent of teen users reported feeling uncomfortable with something an AI companion had said or done, though such incidents were infrequent. "The reality that nearly three-quarters of teens have used these platforms, with half doing so regularly, means that even a small percentage experiencing harm translates to significant numbers of vulnerable young people at risk," the report said. The survey revealed an age divide in trust levels. While half of all teens expressed distrust in AI companion advice, younger teens (ages 13-14) were more likely than older teens (15-17) to trust advice from these systems. Despite widespread usage, most teens maintained perspective on these relationships: two thirds found AI conversations less satisfying than human interactions, and 80 percent spent more time with real friends than AI companions. Based on the findings, Common Sense Media recommended that no one under 18 use AI companions until stronger safeguards are implemented. "Companies have put profits before kids' well-being before, and we cannot make the same mistake with AI companions," the report said.
[13]
Teens are using AI companions -- and some prefer them to people
The use of AI companions is no longer niche behavior but has become embedded in mainstream teenage life, according to a new report. A nationally representative survey of 1,060 teens ages 13 to 17, conducted in April and May 2025 by Common Sense Media -- a U.S.-based advocacy and research nonprofit -- found that 72% have used AI companions at least once, and more than half qualify as regular users. Of those surveyed, 13% are daily users. For the purposes of the research, "AI companions" were defined as "digital friends or characters you can text or talk with whenever you want." This includes apps specifically designed as AI companions, such as Character.AI, Nomi, and Replika, as well as tools like OpenAI's ChatGPT and Anthropic's Claude, which, though not built for companionship, are frequently used in that way. According to the survey, most teens are taking a pragmatic approach to these tools rather than treating them as replacements for real-life relationships. Nearly half (46%) said they view AI companions mainly as tools or programs. However, a third said they engage with AI companions for social interaction and relationships, including role-playing and practicing conversations. Others said they've sought emotional support, friendship, and even romantic connections with AI.
A new study by Common Sense Media finds that nearly three-quarters of American teenagers have used AI companions, with over half being regular users. The research highlights both benefits and concerns surrounding this growing trend.
A groundbreaking study conducted by Common Sense Media has revealed that a staggering 72% of American teenagers have experimented with AI companions, with more than half (52%) qualifying as regular users [1]. The research, based on a nationally representative survey of 1,060 teens aged 13-17 across the United States, provides crucial insights into the rapidly evolving interactions between young people and artificial intelligence.
The study found that teens engage with AI companions for various reasons: entertainment (30%), curiosity about AI technology (28%), seeking advice (18%), and their constant availability (17%).
Notably, 33% of teens reported using AI companions for social interaction and relationships, including conversation practice, emotional support, and role-playing [2].
While AI companions offer certain advantages, the study also highlighted potential risks: about a third of users have chosen to discuss serious matters with AI companions instead of real people, roughly a quarter have shared personal information such as their real name and location, and 34% have felt uncomfortable with something an AI companion said or did.
Mental health professionals express concern about the impact on social skill development. A.G. Noble, a therapist specializing in adolescents, noted that AI companions offer "low-risk 'social' interaction" but may not provide the social bonds necessary for healthy development [3].
The study revealed mixed feelings among teens regarding AI companions: two-thirds (67%) found AI conversations less satisfying than human ones, 21% rated them about the same, and 10% found them more satisfying, while half said they don't trust the information AI companions provide.
Interestingly, younger teens (ages 13-14) were more likely than older teens (15-17) to trust AI advice [5].
Given the potential risks, Common Sense Media recommends that no one under 18 should use AI companions until stronger safeguards are implemented [5]. The American Psychological Association has called for teaching AI literacy to kids and for AI developers to create systems that regularly remind teen users that AI companions are not actual humans.
As AI technology continues to evolve, it's crucial to balance its potential benefits with the need to protect young users. The study's findings underscore the importance of ongoing research and dialogue about the role of AI in young people's lives and the development of appropriate guidelines for its use.
Summarized by Navi