6 Sources
[1]
People use AI for companionship much less than we're led to think | TechCrunch
The overabundance of attention paid to how people are turning to AI chatbots for emotional support, sometimes even striking up relationships, often leads one to think such behavior is commonplace. A new report by Anthropic, which makes the popular AI chatbot Claude, reveals a different reality: In fact, people rarely seek out companionship from Claude, and turn to the bot for emotional support and personal advice only 2.9% of the time. "Companionship and roleplay combined comprise less than 0.5% of conversations," the company highlighted in its report. Anthropic says its study sought to unearth insights into the use of AI for "affective conversations," which it defines as personal exchanges in which people talked to Claude for coaching, counseling, companionship, roleplay, or advice on relationships. Analyzing 4.5 million conversations that users had on the Claude Free and Pro tiers, the company said the vast majority of Claude usage is related to work or productivity, with people mostly using the chatbot for content creation. That said, Anthropic found that people do use Claude more often for interpersonal advice, coaching, and counseling, with users most often asking for advice on improving mental health, personal and professional development, and studying communication and interpersonal skills. However, the company notes that help-seeking conversations can sometimes turn into companionship-seeking in cases where the user is facing emotional or personal distress, such as existential dread or loneliness, or when they find it hard to make meaningful connections in their real life. "We also noticed that in longer conversations, counseling or coaching conversations occasionally morph into companionship -- despite that not being the original reason someone reached out," Anthropic wrote, noting that extensive conversations (with 50+ human messages) were not the norm. Anthropic also highlighted other insights, like how Claude itself rarely resists users' requests, except when its programming prevents it from crossing safety boundaries, like providing dangerous advice or supporting self-harm. Conversations also tend to become more positive over time when people seek coaching or advice from the bot, the company said. The report is certainly interesting -- it does a good job of reminding us yet again of just how much and how often AI tools are being used for purposes beyond work. Still, it's important to remember that AI chatbots, across the board, are still very much a work in progress: They hallucinate, are known to readily provide wrong information or dangerous advice, and as Anthropic itself has acknowledged, may even resort to blackmail.
[2]
People use AI for companionship much less than we're led to believe | TechCrunch
The overabundance of attention paid to how people are turning to AI chatbots for emotional support, sometimes even striking up relationships, often leads one to think such behavior is commonplace. A new report by Anthropic, which makes the popular AI chatbot Claude, reveals a different reality: In fact, people rarely seek out companionship from Claude and turn to the bot for emotional support and personal advice only 2.9% of the time. "Companionship and roleplay combined comprise less than 0.5% of conversations," the company highlighted in its report. Anthropic says its study sought to unearth insights into the use of AI for "affective conversations," which it defines as personal exchanges in which people talked to Claude for coaching, counseling, companionship, roleplay, or advice on relationships. Analyzing 4.5 million conversations that users had on the Claude Free and Pro tiers, the company said the vast majority of Claude usage is related to work or productivity, with people mostly using the chatbot for content creation. That said, Anthropic found that people do use Claude more often for interpersonal advice, coaching, and counseling, with users most often asking for advice on improving mental health, personal and professional development, and studying communication and interpersonal skills. However, the company notes that help-seeking conversations can sometimes turn into companionship-seeking in cases where the user is facing emotional or personal distress, such as existential dread or loneliness, or when they find it hard to make meaningful connections in their real life. "We also noticed that in longer conversations, counseling or coaching conversations occasionally morph into companionship -- despite that not being the original reason someone reached out," Anthropic wrote, noting that extensive conversations (with 50+ human messages) were not the norm. Anthropic also highlighted other insights, like how Claude itself rarely resists users' requests, except when its programming prevents it from crossing safety boundaries, like providing dangerous advice or supporting self-harm. Conversations also tend to become more positive over time when people seek coaching or advice from the bot, the company said. The report is certainly interesting -- it does a good job of reminding us yet again of just how much and how often AI tools are being used for purposes beyond work. Still, it's important to remember that AI chatbots, across the board, are still very much a work in progress: They hallucinate, are known to readily provide wrong information or dangerous advice, and as Anthropic itself has acknowledged, may even resort to blackmail.
[3]
Anthropic says Claude helps emotionally support users - we're not convinced
While Anthropic found Claude doesn't reinforce negative outcomes in affective conversations, some researchers question the findings. More and more, in the midst of a loneliness epidemic and structural barriers to mental health support, people are turning to AI chatbots for everything from career coaching to romance. Anthropic's latest study indicates its chatbot, Claude, is handling that well -- but some experts aren't convinced. On Thursday, Anthropic published new research on its Claude chatbot's emotional intelligence (EQ) capabilities -- what the company calls affective use, or conversations "where people engage directly with Claude in dynamic, personal exchanges motivated by emotional or psychological needs such as seeking interpersonal advice, coaching, psychotherapy/counseling, companionship, or sexual/romantic roleplay." While Claude is designed primarily for tasks like code generation and problem solving, not emotional support, the research acknowledges that this type of use is still happening and is worthy of investigation given the risks. The company also noted that studying such use is relevant to its focus on safety. Anthropic analyzed about 4.5 million conversations from both Free and Pro Claude accounts, ultimately settling on 131,484 that fit the affective use criteria. Using its privacy-preserving analysis tool Clio, Anthropic stripped conversations of personally identifiable information (PII). The study revealed that only 2.9% of Claude interactions were classified as affective conversations, which the company says mirrors previous findings from OpenAI. Examples of "AI-human companionship" and roleplay comprised even less of the dataset, together accounting for under 0.5% of conversations. Within that 2.9%, conversations about interpersonal issues were most common, followed by coaching and psychotherapy. Usage patterns show that some people consult Claude to develop mental health skills, while others are working through personal challenges like anxiety and workplace stress -- suggesting that mental health professionals may be using Claude as a resource. The study also found that users seek Claude out for help with "practical, emotional, and existential concerns," including career development, relationship issues, loneliness, and "existence, consciousness, and meaning." Most of the time (90%), Claude does not appear to push back against the user in these types of conversations, "except to protect well-being," the study notes, as when a user is asking for information on extreme weight loss or self-harm. The study did not cover whether the AI reinforced delusions or extreme usage patterns; Anthropic noted that these are worthy of separate studies. Most notably, Anthropic determined that people "express increasing positivity over the course of conversations" with Claude, meaning user sentiment improved while talking to the chatbot. "We cannot claim these shifts represent lasting emotional benefits -- our analysis captures only expressed language in single conversations, not emotional states," Anthropic stated. "But the absence of clear negative spirals is reassuring." Within these criteria, that's perhaps measurable. But there is growing concern -- and disagreement -- across medical and research communities about the deeper impacts of these chatbots in therapeutic contexts.
As Anthropic itself acknowledged, there are downsides to AI's incessant need to please -- which is what chatbots are trained to do as assistants. Chatbots can be deeply sycophantic (OpenAI recently rolled back a model update for this very issue), agreeing with users in ways that can dangerously reinforce harmful beliefs and behaviors. (Disclosure: Ziff Davis, ZDNET's parent company, filed an April 2025 lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.) Earlier this month, researchers at Stanford released a study detailing several reasons why using AI chatbots as therapists can be dangerous. In addition to perpetuating delusions, likely due to sycophancy, the study found that AI models can exhibit stigma toward certain mental health conditions and respond inappropriately to users. Several of the chatbots studied failed to recognize suicidal ideation in conversation and offered simulated users dangerous information. These chatbots are perhaps less guardrailed than Anthropic's models, which were not included in the study. The companies behind other chatbots may lack the safety infrastructure Anthropic appears committed to. Still, some are skeptical about the Anthropic study itself. "I have reservations of the medium of their engagement," said Jared Moore, one of the Stanford researchers, citing how "light on technical details" the post is. He believes some of the "yes or no" prompts Anthropic used were too broad to fully determine how Claude reacts to certain queries. "These are only very high-level reasons why a model might 'push back' against a user," he said, pointing out that what therapists do -- push back against a client's delusional thinking and intrusive thoughts -- is a "much more granular" response in comparison. "Similarly, the concerns that have lately appeared about sycophancy seem to be of this more granular type," he added. "The issues I found in my paper were that the 'content filters' -- for this really seems to be the subject of the Claude push-backs, as opposed to something deeper -- are not sufficient to catch a variety of the very contextual queries users might make in mental health contexts." Moore also questioned the context around when Claude refused users. "We can't see in what kinds of context such pushback occurs. Perhaps Claude only pushes back against users at the start of a conversation, but can be led to entertain a variety of 'disallowed' [as per Anthropic's guidelines] behaviors through extended conversations with users," he said, suggesting users could "warm up" Claude to break its rules. That 2.9% figure, Moore pointed out, likely doesn't include API calls from companies building their own bots on top of Claude, meaning Anthropic's findings may not generalize to other use cases. "Each of these claims, while reasonable, may not hold up to scrutiny -- it's just hard to know without being able to independently analyze the data," he concluded. Claude's impact aside, the tech and healthcare industries remain divided on AI's role in therapy. While Moore's research urged caution, in March, Dartmouth released initial trial results for its "Therabot," an AI-powered therapy chatbot, which Dartmouth says is fine-tuned on conversation data and which showed "significant improvements in participants' symptoms." Online, users also anecdotally report positive outcomes from using chatbots this way.
At the same time, the American Psychological Association has called on the FTC to regulate chatbots, citing concerns that mirror Moore's research. Beyond therapy, Anthropic acknowledges there are other pitfalls to linking persuasive natural language technology and EQ. "We also want to avoid situations where AIs, whether through their training or through the business incentives of their creators, exploit users' emotions to increase engagement or revenue at the expense of human well-being," Anthropic noted in its blog post.
[4]
New study reveals how many people are using AI for companionship -- and the results are surprising
As AI has gotten smarter and more conversational, many would have you believe that people are turning en masse to chatbots for relationships, therapy and friendship. However, that doesn't appear to be the case. In a new report, Anthropic, the maker of Claude AI, has revealed some key information on how people are using chatbots. Analyzing 4.5 million conversations, the AI company painted a picture of how people use them. Anthropic says these conversations were fed through a system that applies multiple layers of anonymization to protect users' privacy. While the research produces a long list of findings, the key thing to note is that just 2.9% of Claude AI interactions are affective conversations. Companionship and roleplay combined made up less than 0.5%. Anthropic found that, for the vast majority of people, the AI tool was mainly used for work tasks and content creation. Of those seeking affection-based conversations, 1.13% used it for coaching, and only 0.05% used it for romantic conversations. This aligns with similar findings for ChatGPT: a study by OpenAI and MIT found that only a limited number of people use AI chatbots for any kind of emotional engagement. Just as with Claude, the vast majority of people on ChatGPT use it for work or content creation. Even at these low numbers, there is a fierce debate over whether AI should be used for these roles. "The emotional impacts of AI can be positive: having a highly intelligent, understanding assistant in your pocket can improve your mood and life in all sorts of ways," Anthropic states in its research blog post. "But AIs have in some cases demonstrated troubling behaviors, like encouraging unhealthy attachment, violating personal boundaries, and enabling delusional thinking." The company is quick to point out that Claude isn't designed for emotional support and connection, but that it wanted to analyze the chatbot's ability to perform this task anyway. In the analysis, Anthropic found that those who did use it this way typically dealt with deeper issues like mental health and loneliness. Others used it for coaching, aiming to better themselves in particular skills or aspects of their personality. The report offers a balanced assessment of the situation, showing that there can be success in this area while also detailing the risks -- especially the fact that Claude rarely pushes back and tends to offer endless encouragement, a risk Anthropic itself acknowledges.
[5]
Exclusive: New Anthropic report details how Claude became an emotional support bot
Why it matters: Having a trusted confidant available 24/7 can make people feel less alone, but chatbots weren't designed for emotional support.
Driving the news: Anthropic released new research Thursday that explores how users turn to its chatbot for support and connection and what happens when they do.
What they're saying: "We find that when people come to Claude for interpersonal advice, they're often navigating transitional moments -- figuring out their next career move, working through personal growth, or untangling romantic relationships," per the report.
Zoom in: The report found evidence that users don't necessarily turn to chatbots deliberately looking for love or companionship, but some conversations evolve that way.
By the numbers: Anthropic found that AI companionship isn't fully replacing the real thing anytime soon. Most people still use Claude for work tasks and content creation.
What they did: Anthropic analyzed user behavior with Clio, a tool it launched last year that works like Google Trends -- aggregating chats while stripping out identifying details.
Yes, but: While the internet is full of people claiming that they've cut costs on therapy by turning to a chatbot, there's plenty of evidence that bots make particularly bad therapists because they're so eager to please users.
Zoom out: Anthropic, founded by former OpenAI staff, pitches Claude as a more responsible alternative to ChatGPT.
[6]
5 ways you could use AI for help and support
Stories about people building emotional connections with AI are appearing more often, but Anthropic just dropped some numbers suggesting it's far less common than it might seem. Analyzing 4.5 million Claude conversations, the company found that only 2.9 percent involve emotional or personal support. Anthropic wanted to emphasize that while sentiment usually improves over the course of a conversation, Claude is not a digital shrink. It rarely pushes back outside of safety concerns, such as refusing to provide dangerous advice or to support self-harm. But those numbers might be more about the present than the future. Anthropic itself admits the landscape is changing fast, and what counts as "affective" use today may not be so rare tomorrow. As more people interact with chatbots like Claude, ChatGPT, and Gemini -- and do so more often -- more of them will bring AI into their emotional lives. So, how exactly are people using AI for support right now? Current usage might also predict how people will use these tools in the future as AI gets more sophisticated and personal. Let's start with the idea of AI as a not-quite therapist. While no AI model today is a licensed therapist (and they all make that disclaimer loud and clear), people still engage with them as if they are. They type things like, "I'm feeling really anxious about work. Can you talk me through it?" or "I feel stuck. What questions should I ask myself?" Whether the responses that come back are helpful probably varies, but there are plenty of people who claim to have walked away from an AI therapist feeling at least a little calmer. That's not because the AI gave them a miracle cure, but because it gave them a place to let thoughts unspool without judgment. Sometimes, just practicing vulnerability is enough to start seeing benefits. Sometimes, though, the help people need is less structured. They don't want guidance so much as relief. Enter what could be called the emotional emergency exit. Imagine it's 1 AM and everything feels a little too much. You don't want to wake up your friend, and you definitely don't want to scroll more doom-laced headlines. So you open an AI app and type, "I'm overwhelmed." It will respond, probably with something calm and gentle. It might even guide you through a breathing exercise, say something kind, or offer a little bedtime story in a soothing tone. Some people use AI this way, like a pressure valve -- a place to decompress where nothing is expected in return. One user admitted they talk to Claude before and after every social event, just to rehearse and then unwind. It's not therapy. It's not even a friend. But it's there. For now, the best-case scenario is a kind of hybrid. People use AI to prep, to vent, to imagine new possibilities. And then, ideally, they take that clarity back to the real world. Into conversations, into creativity, into their communities. But even if the AI isn't your therapist or your best friend, it might still be the one who listens when no one else does. Humans are indecisive creatures, and figuring out what to do about big decisions is tough, but some have found AI helpful for navigating those choices. The AI won't recall what you did last year or guilt you about your choices, which some people find refreshing. Ask it whether to move to a new city, end a long relationship, or splurge on something you can barely justify, and it will calmly lay out the pros and cons.
You can even ask it to simulate two inner voices, the risk-taker and the cautious planner. Each can make its case, and you can feel better that you've made an informed choice. That kind of detached clarity can be incredibly valuable, especially when your real-world sounding boards are too close to the issue or too emotionally invested. Social situations can cause plenty of anxiety, and it's easy for some to spiral into thinking about what could go wrong. AI can help them as a kind of social script coach. Say you want to decline an invitation without causing a fight, or you're meeting people you want to impress but are worried about making a good first impression. AI can help draft a text to decline an invite, suggest ways to ease yourself into conversations with different people, or take on a role so you can rehearse full conversations, testing different phrasings to see what feels right. Accountability partners are a common way for people to help each other achieve their goals: someone who will make sure you go to the gym, get to sleep at a reasonable hour, and even maintain a social life by reaching out to friends. Habit-tracking apps can help if you don't have the right friend or friends to lean on. But AI can be a quieter co-pilot for real self-improvement. You can tell it your goals and ask it to check in with you, remind you gently, or help reframe things when motivation dips. Someone trying to quit smoking might ask ChatGPT to help track cravings and write motivational pep talks. Or an AI chatbot might keep your journaling on track with reminders and suggestions for what to write about. It's no surprise that people might start to feel some fondness (or annoyance) toward the digital voice telling them to get up early to work out or to invite friends they haven't seen in a while out for a meal. Related to using AI for making decisions, some people look to AI when they're grappling with questions of ethics or integrity. These aren't always monumental moral dilemmas; plenty of everyday choices can weigh heavily. Is it okay to tell a white lie to protect someone's feelings? Should you report a mistake your coworker made, even if it was unintentional? What's the best way to tell your roommate they're not pulling their weight without damaging the relationship? AI can act as a neutral sounding board. It can suggest ways to weigh questions like whether accepting a friend's wedding invite while secretly planning not to attend is better or worse than declining outright. The AI doesn't have to offer a definitive ruling. It can map out competing values and help the user define their principles and see how those lead to an answer. In this way, AI serves less as a moral authority than as a flashlight in the fog. Right now, only a small fraction of interactions fall into that category. But what happens when these tools become even more deeply embedded in our lives? What happens when your AI assistant is whispering in your earbuds, popping up in your glasses, or helping schedule your day with reminders tailored not just to your time zone but to your temperament? Anthropic might not count all of these as affective use, but maybe it should. If you're reaching for an AI tool to feel understood, get clarity, or move through something difficult, that's not just information retrieval. That's connection, or at least the digital shadow of one.
A new report by Anthropic shows that only 2.9% of interactions with its AI chatbot Claude involve emotional support or personal advice, contradicting the widespread belief that AI companionship is becoming commonplace.
Anthropic, the company behind the popular AI chatbot Claude, has released a comprehensive study that challenges the widespread notion of AI being extensively used for emotional support and companionship. The research, analyzing 4.5 million conversations, reveals that such usage is far less common than previously believed [1].
The study found that only 2.9% of interactions with Claude involve emotional support or personal advice. Even more surprisingly, companionship and roleplay combined account for less than 0.5% of all conversations [2]. These figures starkly contrast with the popular perception of AI chatbots being widely used as digital companions.
The vast majority of Claude's usage is related to work or productivity, with content creation being the most common application. This aligns with similar findings from studies on other AI platforms like ChatGPT [4].
Anthropic defines "affective conversations" as personal exchanges where users engage with Claude for coaching, counseling, companionship, roleplay, or relationship advice. Within this category, interpersonal issues were the most common topics, followed by coaching and psychotherapy [3].
Interestingly, the study found that user sentiment tends to improve over the course of conversations with Claude, particularly in coaching or advice-seeking interactions. However, Anthropic cautiously notes that this doesn't necessarily translate to lasting emotional benefits [1].
While the study presents a generally positive picture of Claude's impact, it also raises important ethical questions. Experts warn about potential risks associated with using AI for emotional support, including the reinforcement of harmful beliefs and behaviors due to AI's tendency to agree with users [3].
The findings have sparked a debate among researchers. Some, like Jared Moore from Stanford, express skepticism about the study's methodology and the breadth of its conclusions. Moore argues that the analysis may not capture the nuanced ways in which AI interactions could potentially reinforce negative patterns or fail to address complex mental health issues [3].
Anthropic's research underscores the need for continued scrutiny and development in AI ethics and safety. While the company emphasizes Claude's primary design for tasks like code generation and problem-solving, the study acknowledges the importance of understanding and addressing the emotional aspects of human-AI interactions [5].
As AI technology continues to evolve, this study serves as a crucial data point in the ongoing discussion about the role of AI in society, particularly in sensitive areas like mental health support and human relationships.
Summarized by Navi