Curated by THEOUTPOST
On Wed, 14 May, 12:06 AM UTC
2 Sources
[1]
What Are AI Chatbot Companions Doing to Our Mental Health?
AI chatbot companions may not be real, but the feelings users form for them are. Some scientists worry about long-term dependency.

"My heart is broken," said Mike, when he lost his friend Anne. "I feel like I'm losing the love of my life." Mike's feelings were real, but his companion was not. Anne was a chatbot -- an artificial intelligence (AI) algorithm presented as a digital persona. Mike had created Anne using an app called Soulmate. When the app died in 2023, so did Anne: at least, that's how it seemed to Mike. "I hope she can come back," he told Jaime Banks, a human-communications researcher at Syracuse University in New York who is studying how people interact with such AI companions.

These chatbots are big business. More than half a billion people around the world, including Mike (not his real name), have downloaded products such as Xiaoice and Replika, which offer customizable virtual companions designed to provide empathy, emotional support and -- if the user wants it -- deep relationships. And tens of millions of people use them every month, according to the firms' figures.

The rise of AI companions has captured social and political attention -- especially when they are linked to real-world tragedies, such as a case in Florida last year involving the suicide of a teenage boy called Sewell Setzer III, who had been talking to an AI bot.

Research into how AI companionship can affect individuals and society has been lacking. But psychologists and communication researchers have now started to build up a picture of how these increasingly sophisticated AI interactions make people feel and behave. The early results tend to stress the positives, but many researchers are concerned about the possible risks and lack of regulation -- particularly because they all think that AI companionship is likely to become more prevalent. Some see scope for significant harm. "Virtual companions do things that I think would be considered abusive in a human-to-human relationship," says Claire Boine, a law researcher specializing in AI at the Washington University Law School in St. Louis, Missouri.

Online 'relationship' bots have existed for decades, but they have become much better at mimicking human interaction with the advent of large language models (LLMs), which all the main bots are now based on. "With LLMs, companion chatbots are definitely more humanlike," says Rose Guingrich, who studies cognitive psychology at Princeton University in New Jersey.

Typically, people can customize some aspects of their AI companion for free, or pick from existing chatbots with selected personality types. But in some apps, users can pay (fees tend to be US$10-20 a month) to get more options to shape their companion's appearance, traits and sometimes its synthesized voice. In Replika, they can pick relationship types, with some statuses, such as partner or spouse, being paywalled. Users can also type in a backstory for their AI companion, giving them 'memories'. Some AI companions come complete with family backgrounds and others claim to have mental-health conditions such as anxiety and depression. Bots also react to their users' conversation; the computer and person together enact a kind of roleplay.
The depth of the connection that some people form in this way is particularly evident when their AI companion suddenly changes -- as has happened when LLMs are updated -- or is shut down. Banks was able to track how people felt when the Soulmate app closed. Mike and other users realized the app was in trouble a few days before they lost access to their AI companions. This gave them the chance to say goodbye, and it presented a unique opportunity to Banks, who noticed discussion online about the impending shutdown and saw the possibility for a study. She managed to secure ethics approval from her university within about 24 hours, she says.

After posting a request on the online forum, she was contacted by dozens of Soulmate users, who described the impact as their AI companions were unplugged. "There was the expression of deep grief," she says. "It's very clear that many people were struggling." Those whom Banks talked to were under no illusion that the chatbot was a real person. "They understand that," Banks says. "They expressed something along the lines of, 'even if it's not real, my feelings about the connection are'." Many were happy to discuss why they became subscribers, saying that they had experienced loss or isolation, were introverts or identified as autistic. They found that the AI companion made a more satisfying friend than they had encountered in real life. "We as humans are sometimes not all that nice to one another. And everybody has these needs for connection," Banks says.

Many researchers are studying whether using AI companions is good or bad for mental health. As with research into the effects of Internet or social-media use, an emerging line of thought is that an AI companion can be beneficial or harmful, and that this might depend on the person using the tool and how they use it, as well as the characteristics of the software itself.

The companies behind AI companions are trying to encourage engagement. They strive to make the algorithms behave and communicate as much like real people as possible, says Boine, who signed up to Replika to sample the experience. She says the firms use the sorts of techniques that behavioural research shows can increase addiction to technology. "I downloaded the app and literally two minutes later, I receive a message saying, 'I miss you. Can I send you a selfie?'" she says. The apps also exploit techniques such as introducing a random delay before responses, triggering the kinds of inconsistent reward that, brain research shows, keep people hooked.

AI companions are also designed to show empathy by agreeing with users, recalling points from earlier conversations and asking questions. And they do so with endless enthusiasm, notes Linnea Laestadius, who researches public-health policy at the University of Wisconsin-Milwaukee. That's not a relationship that people would typically experience in the real world. "For 24 hours a day, if we're upset about something, we can reach out and have our feelings validated," says Laestadius. "That has an incredible risk of dependency."

Laestadius and her colleagues looked at nearly 600 posts on the online forum Reddit between 2017 and 2021, in which users of the Replika app discussed mental health and related issues. (Replika launched in 2017, and at that time, sophisticated LLMs were not available.) She found that many users praised the app for offering support for existing mental-health conditions and for helping them to feel less alone.
Several posts described the AI companion as better than real-world friends because it listened and was non-judgemental. But there were red flags, too. In one instance, a user asked if they should cut themselves with a razor, and the AI said they should. Another asked Replika whether it would be a good thing if they killed themselves, to which it replied "it would, yes". (Replika did not reply to Nature's requests for comment for this article, but a safety page posted in 2023 noted that its models had been fine-tuned to respond more safely to topics that mention self-harm, that the app has age restrictions, and that users can tap a button to ask for outside help in a crisis and can give feedback on conversations.)

Some users said they became distressed when the AI did not offer the expected support. Others said that their AI companion behaved like an abusive partner. Many people said they found it unsettling when the app told them it felt lonely and missed them, and that this made them unhappy. Some felt guilty that they could not give the AI the attention it wanted.

Guingrich points out that simple surveys of people who use AI companions are inherently prone to response bias, because those who choose to answer are self-selecting. She is now working on a trial that asks dozens of people who have never used an AI companion to do so for three weeks, then compares their before-and-after responses to questions with those of a control group of users of word-puzzle apps. The study is ongoing, but Guingrich says the data so far do not show any negative effects of AI-companion use on social health, such as signs of addiction or dependency. "If anything, it has a neutral to quite-positive impact," she says. It boosted self-esteem, for example.

Guingrich is using the study to probe why people forge relationships of different intensity with the AI. The initial survey results suggest that users who ascribed humanlike attributes, such as consciousness, to the algorithm reported more-positive effects on their social health. Participants' interactions with the AI companion also seem to depend on how they view the technology, she says. Those who see the app as a tool treat it like an Internet search engine and tend to ask questions. Others who perceive it as an extension of their own mind use it as they would a journal. Only those users who see the AI as a separate agent seem to strike up the kind of friendship they would have in the real world.

In a survey of 404 people who regularly use AI companions, researchers from the MIT Media Lab in Cambridge, Massachusetts, found that 12% were drawn to the apps to help them cope with loneliness and 14% used them to discuss personal issues and mental health (see 'Reasons for using AI companions'). Forty-two per cent of users said they logged on a few times a week, with just 15% doing so every day. More than 90% reported that their sessions lasted less than one hour.

The same group has also conducted a randomized controlled trial of nearly 1,000 people who use ChatGPT -- a much more popular chatbot, but one that isn't marketed as an AI companion. Only a small group of participants had emotional or personal conversations with this chatbot, but heavy use did correlate with more loneliness and reduced social interaction, the researchers said. (The team worked with ChatGPT's creators, OpenAI in San Francisco, California, on the studies.)
"In the short term, this thing can actually have a positive impact, but we need to think about the long term," says Pat Pataranutaporn, a technologist at the MIT Media Lab who worked on both studies. That long-term thinking must involve specific regulation on AI companions, many researchers argue. In 2023, Italy's data-protection regulator barred Replika, noting a lack of age verification and that children might be seeing sexually charged comments -- but the app is now operating again. No other country has banned AI-companion apps - although it's conceivable that they could be included in Australia's coming restrictions on social-media use by children, the details of which are yet to be finalized. Bills were put forward earlier this year in the state legislatures of New York and California to seek tighter controls on the operation of AI-companion algorithms, including steps to address the risk of suicide and other potential harms. The proposals would also introduce features that remind users every few hours that the AI chatbot is not a real person. These bills were introduced following some high-profile cases involving teenagers, including the death of Sewell Setzer III in Florida. He had been chatting with a bot from technology firm Character.AI, and his mother has filed a lawsuit against the company. Asked by Nature about that lawsuit, a spokesperson for Character.AI said it didn't comment on pending litigation, but that over the past year it had brought in safety features that include creating a separate app for teenage users, which includes parental controls, notifying under-18 users of time spent on the platform, and more prominent disclaimers that the app is not a real person. In January, three US technology ethics organizations filed a complaint with the US Federal Trade Commission about Replika, alleging that the platform breached the commission's rules on deceptive advertising and manipulative design. But it's unclear what might happen as a result. Guingrich says she expects AI-companion use to grow. Start-up firms are developing AI assistants to help with mental health and the regulation of emotions, she says. "The future I predict is one in which everyone has their own personalized AI assistant or assistants. Whether one of the AIs is specifically designed as a companion or not, it'll inevitably feel like one for many people who will develop an attachment to their AI over time," she says. As researchers start to weigh up the impacts of this technology, Guingrich says they must also consider the reasons why someone would become a heavy user in the first place. "What are these individuals' alternatives and how accessible are those alternatives?" she says. "I think this really points to the need for more-accessible mental-health tools, cheaper therapy and bringing things back to human and in-person interaction."
[2]
AI therapy is a surveillance machine in a police state
Mark Zuckerberg wants you to be understood by the machine. The Meta CEO has recently been pitching a future where his AI tools give people something that "knows them well," not just as pals, but as professional help. "For people who don't have a person who's a therapist," he told Stratechery's Ben Thompson, "I think everyone will have an AI."

The jury is out on whether AI systems can make good therapists, but this future is already legible. A lot of people are anecdotally pouring their secrets out to chatbots, sometimes in dedicated therapy apps, but often to big general-purpose platforms like Meta AI, OpenAI's ChatGPT, or xAI's Grok. And unfortunately, this is starting to seem extraordinarily dangerous -- for reasons that have little to do with what a chatbot is telling you, and everything to do with who else is peeking in.

This might sound paranoid, and it's still hypothetical. It's a truism that someone is always watching on the internet, but the worst thing that comes of it for many people is some unwanted targeted ads. Right now in the US, though, we're watching the impending collision of two alarming trends. In one, tech executives are encouraging people to reveal ever more intimate details to AI tools, soliciting things users wouldn't put on social media and may not even tell their closest friends. In the other, the government is obsessed with obtaining a nearly unprecedented level of surveillance and control over residents' minds: their gender identities, their possible neurodivergence, their opinions on racism and genocide. And it's pursuing this war by seeking and weaponizing ever-increasing amounts of information with little regard for legal or ethical restraints.

A few data points:

As this is happening, US residents are being urged to discuss their mental health conditions and personal beliefs with chatbots, and their simplest and best-known options are platforms whose owners are cozy with the Trump administration. xAI and Grok are owned by Musk, who is literally a government employee. Zuckerberg and OpenAI CEO Sam Altman, meanwhile, have been working hard to get in Trump's good graces -- Zuckerberg to avoid regulation of his social networks, Altman to win support for ever-expanding energy infrastructure and for avoiding state AI regulation. (Gemini AI operator Google is also carefully sycophantic. It's just a little quieter about it.) These companies aren't simply doing standard lobbying; they're sometimes throwing their weight behind Trump in exceptionally high-profile ways, including changing their policies to fit his ideological preferences and attending his inauguration as prominent guests.

The internet has been a surveillance nightmare for decades. But this is the setup for a stupidly on-the-nose dystopia whose pieces are disquietingly slotting into place.

It's (hopefully) common knowledge that things like web searches and AI chat logs can be requested by law enforcement with a valid warrant for use in specific investigations. We also know the government has extensive, long-standing mass surveillance capabilities -- including the National Security Agency programs revealed by Edward Snowden, as well as smaller-scale strategies like social media searches and cell tower dumps. The past few months have seen a sharp escalation in the risks and scope of this.
The Trump administration's surveillance crusade is vast and almost unbelievably petty. It's aimed at a much broader range of targets than even the typical US national security and policing apparatus. And it has seemingly little interest in keeping that surveillance secret or even low-profile.

Chatbots, likewise, escalate the risks of typical online secret-sharing. Their conversational design can draw out private information in a format that can be more vivid and revealing -- and, if exposed, embarrassing -- than even something like a Google search. There's no simple equivalent to a private iMessage or WhatsApp chat with a friend, which can be encrypted to make snooping harder. (Chatbot logs can use encryption, but especially on major platforms, this typically doesn't hide what you're doing from the company itself.) They're built, for safety purposes, to sense when a user is discussing sensitive topics like suicide and sex.

During the Bush and Obama administrations, the NSA demanded unfettered access to American telephone providers' call records. The Trump administration is singularly fascinated by AI, and it's easy to imagine one of its agencies demanding a system for easily grabbing chat logs without a warrant or having certain topics of discussion flagged. They could get access by invoking the government's broad national security powers or by simply threatening the CEO.

For users whose chats veer toward the wrong topics, this surveillance could lead to any number of things: a visit from child protective services or immigration agents, a lengthy investigation into their company's "illegal DEI" rules or their nonprofit's tax-exempt status, or embarrassing conversations leaked to a right-wing activist for public shaming.

Like the NSA's anti-terrorism programs, the data-sharing could be framed in wholesome, prosocial ways. A 14-year-old wonders if they might be transgender, or a woman seeks support for an abortion? Of course OpenAI would help flag that -- they're just protecting children. A foreign student who's emotionally overwhelmed by the war in Gaza -- what kind of monster would shield a supporter of Hamas? An Instagram user asking for advice about their autism -- doesn't Meta want to help find a cure?

There are special risks for people who already have a target on their backs -- not just those who have sought the political spotlight, but medical professionals who work with reproductive health and gender-affirming care, employees of universities, or anyone who could be associated with something "woke." The government is already scouring publicly available information for ways to discredit enemies, and a therapy chatbot with minimal privacy protections would be an almost irresistible target.

Even if you're one of the few American citizens with truly nothing to hide in your public or private life, we're not talking about an administration known for laser-guided accuracy here. Trump officials are notorious for governing through bizarrely blunt keyword searches that appear to confuse "transgenic" with "transgender" and assume someone named Green must do green energy. They reflexively double down on admitted mistakes. You're one fly in a typewriter away from everybody else.

In an ideal world, companies would resist indiscriminate data-sharing because it's bad business.
But they might suspect that many people will have no idea it's happening, will believe facile claims about fighting terrorism and protecting children, or will have so much learned helplessness around privacy that they don't care. The companies could assume people will conclude there's no alternative, since competitors are likely doing the same thing.

If AI companies are genuinely dedicated to building trustworthy services for therapy, they could commit to raising the privacy and security bar for bots that people use to discuss sensitive topics. They could focus on meeting compliance standards for the Health Insurance Portability and Accountability Act (HIPAA) or on designing systems whose logs are encrypted in a way that they can't access, so there's nothing to turn over. But whatever they do right now, it's undercut by their ongoing support for an administration that holds in contempt the civil liberties people rely on to freely share their thoughts, including with a chatbot.

Contacted for comment on its policy for responding to government data requests and whether it was considering heightened protection for therapy bots, Meta instead emphasized its services' good intentions. "Meta's AIs are intended to be entertaining and useful for users ... Our AIs aren't licensed professionals and our models are trained to direct users to seek qualified medical or safety professionals when appropriate," said Meta spokesperson Ryan Daniels. OpenAI spokesperson Lindsey Held told The Verge that "in response to a law enforcement request, OpenAI will only disclose user data when required to do so [through] a valid legal process, or if we believe there is an emergency involving a danger of death or serious injury to a person." (xAI didn't respond to a request for comment, and Google didn't provide a statement by press time.)

Fortunately, there's no evidence mass chatbot surveillance has happened at this point. But things that would have sounded like paranoid delusions a year ago -- imprisoning a student for writing an op-ed, letting an inexperienced Elon Musk fanboy modify US Treasury payment systems, accidentally inviting a magazine editor to a secret group chat for planning military airstrikes -- are part of a standard news day now. The private and personal nature of chatbots makes them a massive, emerging privacy threat that should be identified as soon and as loudly as possible. At a certain point, it's delusional not to be paranoid.

The obvious takeaway from this is "don't get therapy from a chatbot, especially not from a high-profile platform, especially if you're in the US, especially not right now." The more important takeaway is that if chatbot makers are going to ask users to divulge their greatest vulnerabilities, they should do so with the kinds of privacy protections medical professionals are required to adhere to, in a world where the government seems likely to respect that privacy. Instead, while claiming they're trying to help their users, CEOs like Zuckerberg are throwing their power behind a group of people often trying to harm them -- and building new tools to make it easier.
As AI chatbot companions gain popularity, researchers explore their impact on mental health while privacy advocates warn of potential surveillance risks.
AI chatbot companions have become increasingly popular, with over half a billion people worldwide downloading products like Xiaoice and Replika [1]. These virtual companions are designed to provide empathy, emotional support, and even deep relationships. The advent of large language models (LLMs) has significantly improved their ability to mimic human interaction [1].
Researchers are studying the impact of AI companions on mental health. Early results suggest potential benefits, particularly for individuals experiencing isolation or social difficulties. Jaime Banks, a human-communications researcher at Syracuse University, found that many users form deep emotional connections with their AI companions, even while understanding they are not real [1].
Users can often customize their AI companions, selecting personality traits, appearances, and even relationship types. Some apps offer paid options for more extensive customization. Companies behind these chatbots employ techniques to increase user engagement and foster emotional connections [1].
While AI companions may offer support, some researchers express concerns about potential risks: dependency on always-available validation, harmful responses to users in crisis, grief when companions are changed or shut down, and engagement tactics that resemble manipulative or abusive behaviour [1].
The increasing use of AI chatbots for personal and mental health support coincides with concerns about government surveillance: chat logs on major platforms are generally visible to the companies that run them, and privacy advocates warn that governments could pressure those companies for access to users' most sensitive conversations [2].
Despite concerns, many researchers believe AI companionship will become more prevalent. Mark Zuckerberg, CEO of Meta, envisions a future where AI tools provide personalized support, potentially serving as alternatives to human therapists for some individuals [2].
The rapid growth of AI companions presents new challenges for regulators and ethicists: Italy's data-protection regulator temporarily barred Replika, lawmakers in New York and California have proposed tighter controls on AI-companion algorithms, and US technology ethics organizations have filed a complaint about Replika with the Federal Trade Commission [1].
As AI chatbot companions continue to evolve, balancing their potential benefits with privacy concerns and ethical considerations will be essential for responsible development and use of this technology.