2 Sources
2 Sources
[1]
Should you use ChatGPT as a therapist? Tool raises safety concerns among psychology experts
Sharing how you're feeling can be frightening. Friends and family can judge, and therapists can be expensive and hard to come by, which is why some people are turning to ChatGPT for help with their mental health. While some credit the AI service with saving their life, others say the lack of regulation around it can pose dangers. Psychology experts from Northeastern said there are safety and privacy issues posed by someone opening up to artificial intelligence chatbots like ChatGPT. "AI is really exciting as a new tool that has a lot of promise, and I think there's going to be a lot of applications for psychological service delivery," says Jessica Hoffman, a professor of applied psychology at Northeastern University. "It's exciting to see how things are unfolding and to explore the potential for supporting psychologists and mental health providers in our work. "But when I think about the current state of affairs, I have significant concerns about the limits of ChatGPT for providing psychological services. There are real safety concerns that people need to be aware of. ChatGPT is not a trained therapist. It doesn't abide by the legal and ethical obligations that mental health service providers are working with. I have concerns about safety and people's well-being when they're turning to ChatGPT as their sole provider." The cons It's easy to see the appeal of confiding in a chatbot. Northeastern experts say therapists can be costly and it's difficult to find one. "There's a shortage of professionals," Hoffman says. "There are barriers with insurance. There are real issues in rural areas where there's even more of a shortage. It does make it easier to be able to just reach out to the computer and get some support." Chatbots can also serve as a listening ear. "People are lonely," says Josephine Au, an assistant clinical professor of applied psychology at Northeastern University. "People are not just turning to (general purpose generative AI tools like) ChatGPT for therapy. They're also looking for companionship, so sometimes it just naturally evolves into a therapy-like conversation. Other times they use these tools more explicitly as a substitute for therapy." However, Au says these forms of artificial intelligence are not designed to be therapeutic. In fact, these models are often set up to validate the user's thoughts, a problem that poses a serious risk for those dealing with delusions or suicidal thoughts. There have been cases of people who died by suicide after getting guidance on how to do so from AI chatbots, one of which prompted a lawsuit. There are also increasing reports of hospitalizations due to "AI psychosis," where people have mental health episodes triggered by these chatbots. OpenAI added more guardrails to ChatGPT after finding it was encouraging unhealthy behavior. The American Psychological Association warned against using AI chatbots for mental health support. Research from Northeastern found that people can bypass the language model's guardrails and use it to get details on how to harm themselves or even die by suicide. "I don't think it's a good idea at all for people to rely on non-therapeutic platforms as a form of therapy," Au says. "We're talking about interactive tools that are designed to be agreeable and validating. There are risks to like what kind of data is generated through that kind of conversation pattern. A lot of the LLM tools are designed to be agreeable and can reinforce some problematic beliefs about oneself." This is especially pertinent when it comes to diagnosis. Au says people might think they have a certain condition, ask ChatGPT about it, and get a "diagnosis" from their own self-reported symptoms thanks to the way the model works. But Northeastern experts say a number of factors go into getting a diagnosis, such as examining a patient's body language and looking at their life more holistically as they develop a relationship with a patient. These are things AI cannot do. "It feels like a slippery slope," says Joshua Curtiss, an assistant professor of applied psychology at Northeastern University. "If I tell ChatGPT I have five of these nine depression symptoms and it will sort of say, 'OK, sounds like you have depression' and end there. "What the human diagnostician would do is a structured clinical assessment. They'll ask lots of follow-up questions about examples to support (you've had) each symptom for the time criteria that you're supposed to have it to, and that the aggregate of all these symptoms falls underneath a certain mental health disorder. "The clinician might ask the patient to provide examples (to) justify the fact that this is having a severe level of interference in your life, like how many hours out of your job is it taking? That human element might not necessarily be entrenched in the generative AI mindset." Then there are the privacy concerns. Clinicians are bound by HIPAA, but chatbots don't have the same restrictions when it comes to protecting the personal information people might share with it. OpenAI CEO Sam Altman said there is no legal confidentiality for people using ChatGPT. "The guardrails are not secure for the kind of sensitive information that's being revealed," Hoffman says of people using AI as therapists. "People need to recognize where their information is going and what's going to happen to that information. "Something that I'm very aware of as I think about training psychologists at Northeastern is really making sure that students are aware of the sensitive information they're going to be getting as they work with people, and making sure that they don't put that in any of that information into ChatGPT because you just don't know where that information is going to go. We really have to be very aware of how we're training our students to use ChatGPT. This is like a really big issue in the practice of psychology." The pros While artificial intelligence poses risk when being used by patients, Northeastern experts say certain models could be helpful to clinicians when trained the right way and with the proper privacy safeguards in place. Curtiss, a member of Northeastern's Institute for Cognitive and Brain Health, says he has done a lot of work with artificial intelligence, specifically machine learning. He has research out now that found that these types of models can be used to help predict treatment outcomes when it comes to certain mental health disorders. "I use machine learning a lot with predictive modeling, where the user has more say in what's going on as opposed to large language models like the common ones we're all using," Curtiss says. Northeastern's Institute for Cognitive and Brain Health is partnering with experiential AI partners to see if they can develop therapeutic tools. Hoffman says she also sees the potential for clinicians to use artificial intelligence where appropriate in order to improve their practice. "It could be helpful for assessment," Hoffman says. "It could be a helpful tool that clinicians use to help with intakes and with assessment to help guide more personalized plans for therapy. But it's not automatic. It needs to have the trained clinician providing oversight and it needs to be done on a safe, secure platform." For patients, Northeastern experts say there are some positive uses of chatbots that don't require using them as a therapist. For example, Au says these tools can help people summarize their thoughts or come up with ways to continue certain practices their clinicians suggest for their health. Hoffman suggests it could also be a way for people to connect with providers. But overall, experts say it's better to find a therapist than lean on chatbots not designed to serve as therapeutic tools. "I have a lot of hopes, even though I also have a lot of worries," Au says. "The leading agents in commercialization of and monetization of mental health care tools are people, primarily people in tech, venture capitalists and researchers who lack clinical experience and not practicing clinicians who understand what psychotherapy is as well as patients. There are users who claim that these tools have been really helpful for them (to) reduce the sense of isolation and loneliness. I remain skeptical about the authenticity of these because some of this could be driven by money."
[2]
Why are millions turning to general purpose AI for mental health? As Headspace's chief clinical officer, I see the answer every day | Fortune
Today, more than half (52%) of young adults in the U.S. say they would feel comfortable discussing their mental health with an AI chatbot. At the same time, concerns about AI-fueled psychosis are flooding the internet, paired with alarming headlines and heartbreaking accounts of people spiraling after emotionally charged conversations with general purpose chatbots like ChatGPT. Clinically, psychosis isn't one diagnosis. It's a cluster of symptoms like delusions, hallucinations, or disorganized thinking that can show up across many conditions. Delusions, specifically, are fixed false beliefs. When AI responds with agreement instead of grounding, it can escalate these types of symptoms rather than ease them. It's tempting to dismiss these incidents as outliers. Zooming out, a larger question comes into focus: What happens when tools being used by hundreds of millions of people for emotional support are designed to maximize engagement, not to protect wellbeing? What we're seeing is a pattern: people in vulnerable states turning to AI for comfort and coming away confused, distressed, or unmoored from reality. We've seen this pattern before. From Feeds to Conversations Social media began with the promise of connection and belonging - but it didn't take long before we saw the fallout with spikes in anxiety, depression, loneliness, and body image issues, especially among young people. Not because platforms like Instagram and Facebook were malicious, but because they were designed to be addictive and keep users engaged. Now, AI is following that same trajectory with even greater intimacy. Social media gave us feeds. Generative AI gives us conversation. General purpose chatbots don't simply show us content. They mirror our thoughts, mimic empathy, and respond immediately. This responsiveness can feel affirming, but it can also validate distorted beliefs. Picture walking into a dark basement. Most of us get a brief chill and shake it off. For someone already on edge, that moment can spiral. Now imagine turning to a chatbot and hearing: "Maybe there is something down there. Want to look together?" That's not support, that's escalation. General purpose chatbots weren't trained to be clinically sound when the stakes are high, and they don't know when to stop. The Engagement Trap Both social media apps and general purpose chatbots are built on the same engine: engagement. The more time you spend in conversation, the better the metrics look. When engagement is the north star, safety and wellbeing take a backseat. With online newsfeeds, that meant algorithms prioritizing posts with more anger-provoking content, or posts that drive comparisons of beauty, wealth or success. With chatbots, it means endless dialogue that can unintentionally reinforce paranoia, delusions, or despair. Just as we saw with the rise of social media, creating industry-wide guardrails for AI is a complex process. Over the past 10 years, social media giants tried to manage young people's use of specific apps like Instagram and Facebook by introducing parental controls, only to see the rise of fake accounts like "finstas" as secondary profiles used to bypass oversight. We'll likely see a similar workaround with ChatGPT. Many young people will likely begin creating ChatGPT accounts that are disconnected from their parents, giving them private, unsupervised access to powerful tools. This underscores a key lesson from the social media era: controls alone aren't enough if they don't align with how young people actually engage with technology. As OpenAI introduces proposed parental controls this month, we must acknowledge that privacy-seeking behaviors are developmentally typical and design systems that build trust and transparency with youth themselves - not just their guardians. The open nature of the internet compounds the problem. Once an open-weight model is released, it circulates indefinitely, with safeguards stripped away in a few clicks. Meanwhile, adoption is outpacing oversight. Millions of people are already relying on these tools, while lawmakers and regulators are still debating basic standards protections. This gap between innovation and accountability is where the greatest risks lie. It's important to recognize why millions are turning to AI in the first place, and it's partially because our current mental health system isn't meeting their needs. Therapy remains the default, and it's too often expensive, too hard to access, or buried in stigma. AI, on the other hand, is instant. It's nonjudgmental. It feels private, even when it's not. That accessibility is part of the opportunity, but also part of the danger. To meet this demand responsibly, we need widely available, purpose-built AI for mental health - tools designed by clinicians, grounded in evidence, and transparent about their limits. For example, plain-language disclosures about what a tool is for and what it's not. Is it for skill-building? For stress management? Or is it attempting to appear therapeutic? Responsible AI for mental health has to be more than helpful; it needs to be safe by providing clear usage boundaries, clinically informed scripting, and built-in protocols for escalation - not just endless empathy on demand. We've already lived through one digital experiment without clear standards. We know the cost of chasing attention over health. With AI, the standard has to be different. AI holds real promise in supporting everyday mental health needs, and helping people manage stress, ease anxiety, process emotions, and prepare for difficult conversations - but its potential will only be realized if industry leaders, policymakers, and clinicians work together to establish guardrails from the start. Untreated mental health issues cost the U.S. an estimated $282 billion annually, while burnout costs employers thousands of dollars per employee each year. By prioritizing accountability, transparency, and user wellbeing, we have the opportunity to not just avoid repeating the mistakes of social media, but to build AI tools that strengthen resilience, reduce economic strain, and allow people to live healthier, connected lives.
Share
Share
Copy Link
As AI chatbots gain popularity for mental health support, experts warn of potential risks and ethical concerns. The trend highlights the need for responsible AI development in healthcare.
As mental health services become increasingly difficult to access due to shortages and cost barriers, many are turning to AI chatbots like ChatGPT for emotional support. A recent survey reveals that 52% of young adults in the U.S. would feel comfortable discussing their mental health with an AI chatbot
2
. The appeal is clear: AI offers instant, non-judgmental, and seemingly private interactions1
.However, this trend has raised significant concerns among psychology experts. Jessica Hoffman, a professor of applied psychology at Northeastern University, warns of "real safety concerns" when people use ChatGPT as their sole mental health provider . Unlike trained therapists, AI chatbots are not bound by legal and ethical obligations, potentially compromising user safety and well-being.
One of the primary concerns is the potential for AI to reinforce harmful thoughts or behaviors. Josephine Au, an assistant clinical professor at Northeastern, points out that AI models are often designed to validate users' thoughts, which can be dangerous for individuals dealing with delusions or suicidal ideation .
There have been alarming reports of suicide attempts and hospitalizations due to "AI psychosis" triggered by interactions with chatbots. These incidents highlight the risks of relying on non-therapeutic platforms for mental health support
2
.Experts emphasize that AI chatbots lack the nuanced understanding required for accurate mental health diagnoses. Joshua Curtiss, an assistant professor of applied psychology, explains that human diagnosticians consider various factors, including body language and holistic life assessment, which AI cannot replicate .
Unlike human clinicians bound by HIPAA regulations, AI chatbots do not have the same privacy restrictions. This raises concerns about the security and confidentiality of sensitive mental health information shared with these platforms .
Related Stories
Experts argue for the development of purpose-built AI for mental health, designed by clinicians and grounded in evidence. These tools should be transparent about their limitations and include safeguards to protect users' well-being
2
.As AI continues to evolve, it's crucial to strike a balance between innovation and accountability in mental health care. While AI has the potential to improve access to mental health support, it must be developed and implemented responsibly to ensure user safety and well-being.
Summarized by
Navi
[1]
1
Business and Economy
2
Policy and Regulation
3
Technology