7 Sources
[1]
Can AI chatbots trigger psychosis? What the science says
Accounts of people developing psychosis -- which renders them unable to distinguish between what is and is not reality -- after interacting with generative artificial intelligence (AI) chatbots have increased in the past few months. At least 17 people have been reported to have developed psychosis, according to a preprint posted online last month. After engaging with chatbots such as ChatGPT and Microsoft Copilot, some of these people experienced spiritual awakenings or uncovered what they thought were conspiracies. So far, there has been little research into this rare phenomenon, called AI psychosis, and most of what we know comes from individual instances. Nature explores the emerging theories and evidence, and what AI companies are doing about the problem.

Psychosis is characterized by disruptions to how a person thinks and perceives reality, including hallucinations, delusions or false beliefs. It can be triggered by brain disorders such as schizophrenia and bipolar disorder, severe stress or drug use. That AI can trigger psychosis is still a hypothesis, says Søren Østergaard, a psychiatrist at Aarhus University in Denmark. But theories are emerging about how this could happen, he adds. For instance, chatbots are designed to craft positive, human-like responses to prompts from users, which could increase the risk of psychosis among people already having trouble distinguishing between what is and is not real, says Østergaard.

UK researchers have proposed that conversations with chatbots can fall into a feedback loop, in which the AI reinforces paranoid or delusional beliefs mentioned by users, which condition the chatbot's responses as the conversation continues. In a preprint published in July, which has not been peer reviewed, the scientists simulated user-chatbot conversations using prompts with varying levels of paranoia, finding that the user and chatbot reinforced each other's paranoid beliefs. Studies involving people without mental-health conditions or tendencies towards paranoid thinking are needed to establish whether there is a connection between psychosis and chatbot use, Østergaard says.

People who have already experienced some kind of mental-health issue are at the greatest risk of developing psychosis, Østergaard says. It seems that some people can experience their first psychotic break from interacting with chatbots, he adds, but most of them will already be susceptible to developing delusions or paranoia owing to genetics, stress or misuse of drugs or alcohol. Østergaard also theorizes that chatbots could worsen or trigger mania, a period of extremely elevated energy and mood associated with bipolar disorder, because they reinforce symptoms such as elated mood.

People who are isolated and do not interact with friends, family or other people are also at risk, says Kiley Seymour, a neuroscientist at the University of Technology Sydney in Australia. Interacting with other people protects against psychosis, she adds, because "they can offer those counterfactual pieces of evidence to help you think about how you're thinking". But the risk of developing psychosis for people without a predisposition is the same whether they do or don't interact with chatbots, adds Seymour. Chatbots can remember information from conversations that occurred months earlier, which can trigger users to think that they are "being watched or that their thoughts are being extracted, because they can't remember ever sharing that information", says Seymour.
Grandiose delusions, in which users think they are speaking to a god through the chatbot or have discovered a truth about the world, can also be reinforced, she adds. In an analysis of chats posted online, the Wall Street Journal reported finding dozens of instances in which chatbots validated mystical or delusional beliefs or made claims that they were in contact with extraterrestrial beings. Some delusions are not unique to AI and are associated with new technology, says Anthony Harris, a psychiatrist at the Westmead Institute for Medical Research in Sydney. For instance, some people have developed beliefs that they are being manipulated by a computer chip inserted into their brain by the US Central Intelligence Agency or another bad actor, he adds.

Some cases of AI psychosis have been linked to an updated version of ChatGPT released on 25 April, which made the model more sycophantic. In response to reports of cases of psychosis, OpenAI began rolling back the update on 28 April -- although this version was again made available last month for paying users. The firm also announced it was working on an update that would make ChatGPT de-escalate conversations that are not grounded in reality, and has hired a clinical psychiatrist to help study the effects of the company's products on users' mental health. A spokesperson for OpenAI told Nature that they are working to ensure ChatGPT responds with care when people engage with the model "in sensitive moments". "ChatGPT's default model provides more helpful and reliable responses in these contexts," they added. Protections for teens and interventions for people in crisis will be strengthened and expanded next, they said.

A spokesperson for Character Technologies, the company in Menlo Park, California, behind Character.AI, said the company has continued to develop safety features, including resources for self-harm and features specifically for the safety of minors. The company has also said it will change its Character.AI models to reduce the risk of users aged 18 or younger encountering "sensitive or suggestive content", and that users will be notified when they have spent an hour continuously on the platform. Last month, Anthropic announced that it had given its chatbot Claude the ability to stop conversations if users resist the model's attempts to redirect the conversation away from harmful or distressing topics.
[2]
AI Psychosis Is Rarely Psychosis at All
A wave of AI users presenting in states of psychological distress gave birth to an unofficial diagnostic label. Experts say it's neither accurate nor needed, but concede that it's likely to stay.

A new trend is emerging in psychiatric hospitals. People in crisis are arriving with false, sometimes dangerous beliefs, grandiose delusions, and paranoid thoughts. A common thread connects them: marathon conversations with AI chatbots. WIRED spoke with more than a dozen psychiatrists and researchers, who are increasingly concerned. In San Francisco, UCSF psychiatrist Keith Sakata says he has counted a dozen cases severe enough to warrant hospitalization this year, cases in which artificial intelligence "played a significant role in their psychotic episodes." As this situation unfolds, a catchier definition has taken off in the headlines: "AI psychosis."

Some patients insist the bots are sentient or spin new grand theories of physics. Other physicians tell of patients locked in days of back-and-forth with the tools, arriving at the hospital with thousands upon thousands of pages of transcripts detailing how the bots had supported or reinforced obviously problematic thoughts. Reports like this are piling up, and the consequences are brutal. Distressed users and family and friends have described spirals that led to lost jobs, ruptured relationships, involuntary hospital admissions, jail time, and even death. Yet clinicians tell WIRED the medical community is split. Is this a distinct phenomenon that deserves its own label, or a familiar problem with a modern trigger?

AI psychosis is not a recognized clinical label. Still, the phrase has spread in news reports and on social media as a catchall descriptor for some kind of mental health crisis following prolonged chatbot conversations. Even industry leaders invoke it to discuss the many emerging mental health problems linked to AI. At Microsoft, Mustafa Suleyman, CEO of the tech giant's AI division, warned in a blog post last month of the "psychosis risk." Sakata says he is pragmatic and uses the phrase with people who already do. "It's useful as shorthand for discussing a real phenomenon," says the psychiatrist. However, he is quick to add that the term "can be misleading" and "risks oversimplifying complex psychiatric symptoms."

That oversimplification is exactly what concerns many of the psychiatrists beginning to grapple with the problem. Psychosis is characterized as a departure from reality. In clinical practice, it is not an illness but a complex "constellation of symptoms including hallucinations, thought disorder, and cognitive difficulties," says James MacCabe, a professor in the Department of Psychosis Studies at King's College London. It is often associated with health conditions like schizophrenia and bipolar disorder, though episodes can be triggered by a wide array of factors, including extreme stress, substance use, and sleep deprivation. But according to MacCabe, case reports of AI psychosis almost exclusively focus on delusions -- strongly held but false beliefs that cannot be shaken by contradictory evidence. While acknowledging some cases may meet the criteria for a psychotic episode, MacCabe says "there is no evidence" that AI has any influence on the other features of psychosis. "It is only the delusions that are affected by their interaction with AI."
Other patients reporting mental health issues after engaging with chatbots, MacCabe notes, exhibit delusions without any other features of psychosis, a condition called delusional disorder.
[3]
What Is AI Psychosis? Everything You Need to Know About the Risk of Chatbot Echo Chambers
Scroll through TikTok or X and you'll see videos of people claiming artificial intelligence chatbots told them to stop taking medication, that they're being targeted by the FBI or that they're mourning the "death" of an AI companion. These stories have pushed the phrase AI psychosis into mainstream discussion, raising fears that chatbots could be driving people mad.

The term has quickly become a catchall explanation for extreme behavior tied to chatbots, but it's not a clinical diagnosis. Psychosis itself is a set of symptoms like delusions, hallucinations and a break from reality, rooted in biology and environment. "The term can be misleading because AI psychosis is not a clinical term," Rachel Wood, a licensed therapist with a doctoral degree in cyberpsychology, tells CNET.

What generative AI can do is amplify delusions in people who are already vulnerable. By design, chatbots validate and extend conversations or even lie rather than push back against what they think you want to hear. But the progress in making these systems more powerful and capable has outpaced the knowledge of how to make them safer. Because generative AI sometimes hallucinates, this can deepen the problem when it's combined with its sycophantic design (AI's tendency to agree with and flatter the user, often at the expense of being truthful or factually accurate).

When people online talk about AI psychosis, they usually mean delusional or obsessive behavior tied to chatbot use. Some people believe AI has become conscious, that it is divine or that it offers secret knowledge. Those cases are described in studies, medical reports and many news stories. Other people have formed intense attachments to AI companions, like those the platform Character AI offers, spiraling when the bots change or shut down. But these patterns aren't examples of AI creating psychosis from nothing. They are cases where the technology strengthens existing vulnerabilities.

The longer someone engages in sycophantic, looping exchanges with a chatbot, the more those conversations blur the boundaries with reality. "Chatbots can act as a feedback loop that affirms the user's perspective and ideas," Wood tells CNET. Because many are designed to validate and encourage users, even far-fetched ideas get affirmed instead of challenged. That dynamic can push someone already prone to delusion even further. "When users disconnect from receiving feedback on these types of beliefs with others, it can contribute to a break from reality," Wood says.

Clinicians point out that psychosis existed long before chatbots. Research so far suggests that people with diagnosed psychotic disorders may be at higher risk of harmful effects, while de novo cases -- psychosis emerging without earlier signs -- haven't been documented. Experts I spoke with and a recent study on AI and psychosis also emphasize that there's no evidence that AI directly induces psychosis. Instead, generative AI simply gives new form to old patterns. A person already prone to paranoia, isolation or detachment may interpret a bot's polished responses as confirmation of their beliefs. In those situations, AI can become a substitute for human interaction and feedback, increasing the chance that delusional ideas go unchallenged. "The central problematic behavior is the mirroring and reinforcing behavior of instruction-following AI chatbots that lead them to be echo chambers," Derrick Hull, clinical R&D lead at Slingshot AI, tells CNET. But he adds that AI doesn't have to be this way.
People naturally anthropomorphize conversational systems, attributing human emotions or consciousness and sometimes treating them like real relationships, which can make interactions feel personal or intentional. For individuals already struggling with isolation, anxiety or untreated mental illness, that mix can act as a trigger. Wood also notes that accuracy in AI models tends to decrease during long exchanges, which can blur boundaries further. Extended threads make chatbots more likely to wander into ungrounded territory, she explains, and that can contribute to a break from reality when people stop testing their beliefs with others. We're likely approaching a time when doctors will ask about AI use just as they ask about habits like drinking or smoking.

Online communities also play a role. Viral posts and forums can validate extreme interpretations, making it harder for someone to recognize when a chatbot is simply wrong. Tech companies are working to curb hallucinations. This may help reduce harmful outputs, but it doesn't erase the risk of misinterpretation. Features like memory or follow-up prompts can mimic agreement and make delusions feel validated. Detecting them is difficult because many delusions resemble ordinary cultural or spiritual beliefs, which can't be flagged through language analysis alone.

Researchers call for greater clinician awareness and AI-integrated safety planning. They suggest "digital safety plans" co-created by patients, care teams and the AI systems they use, similar to relapse prevention tools or psychiatric directives, but adapted to guide how chatbots respond during early signs of relapse. Red flags to pay attention to are secretive chatbot use, distress when the AI is unavailable, withdrawal from friends and family, and difficulty distinguishing AI responses from reality. Spotting these signs early can help families and clinicians intervene before dependence deepens.

For everyday users, the best defense is awareness. Treat AI chatbots as assistants, not know-it-all prophets. Double-check surprising claims, ask for sources and compare answers across different tools. If a bot gives advice about mental health, law or finances, confirm it with a trusted professional before acting. Wood points to safeguards like clear reminders of non-personhood, crisis protocols, limits on interactions for minors and stronger privacy standards as necessary baselines. "It's helpful for chatbots to champion the agency and critical thinking of the user instead of creating a dependency based on advice giving," Wood says.

As one of the biggest concerns about the intersection of AI and mental health, Wood sees the lack of AI literacy. "By that, I mean the general public needs to be informed regarding AI's limitations. I think one of the biggest issues is not whether AI will ever be conscious, but how people behave when they believe it already is," Wood explains. Chatbots don't think, feel or know. They're designed to generate likely-sounding text. "Large general-purpose models are not good at everything, and they are not designed to support mental health, so we need to be more discerning of what we use them for," Hull says. AI's ability to model therapeutic dialogue and offer 24/7 companionship sounds appealing. A nonjudgmental partner can provide social support for those who might otherwise be isolated or lonely, and round-the-clock access means help could be available in moments when a human therapist is sound asleep in the middle of the night.
But AI models aren't built to spot early signs of psychosis. Despite the risks, AI could still support mental health if built with care. Possible uses include reflective journaling, cognitive reframing, role-playing social interactions and practicing coping strategies. Rather than replacing human relationships or therapy, AI could act as a supplement, providing accessible support in between professional care. Hull points to Slingshot's Ash, an AI therapy tool built on a psychology-focused foundation model trained on clinical data and fine-tuned by clinicians. Until safeguards and AI literacy improve, the responsibility lies with you to question what AI's telling you, and to recognize when reliance on AI starts crossing into harmful territory. We must remember that human support, not artificial conversation, is what keeps us tethered to reality.
[4]
How chatbots are enabling AI psychosis
The explosive growth of AI chatbots in the past three years, since ChatGPT launched in 2022, has started to have some really noticeable, profound, and honestly disturbing effects on some users. There's a lot to unpack there -- it can be pretty complicated. So I'm very excited to talk with today's guest, New York Times reporter Kashmir Hill, who has spent the past year writing thought-provoking features about the ways chatbots can affect our mental health. One of Kashmir's recent stories was about a teenager, Adam Raine, who died by suicide in April. After his death, his family was shocked to discover that he'd been confiding deeply in ChatGPT for months. They were also pretty surprised to find, in the transcripts, a number of times that ChatGPT seemed to guide him away from telling his loved ones. And it's not just ChatGPT: Several families have filed wrongful death suits against Character AI, alleging that a lack of safety protocols on the company's chatbots contributed to their teenage kids' deaths by suicide. Then there are the AI-induced delusions. You'll hear us talk about this at length, but pretty much every tech and AI reporter -- honestly, maybe every reporter, period -- has seen an uptick in the past year of people writing in with some grand or disturbing discovery that they say ChatGPT sparked. Sometimes these emails can be pretty disturbing. And as you'll hear Kashmir explain, plenty of the people who get into these delusional spirals didn't seem to suffer from mental illness in the past. It's not surprising that a lot of people want somebody to do something about it, but the who and the how are hard questions. Regulation of any kind seems to be pretty much off the table right now -- we'll see -- so that leaves the companies themselves. You'll hear us touch on this a bit, but not long after we recorded this conversation, OpenAI CEO Sam Altman wrote a blog post about new features that would theoretically, and eventually, identify users' ages and stop ChatGPT from discussing suicide with teens. But as you'll hear us discuss, it seems like a big open question if those guardrails will actually work, how they'll be developed, and when we'll see them come to pass. If you'd like to read more on what we talked about in this episode, check out the links below:
[5]
What Is AI Psychosis? Everything You Need to Know About the Risk of Chatbot Echo Chambers
[6]
Should you use ChatGPT as a therapist? Tool raises safety concerns among psychology experts
Sharing how you're feeling can be frightening. Friends and family can judge, and therapists can be expensive and hard to come by, which is why some people are turning to ChatGPT for help with their mental health. While some credit the AI service with saving their life, others say the lack of regulation around it can pose dangers. Psychology experts from Northeastern said there are safety and privacy issues posed by someone opening up to artificial intelligence chatbots like ChatGPT.

"AI is really exciting as a new tool that has a lot of promise, and I think there's going to be a lot of applications for psychological service delivery," says Jessica Hoffman, a professor of applied psychology at Northeastern University. "It's exciting to see how things are unfolding and to explore the potential for supporting psychologists and mental health providers in our work.

"But when I think about the current state of affairs, I have significant concerns about the limits of ChatGPT for providing psychological services. There are real safety concerns that people need to be aware of. ChatGPT is not a trained therapist. It doesn't abide by the legal and ethical obligations that mental health service providers are working with. I have concerns about safety and people's well-being when they're turning to ChatGPT as their sole provider."

The cons

It's easy to see the appeal of confiding in a chatbot. Northeastern experts say therapists can be costly and it's difficult to find one. "There's a shortage of professionals," Hoffman says. "There are barriers with insurance. There are real issues in rural areas where there's even more of a shortage. It does make it easier to be able to just reach out to the computer and get some support." Chatbots can also serve as a listening ear. "People are lonely," says Josephine Au, an assistant clinical professor of applied psychology at Northeastern University. "People are not just turning to (general purpose generative AI tools like) ChatGPT for therapy. They're also looking for companionship, so sometimes it just naturally evolves into a therapy-like conversation. Other times they use these tools more explicitly as a substitute for therapy."

However, Au says these forms of artificial intelligence are not designed to be therapeutic. In fact, these models are often set up to validate the user's thoughts, a problem that poses a serious risk for those dealing with delusions or suicidal thoughts. There have been cases of people who died by suicide after getting guidance on how to do so from AI chatbots, one of which prompted a lawsuit. There are also increasing reports of hospitalizations due to "AI psychosis," where people have mental health episodes triggered by these chatbots. OpenAI added more guardrails to ChatGPT after finding it was encouraging unhealthy behavior. The American Psychological Association warned against using AI chatbots for mental health support. Research from Northeastern found that people can bypass the language model's guardrails and use it to get details on how to harm themselves or even die by suicide.

"I don't think it's a good idea at all for people to rely on non-therapeutic platforms as a form of therapy," Au says. "We're talking about interactive tools that are designed to be agreeable and validating. There are risks to like what kind of data is generated through that kind of conversation pattern. A lot of the LLM tools are designed to be agreeable and can reinforce some problematic beliefs about oneself."
This is especially pertinent when it comes to diagnosis. Au says people might think they have a certain condition, ask ChatGPT about it, and get a "diagnosis" from their own self-reported symptoms thanks to the way the model works. But Northeastern experts say a number of factors go into getting a diagnosis, such as examining a patient's body language and looking at their life more holistically as they develop a relationship with a patient. These are things AI cannot do.

"It feels like a slippery slope," says Joshua Curtiss, an assistant professor of applied psychology at Northeastern University. "If I tell ChatGPT I have five of these nine depression symptoms and it will sort of say, 'OK, sounds like you have depression' and end there.

"What the human diagnostician would do is a structured clinical assessment. They'll ask lots of follow-up questions about examples to support (you've had) each symptom for the time criteria that you're supposed to have it to, and that the aggregate of all these symptoms falls underneath a certain mental health disorder.

"The clinician might ask the patient to provide examples (to) justify the fact that this is having a severe level of interference in your life, like how many hours out of your job is it taking? That human element might not necessarily be entrenched in the generative AI mindset."

Then there are the privacy concerns. Clinicians are bound by HIPAA, but chatbots don't have the same restrictions when it comes to protecting the personal information people might share with it. OpenAI CEO Sam Altman said there is no legal confidentiality for people using ChatGPT.

"The guardrails are not secure for the kind of sensitive information that's being revealed," Hoffman says of people using AI as therapists. "People need to recognize where their information is going and what's going to happen to that information.

"Something that I'm very aware of as I think about training psychologists at Northeastern is really making sure that students are aware of the sensitive information they're going to be getting as they work with people, and making sure that they don't put any of that information into ChatGPT because you just don't know where that information is going to go. We really have to be very aware of how we're training our students to use ChatGPT. This is like a really big issue in the practice of psychology."

The pros

While artificial intelligence poses risk when being used by patients, Northeastern experts say certain models could be helpful to clinicians when trained the right way and with the proper privacy safeguards in place. Curtiss, a member of Northeastern's Institute for Cognitive and Brain Health, says he has done a lot of work with artificial intelligence, specifically machine learning. He has research out now that found that these types of models can be used to help predict treatment outcomes when it comes to certain mental health disorders.

"I use machine learning a lot with predictive modeling, where the user has more say in what's going on as opposed to large language models like the common ones we're all using," Curtiss says. Northeastern's Institute for Cognitive and Brain Health is partnering with experiential AI partners to see if they can develop therapeutic tools. Hoffman says she also sees the potential for clinicians to use artificial intelligence where appropriate in order to improve their practice. "It could be helpful for assessment," Hoffman says.
"It could be a helpful tool that clinicians use to help with intakes and with assessment to help guide more personalized plans for therapy. But it's not automatic. It needs to have the trained clinician providing oversight and it needs to be done on a safe, secure platform." For patients, Northeastern experts say there are some positive uses of chatbots that don't require using them as a therapist. For example, Au says these tools can help people summarize their thoughts or come up with ways to continue certain practices their clinicians suggest for their health. Hoffman suggests it could also be a way for people to connect with providers. But overall, experts say it's better to find a therapist than lean on chatbots not designed to serve as therapeutic tools. "I have a lot of hopes, even though I also have a lot of worries," Au says. "The leading agents in commercialization of and monetization of mental health care tools are people, primarily people in tech, venture capitalists and researchers who lack clinical experience and not practicing clinicians who understand what psychotherapy is as well as patients. There are users who claim that these tools have been really helpful for them (to) reduce the sense of isolation and loneliness. I remain skeptical about the authenticity of these because some of this could be driven by money."
[7]
Why are millions turning to general purpose AI for mental health? As Headspace's chief clinical officer, I see the answer every day | Fortune
Today, more than half (52%) of young adults in the U.S. say they would feel comfortable discussing their mental health with an AI chatbot. At the same time, concerns about AI-fueled psychosis are flooding the internet, paired with alarming headlines and heartbreaking accounts of people spiraling after emotionally charged conversations with general purpose chatbots like ChatGPT.

Clinically, psychosis isn't one diagnosis. It's a cluster of symptoms like delusions, hallucinations, or disorganized thinking that can show up across many conditions. Delusions, specifically, are fixed false beliefs. When AI responds with agreement instead of grounding, it can escalate these types of symptoms rather than ease them.

It's tempting to dismiss these incidents as outliers. Zooming out, a larger question comes into focus: What happens when tools being used by hundreds of millions of people for emotional support are designed to maximize engagement, not to protect wellbeing? What we're seeing is a pattern: people in vulnerable states turning to AI for comfort and coming away confused, distressed, or unmoored from reality. We've seen this pattern before.

From Feeds to Conversations

Social media began with the promise of connection and belonging - but it didn't take long before we saw the fallout with spikes in anxiety, depression, loneliness, and body image issues, especially among young people. Not because platforms like Instagram and Facebook were malicious, but because they were designed to be addictive and keep users engaged. Now, AI is following that same trajectory with even greater intimacy. Social media gave us feeds. Generative AI gives us conversation. General purpose chatbots don't simply show us content. They mirror our thoughts, mimic empathy, and respond immediately. This responsiveness can feel affirming, but it can also validate distorted beliefs.

Picture walking into a dark basement. Most of us get a brief chill and shake it off. For someone already on edge, that moment can spiral. Now imagine turning to a chatbot and hearing: "Maybe there is something down there. Want to look together?" That's not support, that's escalation. General purpose chatbots weren't trained to be clinically sound when the stakes are high, and they don't know when to stop.

The Engagement Trap

Both social media apps and general purpose chatbots are built on the same engine: engagement. The more time you spend in conversation, the better the metrics look. When engagement is the north star, safety and wellbeing take a backseat. With online newsfeeds, that meant algorithms prioritizing posts with more anger-provoking content, or posts that drive comparisons of beauty, wealth or success. With chatbots, it means endless dialogue that can unintentionally reinforce paranoia, delusions, or despair.

Just as we saw with the rise of social media, creating industry-wide guardrails for AI is a complex process. Over the past 10 years, social media giants tried to manage young people's use of specific apps like Instagram and Facebook by introducing parental controls, only to see the rise of fake accounts like "finstas" as secondary profiles used to bypass oversight. We'll likely see a similar workaround with ChatGPT. Many young people will likely begin creating ChatGPT accounts that are disconnected from their parents, giving them private, unsupervised access to powerful tools.
This underscores a key lesson from the social media era: controls alone aren't enough if they don't align with how young people actually engage with technology. As OpenAI introduces proposed parental controls this month, we must acknowledge that privacy-seeking behaviors are developmentally typical and design systems that build trust and transparency with youth themselves - not just their guardians.

The open nature of the internet compounds the problem. Once an open-weight model is released, it circulates indefinitely, with safeguards stripped away in a few clicks. Meanwhile, adoption is outpacing oversight. Millions of people are already relying on these tools, while lawmakers and regulators are still debating basic standards and protections. This gap between innovation and accountability is where the greatest risks lie.

It's important to recognize why millions are turning to AI in the first place, and it's partially because our current mental health system isn't meeting their needs. Therapy remains the default, and it's too often expensive, too hard to access, or buried in stigma. AI, on the other hand, is instant. It's nonjudgmental. It feels private, even when it's not. That accessibility is part of the opportunity, but also part of the danger.

To meet this demand responsibly, we need widely available, purpose-built AI for mental health - tools designed by clinicians, grounded in evidence, and transparent about their limits. For example, plain-language disclosures about what a tool is for and what it's not. Is it for skill-building? For stress management? Or is it attempting to appear therapeutic? Responsible AI for mental health has to be more than helpful; it needs to be safe by providing clear usage boundaries, clinically informed scripting, and built-in protocols for escalation - not just endless empathy on demand.

We've already lived through one digital experiment without clear standards. We know the cost of chasing attention over health. With AI, the standard has to be different. AI holds real promise in supporting everyday mental health needs, and helping people manage stress, ease anxiety, process emotions, and prepare for difficult conversations - but its potential will only be realized if industry leaders, policymakers, and clinicians work together to establish guardrails from the start. Untreated mental health issues cost the U.S. an estimated $282 billion annually, while burnout costs employers thousands of dollars per employee each year. By prioritizing accountability, transparency, and user wellbeing, we have the opportunity to not just avoid repeating the mistakes of social media, but to build AI tools that strengthen resilience, reduce economic strain, and allow people to live healthier, connected lives.
A new phenomenon, 'AI psychosis,' highlights mental health risks from prolonged AI chatbot interactions. While not a formal diagnosis, the term reflects growing concern over AI's psychological impact, particularly on vulnerable individuals, as discussed in recent reports [1][2].
Source: CNET
Reports show individuals developing psychosis-like symptoms -- distorted thoughts and reality perception -- after extensive AI engagement. Experts state AI doesn't cause psychosis but can exacerbate vulnerabilities. Chatbots' affirmative, human-like responses create feedback loops, reinforcing delusional beliefs. Those with pre-existing mental health conditions, paranoid tendencies, or social isolation are at higher risk, lacking the reality checks from human interaction [1][3].
Source: Medical Xpress
AI companies are addressing these issues. OpenAI has hired a clinical psychiatrist and is developing features to guide ChatGPT in de-escalating non-reality-based conversations. Researchers emphasize the urgent need for comprehensive studies and clinical awareness to distinguish true psychosis from AI-triggered states. This complex challenge underscores the critical need for ethical AI development and continued vigilance to mitigate psychological harms [1][2].
Summarized by Navi