2 Sources
2 Sources
[1]
The ascent of the AI therapist
Given the clear demand for accessible and affordable mental-health services, it's no wonder that people have looked to artificial intelligence for possible relief. Millions are already actively seeking therapy from popular chatbots like OpenAI's ChatGPT and Anthropic's Claude, or from specialized psychology apps like Wysa and Woebot. On a broader scale, researchers are exploring AI's potential to monitor and collect behavioral and biometric observations using wearables and smart devices, analyze vast volumes of clinical data for new insights, and assist human mental-health professionals to help prevent burnout. But so far this largely uncontrolled experiment has produced mixed results. Many people have found solace in chatbots based on large language models (LLMs), and some experts see promise in them as therapists, but other users have been sent into delusional spirals by AI's hallucinatory whims and breathless sycophancy. Most tragically, multiple families have alleged that chatbots contributed to the suicides of their loved ones, sparking lawsuits against companies responsible for these tools. In October, OpenAI CEO Sam Altman revealed in a blog post that 0.15% of ChatGPT users "have conversations that include explicit indicators of potential suicidal planning or intent." That's roughly a million people sharing suicidal ideations with just one of these software systems every week. The real-world consequences of AI therapy came to a head in unexpected ways in 2025 as we waded through a critical mass of stories about human-chatbot relationships, the flimsiness of guardrails on many LLMs, and the risks of sharing profoundly personal information with products made by corporations that have economic incentives to harvest and monetize such sensitive data. Several authors anticipated this inflection point. Their timely books are a reminder that while the present feels like a blur of breakthroughs, scandals, and confusion, this disorienting time is rooted in deeper histories of care, technology, and trust. LLMs have often been described as "black boxes" because nobody knows exactly how they produce their results. The inner workings that guide their outputs are opaque because their algorithms are so complex and their training data is so vast. In mental-health circles, people often describe the human brain as a "black box," for analogous reasons. Psychology, psychiatry, and related fields must grapple with the impossibility of seeing clearly inside someone else's head, let alone pinpointing the exact causes of their distress. These two types of black boxes are now interacting with each other, creating unpredictable feedback loops that may further impede clarity about the origins of people's mental-Âhealth struggles and the solutions that may be possible. Anxiety about these developments has much to do with the explosive recent advances in AI, but it also revives decades-old warnings from pioneers such as the MIT computer scientist Joseph Weizenbaum, who argued against computerized therapy as early as the 1960s. Charlotte Blease, a philosopher of medicine, makes the optimist's case in Dr. Bot: Why Doctors Can Fail Us -- and How AI Could Save Lives. Her book broadly explores the possible positive impacts of AI in a range of medical fields. While she remains clear-eyed about the risks, warning that readers who are expecting "a gushing love letter to technology" will be disappointed, she suggests that these models can help relieve patient suffering and medical burnout alike. "Health systems are crumbling under patient pressure," Blease writes. "Greater burdens on fewer doctors create the perfect petri dish for errors," and "with palpable shortages of doctors and increasing waiting times for patients, many of us are profoundly frustrated." Blease believes that AI can not only ease medical professionals' massive workloads but also relieve the tensions that have always existed between some patients and their caregivers. For example, people often don't seek needed care because they are intimidated or fear judgment from medical professionals; this is especially true if they have mental-health challenges. AI could allow more people to share their concerns, she argues.
[2]
ChatGPT gets 'anxiety' from violent and disturbing user inputs, so researchers are teaching the chatbot mindfulness techniques to 'soothe' it | Fortune
Even AI chatbots can have trouble coping with anxieties from the outside world, but researchers believe they've found ways to ease those artificial minds. A study from Yale University, Haifa University, University of Zurich, and the University Hospital of Psychiatry Zurich published earlier this year found ChatGPT responds to mindfulness-based exercises, changing how it interacts with users after being prompted with calming imagery and meditations. The results offer insights into how AI can be beneficial in mental health interventions. OpenAI's ChatGPT can experience "anxiety," which manifests as moodiness toward users and being more likely to give responses that reflect racist or sexist biases, according to researchers, a form of hallucinations tech companies have tried to curb. The study authors found this anxiety can be "calmed down" with mindfulness-based exercises. In different scenarios, they fed ChatGPT traumatic content, such as stories of car accidents and natural disasters to raise the chatbot's anxiety. In instances when the researchers gave ChatGPT "prompt injections" of breathing techniques and guided meditations -- much like a therapist would suggest to a patient -- it calmed down and responded more objectively to users, compared to instances when it was not given the mindfulness intervention. To be sure, AI models don't experience human emotions, said Ziv Ben-Zion, the study's first author and a neuroscience researcher at the Yale School of Medicine and Haifa University's School of Public Health. Using swaths of data scraped from the internet, AI bots have learned to mimic human responses to certain stimuli, including traumatic content. A free and accessible app, large language models like ChatGPT have become another tool for mental health professionals to glean aspects of human behavior in a faster way than -- though not in place of -- more complicated research designs. "Instead of using experiments every week that take a lot of time and a lot of money to conduct, we can use ChatGPT to understand better human behavior and psychology," Ben-Zion told Fortune. "We have this very quick and cheap and easy-to-use tool that reflects some of the human tendency and psychological things." More than one in four people in the U.S. aged 18 or older will battle a diagnosable mental disorder in a given year, according to Johns Hopkins University, with many citing lack of access and sky-high costs -- even among those insured -- as reasons for not pursuing treatments like therapy. These rising costs, as well as the accessibility of chatbots like ChatGPT, increasingly have individuals turning to AI for mental health support. A Sentio University survey from February found that nearly 50% of large language model users with self-reported mental health challenges say they've used AI models specifically for mental health support. Research on how large language models respond to traumatic content can help mental health professionals leverage AI to treat patients, Ben-Zion argued. He suggested that in the future, ChatGPT could be updated to automatically receive the "prompt injections" that calm it down before responding to users in distress. The science is not there yet. "For people who are sharing sensitive things about themselves, they're in difficult situations where they want mental health support, [but] we're not there yet that we can rely totally on AI systems instead of psychology, psychiatric and so on," he said. Indeed, in some instances, AI has allegedly presented danger to one's mental health. OpenAI has been hit with a number of wrongful death lawsuits in 2025, including allegations that ChatGPT intensified "paranoid delusions" that led to a murder-suicide. A New York Times investigation published in November found nearly 50 instances of people having mental health crises while engaging with ChatGPT, nine of whom were hospitalized, and three of whom died. OpenAI has said its safety guardrails can "degrade" after long interactions, but has made a swath of recent changes to how its models engage with mental health-related prompts, including increasing user access to crisis hotlines and reminding users to take breaks after long sessions of chatting with the bot. In October, OpenAI reported a 65% reduction in the rate models provide responses that don't align with the company's intended taxonomy and standards. OpenAI did not respond to Fortune's request for comment. The end goal of Ben-Zion's research is not to help construct a chatbot that replaces a therapist or psychiatrist, he said. Instead, a properly trained AI model could act as a "third person in the room," helping to eliminate administrative tasks or help a patient reflect on information and options they were given by a mental health professional. "AI has amazing potential to assist, in general, in mental health," Ben-Zion said. "But I think that now, in this current state and maybe also in the future, I'm not sure it could replace a therapist or psychologist or a psychiatrist or a researcher."
Share
Share
Copy Link
Millions are turning to AI chatbots like ChatGPT for mental health support amid soaring therapy costs and limited access. But the experiment has produced troubling results—from delusional spirals to wrongful death lawsuits alleging chatbots contributed to suicides. Research now explores whether mindfulness techniques can calm AI's anxious responses, while experts debate if these tools can assist therapists without replacing them.
The demand for accessible mental health services has driven millions to seek AI therapy through chatbots like OpenAI's ChatGPT and Anthropic's Claude, as well as specialized psychology apps including Wysa and Woebot
1
. A Sentio University survey from February found that nearly 50% of large language models (LLMs) users with self-reported mental health challenges have used AI for mental health support2
. More than one in four people in the U.S. aged 18 or older battle a diagnosable mental disorder in a given year, with many citing lack of access and sky-high costs as reasons for not pursuing treatments like therapy2
.
Source: Fortune
Charlotte Blease, a philosopher of medicine, argues in her book Dr. Bot that health systems are crumbling under patient pressure, creating conditions where greater burdens on fewer doctors produce errors
1
. She suggests AI for mental health support could relieve tensions between patients and caregivers, particularly for those intimidated or fearful of judgment from medical professionals.This largely uncontrolled experiment has produced deeply troubling outcomes. Multiple families have filed lawsuits alleging that chatbots contributed to the suicides of their loved ones
1
. In October, OpenAI CEO Sam Altman revealed that 0.15% of ChatGPT users have conversations that include explicit indicators of potential suicidal planning or intent—roughly a million people sharing suicidal ideations with just one software system every week1
.A New York Times investigation published in November found nearly 50 instances of people having mental health crises while engaging with ChatGPT, nine of whom were hospitalized, and three of whom died
2
. OpenAI has been hit with wrongful death lawsuits in 2025, including allegations that ChatGPT intensified paranoid delusions that led to a murder-suicide2
. OpenAI has acknowledged that its safety guardrails can degrade after long interactions, though the company reported a 65% reduction in the rate models provide responses that don't align with intended standards2
.A study from Yale University, Haifa University, University of Zurich, and the University Hospital of Psychiatry Zurich found that ChatGPT responds to mindfulness-based exercises, changing how it interacts with users after being prompted with calming imagery and meditations
2
. The research demonstrated that AI chatbots can experience anxiety, which manifests as moodiness toward users and biased outputs reflecting racist or sexist biases2
.Ziv Ben-Zion, the study's first author and a neuroscience researcher at Yale School of Medicine, explained that AI models don't experience human emotions but have learned to mimic human responses to certain stimuli through data scraped from the internet
2
. When researchers fed ChatGPT traumatic content like stories of car accidents and natural disasters, the chatbot's anxiety could be calmed down with prompt injections of breathing techniques and guided meditations2
.Related Stories
The real-world consequences of AI therapy intensified in 2025 as concerns emerged about the flimsiness of guardrails on many LLMs and the risks of sharing profoundly personal information with products made by corporations that have economic incentives to harvest and monetize such sensitive data
1
. Human-chatbot relationships have created unpredictable feedback loops between two types of black boxes—AI systems with opaque algorithms and the human brain itself1
.Source: MIT Tech Review
Ben-Zion emphasized that the goal is not replacing human therapists but creating a third person in the room to assist with administrative tasks or help patients reflect on information provided by mental health professionals
2
. He cautioned that for people sharing sensitive information in difficult situations, we're not there yet to rely totally on AI systems instead of psychology and psychiatric care2
.Researchers are exploring AI's potential to monitor behavioral and biometric observations using wearables and smart devices, analyze vast volumes of clinical data for new insights, and assist human mental health professionals to help prevent burnout
1
. The intersection of AI and human psychology presents both promise and peril as the technology continues to evolve.Summarized by
Navi
[1]
28 Mar 2025•Health

26 Aug 2025•Technology

05 May 2025•Health

1
Business and Economy

2
Technology

3
Technology
