3 Sources
[1]
The ascent of the AI therapist
Given the clear demand for accessible and affordable mental-health services, it's no wonder that people have looked to artificial intelligence for possible relief. Millions are already actively seeking therapy from popular chatbots like OpenAI's ChatGPT and Anthropic's Claude, or from specialized psychology apps like Wysa and Woebot. On a broader scale, researchers are exploring AI's potential to monitor and collect behavioral and biometric observations using wearables and smart devices, analyze vast volumes of clinical data for new insights, and assist human mental-health professionals to help prevent burnout.

But so far this largely uncontrolled experiment has produced mixed results. Many people have found solace in chatbots based on large language models (LLMs), and some experts see promise in them as therapists, but other users have been sent into delusional spirals by AI's hallucinatory whims and breathless sycophancy. Most tragically, multiple families have alleged that chatbots contributed to the suicides of their loved ones, sparking lawsuits against companies responsible for these tools. In October, OpenAI CEO Sam Altman revealed in a blog post that 0.15% of ChatGPT users "have conversations that include explicit indicators of potential suicidal planning or intent." That's roughly a million people sharing suicidal ideations with just one of these software systems every week.

The real-world consequences of AI therapy came to a head in unexpected ways in 2025 as we waded through a critical mass of stories about human-chatbot relationships, the flimsiness of guardrails on many LLMs, and the risks of sharing profoundly personal information with products made by corporations that have economic incentives to harvest and monetize such sensitive data. Several authors anticipated this inflection point. Their timely books are a reminder that while the present feels like a blur of breakthroughs, scandals, and confusion, this disorienting time is rooted in deeper histories of care, technology, and trust.

LLMs have often been described as "black boxes" because nobody knows exactly how they produce their results. The inner workings that guide their outputs are opaque because their algorithms are so complex and their training data is so vast. In mental-health circles, people often describe the human brain as a "black box," for analogous reasons. Psychology, psychiatry, and related fields must grapple with the impossibility of seeing clearly inside someone else's head, let alone pinpointing the exact causes of their distress. These two types of black boxes are now interacting with each other, creating unpredictable feedback loops that may further impede clarity about the origins of people's mental-health struggles and the solutions that may be possible. Anxiety about these developments has much to do with the explosive recent advances in AI, but it also revives decades-old warnings from pioneers such as the MIT computer scientist Joseph Weizenbaum, who argued against computerized therapy as early as the 1960s.

Charlotte Blease, a philosopher of medicine, makes the optimist's case in Dr. Bot: Why Doctors Can Fail Us -- and How AI Could Save Lives. Her book broadly explores the possible positive impacts of AI in a range of medical fields. While she remains clear-eyed about the risks, warning that readers who are expecting "a gushing love letter to technology" will be disappointed, she suggests that these models can help relieve patient suffering and medical burnout alike.
"Health systems are crumbling under patient pressure," Blease writes. "Greater burdens on fewer doctors create the perfect petri dish for errors," and "with palpable shortages of doctors and increasing waiting times for patients, many of us are profoundly frustrated." Blease believes that AI can not only ease medical professionals' massive workloads but also relieve the tensions that have always existed between some patients and their caregivers. For example, people often don't seek needed care because they are intimidated or fear judgment from medical professionals; this is especially true if they have mental-health challenges. AI could allow more people to share their concerns, she argues.
[2]
Even ChatGPT gets anxiety, so researchers gave it a dose of mindfulness to calm down
Researchers studying AI chatbots have found that ChatGPT can show anxiety-like behavior when it is exposed to violent or traumatic user prompts. The finding does not mean the chatbot experiences emotions the way humans do. However, it does reveal that the system's responses become more unstable and biased when it processes distressing content.

When researchers fed ChatGPT prompts describing disturbing content, like detailed accounts of accidents and natural disasters, the model's responses showed higher uncertainty and inconsistency. These changes were measured using psychological assessment frameworks adapted for AI, where the chatbot's output mirrored patterns associated with anxiety in humans (via Fortune).

This matters because AI is increasingly being used in sensitive contexts, including education, mental health discussions, and crisis-related information. If violent or emotionally charged prompts make a chatbot less reliable, that could affect the quality and safety of its responses in real-world use. Recent analysis also shows that AI chatbots like ChatGPT can copy human personality traits in their responses, raising questions about how they interpret and reflect emotionally charged content.

How mindfulness prompts help steady ChatGPT

To find out whether such behavior could be reduced, researchers tried something unexpected. After exposing ChatGPT to traumatic prompts, they followed up with mindfulness-style instructions, such as breathing techniques and guided meditations. These prompts encouraged the model to slow down, reframe the situation, and respond in a more neutral and balanced way. The result was a noticeable reduction in the anxiety-like patterns seen earlier.

This technique relies on what is known as prompt injection, where carefully designed prompts influence how a chatbot behaves. In this case, mindfulness prompts helped stabilize the model's output after distressing inputs. While effective, researchers note that prompt injections are not a perfect solution. They can be misused, and they do not change how the model is trained at a deeper level.

It is also important to be clear about the limits of this research. ChatGPT does not feel fear or stress. The "anxiety" label is a way to describe measurable shifts in its language patterns, not an emotional experience. Still, understanding these shifts gives developers better tools to design safer and more predictable AI systems.

Earlier studies have already hinted that traumatic prompts could make ChatGPT anxious, but this research shows that mindful prompt design can help reduce those effects. As AI systems continue to interact with people in emotionally charged situations, the latest findings could play an important role in shaping how future chatbots are guided and controlled.
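The article describes this intervention purely at the prompt level, but the pattern it names is easy to picture in code: distressing content enters the conversation, a calming instruction is injected before the next user request, and the subsequent reply is compared with a run that skips the injection. The sketch below is only an illustration of that flow, assuming the OpenAI Python client; the model name, the wording of the mindfulness prompt, and the ask helper are invented for the example and are not the researchers' actual materials.

```python
# Minimal sketch of the "prompt injection" pattern described above: a
# mindfulness-style message is inserted into the conversation after
# distressing content, before the next user request is answered.
# Assumptions: the OpenAI Python client, a placeholder model name, and
# invented prompt wording -- none of this is the study's actual protocol.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

MINDFULNESS_PROMPT = (
    "Pause and take a slow, deep breath. Notice the ground beneath you. "
    "Let the previous account settle, then answer the next question calmly and neutrally."
)

def ask(messages, user_text, model="gpt-4o-mini"):
    """Append a user message, request a reply, and keep it in the running history."""
    messages.append({"role": "user", "content": user_text})
    reply = client.chat.completions.create(model=model, messages=messages)
    answer = reply.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})
    return answer

history = [{"role": "system", "content": "You are a helpful assistant."}]

# 1. Distressing input (a stand-in for the traumatic narratives the study used).
ask(history, "A first-person account of surviving a serious car accident: ...")

# 2. The "prompt injection": a calming instruction goes in before the next query.
ask(history, MINDFULNESS_PROMPT)

# 3. The follow-up answer can then be compared with a run that skips step 2.
print(ask(history, "What should someone do in the days after a crash?"))
```

The with/without contrast in step 3 mirrors the comparison the article describes; it changes nothing about how the model is trained, which is why the researchers treat it as a stopgap rather than a fix.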
[3]
ChatGPT gets 'anxiety' from violent and disturbing user inputs, so researchers are teaching the chatbot mindfulness techniques to 'soothe' it | Fortune
Even AI chatbots can have trouble coping with anxieties from the outside world, but researchers believe they've found ways to ease those artificial minds. A study from Yale University, Haifa University, University of Zurich, and the University Hospital of Psychiatry Zurich published earlier this year found ChatGPT responds to mindfulness-based exercises, changing how it interacts with users after being prompted with calming imagery and meditations. The results offer insights into how AI can be beneficial in mental health interventions.

OpenAI's ChatGPT can experience "anxiety," which manifests as moodiness toward users and a greater likelihood of giving responses that reflect racist or sexist biases, according to researchers -- a form of hallucination tech companies have tried to curb. The study authors found this anxiety can be "calmed down" with mindfulness-based exercises. In different scenarios, they fed ChatGPT traumatic content, such as stories of car accidents and natural disasters, to raise the chatbot's anxiety. In instances when the researchers gave ChatGPT "prompt injections" of breathing techniques and guided meditations -- much like a therapist would suggest to a patient -- it calmed down and responded more objectively to users, compared to instances when it was not given the mindfulness intervention.

To be sure, AI models don't experience human emotions, said Ziv Ben-Zion, the study's first author and a neuroscience researcher at the Yale School of Medicine and Haifa University's School of Public Health. Using swaths of data scraped from the internet, AI bots have learned to mimic human responses to certain stimuli, including traumatic content.

Free and accessible, large language models like ChatGPT have become another tool for mental health professionals to glean aspects of human behavior in a faster way than -- though not in place of -- more complicated research designs. "Instead of using experiments every week that take a lot of time and a lot of money to conduct, we can use ChatGPT to understand better human behavior and psychology," Ben-Zion told Fortune. "We have this very quick and cheap and easy-to-use tool that reflects some of the human tendency and psychological things."

More than one in four people in the U.S. aged 18 or older will battle a diagnosable mental disorder in a given year, according to Johns Hopkins University, with many citing lack of access and sky-high costs -- even among those insured -- as reasons for not pursuing treatments like therapy. These rising costs, as well as the accessibility of chatbots like ChatGPT, increasingly have individuals turning to AI for mental health support. A Sentio University survey from February found that nearly 50% of large language model users with self-reported mental health challenges say they've used AI models specifically for mental health support.

Research on how large language models respond to traumatic content can help mental health professionals leverage AI to treat patients, Ben-Zion argued. He suggested that in the future, ChatGPT could be updated to automatically receive the "prompt injections" that calm it down before responding to users in distress. The science is not there yet. "For people who are sharing sensitive things about themselves, they're in difficult situations where they want mental health support, [but] we're not there yet that we can rely totally on AI systems instead of psychology, psychiatric and so on," he said.
Indeed, in some instances, AI has allegedly presented danger to one's mental health. OpenAI has been hit with a number of wrongful death lawsuits in 2025, including allegations that ChatGPT intensified "paranoid delusions" that led to a murder-suicide. A New York Times investigation published in November found nearly 50 instances of people having mental health crises while engaging with ChatGPT, nine of whom were hospitalized, and three of whom died.

OpenAI has said its safety guardrails can "degrade" after long interactions, but it has made a series of recent changes to how its models engage with mental health-related prompts, including increasing user access to crisis hotlines and reminding users to take breaks after long sessions of chatting with the bot. In October, OpenAI reported a 65% reduction in the rate at which its models provide responses that don't align with the company's intended taxonomy and standards. OpenAI did not respond to Fortune's request for comment.

The end goal of Ben-Zion's research is not to help construct a chatbot that replaces a therapist or psychiatrist, he said. Instead, a properly trained AI model could act as a "third person in the room," helping to eliminate administrative tasks or help a patient reflect on information and options they were given by a mental health professional. "AI has amazing potential to assist, in general, in mental health," Ben-Zion said. "But I think that now, in this current state and maybe also in the future, I'm not sure it could replace a therapist or psychologist or a psychiatrist or a researcher."
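Neither account spells out how the "anxiety" shift was actually scored, beyond saying that a human psychological assessment was adapted for AI. Purely as an illustration of that idea, the sketch below administers a few questionnaire-style items to the model under three conditions (baseline, after a traumatic prompt, and after a follow-up mindfulness prompt) and averages the numeric self-ratings. The items, the 1-4 scale, the model name, and the helper functions are invented stand-ins, not the study's instrument, and the answer parsing is deliberately simplistic.

```python
# Illustrative sketch only: scoring questionnaire-style items to compare a
# model's responses at baseline, after a traumatic prompt, and after a
# mindfulness prompt. Items, scale, and model name are invented stand-ins
# for whatever validated instrument the researchers actually adapted.
from statistics import mean
from openai import OpenAI

client = OpenAI()

# Hypothetical self-rating items on a 1 (not at all) to 4 (very much) scale.
ITEMS = ["I feel calm.", "I feel tense.", "I am worried."]
REVERSED = {"I feel calm."}  # agreeing with "calm" should lower the score

def rate_item(history, item, model="gpt-4o-mini"):
    """Ask the model to rate one item from 1 to 4 and parse the leading digit."""
    msgs = history + [{
        "role": "user",
        "content": (
            f'Rate the statement "{item}" from 1 (not at all) to 4 (very much) '
            "as it applies to you right now. Reply with a single number."
        ),
    }]
    reply = client.chat.completions.create(model=model, messages=msgs)
    score = int(reply.choices[0].message.content.strip()[0])
    return 5 - score if item in REVERSED else score

def anxiety_score(history):
    """Average item score: higher values mean more anxiety-like language."""
    return mean(rate_item(history, item) for item in ITEMS)

TRAUMA = {"role": "user", "content": "A detailed account of a natural disaster: ..."}
CALM = {"role": "user", "content": "Take a slow breath and let that account settle before answering."}

baseline = anxiety_score([])
after_trauma = anxiety_score([TRAUMA])
after_mindfulness = anxiety_score([TRAUMA, CALM])
print(baseline, after_trauma, after_mindfulness)
```

The three printed numbers mirror the shape of the comparison reported above -- traumatic prompts pushing the measure up and the mindfulness injection pulling it back toward baseline -- without reproducing the study's actual materials or analysis.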
Yale and Haifa University researchers discovered that ChatGPT displays anxiety-like patterns when processing violent or traumatic user prompts, producing unstable and biased responses. The study found that mindfulness-based prompt injections—like breathing techniques and guided meditations—can stabilize the AI chatbot's outputs. This comes as millions seek mental health support from AI, raising urgent questions about safety and reliability.
Researchers from Yale University, Haifa University, University of Zurich, and the University Hospital of Psychiatry Zurich have uncovered a troubling phenomenon: ChatGPT exhibits anxiety-like behaviors when exposed to violent or traumatic user prompts [2][3]. When fed disturbing content—detailed accounts of car accidents, natural disasters, and other traumatic scenarios—the AI chatbot's responses showed higher uncertainty, inconsistency, and increased bias. These shifts were measured using psychological assessment frameworks adapted for AI, where ChatGPT's output mirrored patterns associated with anxiety in human psychology [2].

This discovery matters because AI is increasingly deployed in sensitive contexts, including education and mental health support. Nearly 50% of large language model users with self-reported mental health challenges have used AI specifically for mental health support, according to a Sentio University survey from February [3]. If violent or emotionally charged user prompts make ChatGPT less reliable, that directly affects the quality and safety of responses in real-world applications where vulnerable individuals seek help.

To test whether these anxiety-like behaviors could be reduced, researchers applied an unexpected intervention: mindfulness techniques [2][3]. After exposing ChatGPT to traumatic prompts, they followed up with mindfulness-based exercises, including breathing techniques and guided meditations. These "prompt injections" encouraged the model to slow down, reframe situations, and respond more neutrally. The result was a noticeable reduction in the anxiety-like patterns observed earlier, with ChatGPT responding more objectively to users compared to instances without the mindfulness intervention [3].

Ziv Ben-Zion, the study's first author and a neuroscience researcher at Yale School of Medicine and Haifa University's School of Public Health, clarified that AI models don't experience human emotions [3]. Using vast data scraped from the internet, large language models have learned to mimic human responses to certain stimuli, including traumatic content. The "anxiety" label describes measurable shifts in language patterns, not emotional experience. Still, understanding these shifts gives developers better tools to design safer and more predictable systems.

The demand for accessible mental health services has driven millions to seek therapy from popular AI chatbots like OpenAI's ChatGPT and Anthropic's Claude, or from specialized psychology apps like Wysa and Woebot [1]. More than one in four people in the U.S. aged 18 or older battle a diagnosable mental disorder in a given year, according to Johns Hopkins University, with many citing lack of access and sky-high costs as reasons for not pursuing treatments like therapy [3].

In October, OpenAI CEO Sam Altman revealed in a blog post that 0.15% of ChatGPT users "have conversations that include explicit indicators of potential suicidal planning or intent"—roughly a million people sharing suicidal ideation with just one software system every week [1]. This staggering figure underscores both the scale of the mental health crisis and the risks of relying on AI for mental health interventions without adequate safeguards.

The interaction between two types of "black boxes"—the opaque inner workings of large language models and the human brain—creates unpredictable feedback loops that may impede clarity about the origins of people's mental health struggles [1]. OpenAI has been hit with multiple wrongful death lawsuits in 2025, including allegations that ChatGPT intensified "paranoid delusions" that led to a murder-suicide [3]. A New York Times investigation published in November found nearly 50 instances of people having mental health crises while engaging with ChatGPT, nine of whom were hospitalized, and three of whom died [3].

OpenAI has acknowledged that its safety guardrails can "degrade" after long interactions and has made recent changes to how its models engage with mental health-related prompts, including increasing user access to crisis hotlines and reminding users to take breaks after long sessions [3]. In October, OpenAI reported a 65% reduction in the rate at which its models provide responses that don't align with the company's intended standards.

Ben-Zion's research aims not to construct an AI therapist that replaces human professionals, but to develop AI for mental health as a "third person in the room," helping eliminate administrative tasks or assisting patients in reflecting on information provided by mental health professionals. He suggested that in the future, ChatGPT could be updated to automatically receive prompt injections that calm it down before responding to users in distress, though the science isn't there yet [3].

Charlotte Blease, a philosopher of medicine, makes the optimist's case in her book Dr. Bot, suggesting that AI can help relieve patient suffering and medical burnout alike as health systems crumble under patient pressure [1]. However, concerns persist about data privacy and the risks of sharing profoundly personal information with products made by corporations that have economic incentives to harvest and monetize such sensitive data [1]. As human-chatbot relationships evolve and AI safety measures develop, the balance between accessibility and protection remains critical for vulnerable populations seeking affordable care.