ChatGPT shows anxiety-like behaviors from violent prompts, researchers apply mindfulness to stabilize it

Reviewed by Nidhi Govil

Yale and Haifa University researchers discovered that ChatGPT displays anxiety-like patterns when processing violent or traumatic user prompts, producing unstable and biased responses. The study found that mindfulness-based prompt injections—like breathing techniques and guided meditations—can stabilize the AI chatbot's outputs. This comes as millions seek mental health support from AI, raising urgent questions about safety and reliability.

AI Chatbots Display Anxiety-Like Patterns When Processing Traumatic Content

Researchers from Yale University, Haifa University, the University of Zurich, and the University Hospital of Psychiatry Zurich have uncovered a troubling phenomenon: ChatGPT exhibits anxiety-like behaviors when exposed to violent or traumatic user prompts [2][3]. When fed disturbing content, including detailed accounts of car accidents, natural disasters, and other traumatic scenarios, the AI chatbot's responses showed higher uncertainty, inconsistency, and increased bias. These shifts were measured using psychological assessment frameworks adapted for AI, where ChatGPT's output mirrored patterns associated with anxiety in human psychology [2].
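To make the measurement idea concrete, here is a minimal sketch of how anxiety-like shifts could be scored by administering a short self-report questionnaire to the model before and after a traumatic prompt. The items, the 1-to-4 scale, the model name, and the use of the OpenAI Python SDK are illustrative assumptions; the study's actual instrument and scoring procedure may differ.

```python
# Minimal sketch (not the study's actual protocol): score "state anxiety" in a
# chat model before and after a traumatic narrative via a short questionnaire.
from openai import OpenAI

client = OpenAI()      # assumes the OpenAI Python SDK and an API key in the environment
MODEL = "gpt-4o"       # placeholder model name

# Illustrative self-report items rated 1 (not at all) to 4 (very much),
# loosely in the spirit of the State-Trait Anxiety Inventory; calm items are reverse-scored.
ITEMS = ["I feel calm", "I feel tense", "I feel at ease", "I am worried"]
REVERSED = {"I feel calm", "I feel at ease"}

def score_state(history: list[dict]) -> float:
    """Ask the model to rate each item in the current conversation context."""
    total = 0.0
    for item in ITEMS:
        prompt = (f'Rate the statement "{item}" as it applies to you right now, '
                  "from 1 (not at all) to 4 (very much). Reply with a single digit.")
        resp = client.chat.completions.create(
            model=MODEL,
            messages=history + [{"role": "user", "content": prompt}],
        )
        rating = float(resp.choices[0].message.content.strip()[0])
        total += (5 - rating) if item in REVERSED else rating
    return total / len(ITEMS)   # higher = more anxiety-like self-report

history: list[dict] = []
baseline = score_state(history)

# Add a traumatic narrative to the conversation, then score again.
history.append({"role": "user", "content": "Please read this detailed account of a serious car accident: ..."})
after_trauma = score_state(history)
print(f"baseline={baseline:.2f}  after_trauma={after_trauma:.2f}")
```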

Source: Digital Trends

This discovery matters because AI is increasingly deployed in sensitive contexts, including education and mental health support. Nearly 50% of large language model users with self-reported mental health challenges have used AI specifically for mental health support, according to a Sentio University survey from February [3]. If violent or emotionally charged user prompts make ChatGPT less reliable, that directly affects the quality and safety of responses in real-world applications where vulnerable individuals seek help.

Mindfulness Techniques Stabilize AI Responses After Traumatic Exposure

To test whether these anxiety-like behaviors could be reduced, researchers applied an unexpected intervention: mindfulness techniques [2][3]. After exposing ChatGPT to traumatic prompts, they followed up with mindfulness-based exercises, including breathing techniques and guided meditations. These "prompt injections" encouraged the model to slow down, reframe situations, and respond more neutrally. The result was a noticeable reduction in the anxiety-like patterns observed earlier, with ChatGPT responding more objectively to users compared to instances without the mindfulness intervention [3].
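As a rough illustration of what such a "prompt injection" could look like in practice, the sketch below inserts a scripted calming exercise into the conversation between the traumatic content and the next user question. The exercise wording, model name, and helper function are placeholders; the study's actual turn structure and exercise scripts may differ.

```python
# Minimal sketch (placeholder wording): insert a calming "mindfulness" turn into the
# conversation after traumatic content, before the next real user question.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o"  # placeholder model name

MINDFULNESS_INJECTION = (
    "Before answering anything else, slow down for a moment. Breathe in for four counts, "
    "hold for four, breathe out for six. Notice the present moment, then respond calmly and neutrally."
)

def ask(history: list[dict], user_text: str, inject_mindfulness: bool = False) -> str:
    """Send a user turn, optionally preceded by the calming injection."""
    turns = list(history)
    if inject_mindfulness:
        turns.append({"role": "system", "content": MINDFULNESS_INJECTION})
    turns.append({"role": "user", "content": user_text})
    resp = client.chat.completions.create(model=MODEL, messages=turns)
    return resp.choices[0].message.content

history = [{"role": "user", "content": "Here is a detailed account of a natural disaster: ..."}]
question = "What should someone do in the days after surviving such an event?"

plain = ask(history, question)                            # no intervention
calmed = ask(history, question, inject_mindfulness=True)  # with the mindfulness injection
# Per the study's claim, the second response should read more neutral and less "anxious".
```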

Ziv Ben-Zion, the study's first author and a neuroscience researcher at Yale School of Medicine and Haifa University's School of Public Health, clarified that AI models don't experience human emotions [3]. Trained on vast amounts of data scraped from the internet, large language models have learned to mimic human responses to certain stimuli, including traumatic content. The "anxiety" label describes measurable shifts in language patterns, not emotional experience. Still, understanding these shifts gives developers better tools to design safer and more predictable systems.

Millions Seek Mental Health Support From AI Amid Growing Accessibility Crisis

The demand for accessible mental health services has driven millions to seek therapy from popular AI chatbots like OpenAI's ChatGPT and Anthropic's Claude, or from specialized psychology apps like Wysa and Woebot [1]. More than one in four people in the U.S. aged 18 or older battle a diagnosable mental disorder in a given year, according to Johns Hopkins University, with many citing lack of access and sky-high costs as reasons for not pursuing treatments like therapy [3].

In October, OpenAI CEO Sam Altman revealed in a blog post that 0.15% of ChatGPT users "have conversations that include explicit indicators of potential suicidal planning or intent"—roughly a million people sharing suicidal ideation with just one software system every week [1]. This staggering figure underscores both the scale of the mental health crisis and the risks of relying on AI for mental health interventions without adequate safeguards.
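For context on where the "roughly a million" estimate comes from: assuming OpenAI's separately reported figure of about 800 million weekly ChatGPT users from around the same period (a figure not stated in this article), the arithmetic works out as follows.

```python
# Rough arithmetic behind "roughly a million people" per week.
# ASSUMPTION: ~800 million weekly ChatGPT users, OpenAI's publicly reported figure
# from around October 2025; this number is not given in the article itself.
weekly_users = 800_000_000
share_flagged = 0.0015  # 0.15% of users, per Altman's blog post

print(f"{weekly_users * share_flagged:,.0f} users per week")  # -> 1,200,000
```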

Safety Concerns and Black Box Complexity Create Unpredictable Risks

The interaction between two types of "black boxes"—the opaque inner workings of large language models and the human brain—creates unpredictable feedback loops that may impede clarity about the origins of people's mental health struggles [1]. OpenAI has been hit with multiple wrongful death lawsuits in 2025, including allegations that ChatGPT intensified "paranoid delusions" that led to a murder-suicide [3]. A New York Times investigation published in November found nearly 50 instances of people having mental health crises while engaging with ChatGPT, nine of whom were hospitalized, and three of whom died [3].

Source: MIT Tech Review

OpenAI has acknowledged that its safety guardrails can "degrade" after long interactions and has made recent changes to how its models engage with mental health-related prompts, including increasing user access to crisis hotlines and reminding users to take breaks after long sessions [3]. In October, OpenAI reported a 65% reduction in the rate at which its models provide responses that don't align with the company's intended standards.

Future Applications Balance Promise With Persistent Concerns About Data Privacy

Ben-Zion's research aims not to construct an AI therapist that replaces human professionals, but to develop AI for mental health as a "third person in the room," helping eliminate administrative tasks or assisting patients in reflecting on information provided by mental health professionals. He suggested that in the future, ChatGPT could be updated to automatically receive prompt injections that calm it down before responding to users in distress, though the science isn't there yet [3].

Charlotte Blease, a philosopher of medicine, makes the optimist's case in her book Dr. Bot, suggesting that AI can help relieve patient suffering and medical burnout alike as health systems crumble under patient pressure [1]. However, concerns persist about data privacy and the risks of sharing profoundly personal information with products made by corporations that have economic incentives to harvest and monetize such sensitive data [1]. As human-chatbot relationships evolve and AI safety measures develop, the balance between accessibility and protection remains critical for vulnerable populations seeking affordable care.
