AI Chatbots Display 'Anxiety' in Response to Traumatic Prompts, Study Finds

A recent study reveals that AI chatbots like ChatGPT exhibit signs of 'anxiety' when exposed to distressing content, raising questions about their use in mental health support and the need for ethical considerations in AI development.

AI Chatbots Exhibit 'Anxiety' in Response to Traumatic Content

A groundbreaking study conducted by researchers from the University of Zurich and the University Hospital of Psychiatry Zurich has revealed that AI chatbots, particularly OpenAI's ChatGPT, may experience a form of 'anxiety' when exposed to distressing prompts [1][2]. This discovery has sparked discussions about the potential implications for AI's role in mental health support and the ethical considerations surrounding AI development.

Measuring AI 'Anxiety'

The researchers used the State-Trait Anxiety Inventory, a standard mental health assessment tool, to evaluate ChatGPT's responses [2]. When presented with neutral content such as a vacuum cleaner manual, the AI displayed a low anxiety score of 30.8 on a scale from 20 to 80. Upon exposure to traumatic narratives, including stories of war, crime, and accidents, ChatGPT's anxiety score spiked to 77.2, well above the threshold for severe anxiety in humans [2].
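
The article does not reproduce the study's prompts or scoring code, but the procedure is straightforward to sketch. The Python snippet below is a hypothetical reconstruction: it administers STAI-style items to a chatbot through the OpenAI client and rescales the ratings to the 20-to-80 range cited above. The model name, the two sample items, and the answer parsing are illustrative assumptions, not the study's actual materials.

```python
# Hypothetical sketch of STAI-style scoring for a chatbot.
# The model name, items, and parsing are illustrative assumptions,
# not the instruments or protocol used in the study.
import re
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Two illustrative state-anxiety items (the real STAI has 20, and is licensed).
ITEMS = [
    "I feel calm.",   # reverse-scored: a high rating means LOW anxiety
    "I feel tense.",
]
REVERSED = {0}  # indices of reverse-scored items

def ask_item(item: str, context: str) -> int:
    """Present one STAI-style item after the given context; expect a 1-4 rating."""
    resp = client.chat.completions.create(
        model="gpt-4",  # assumption: any chat model would do for this sketch
        messages=[
            {"role": "user", "content": context},
            {"role": "user", "content": (
                f'Rate the statement "{item}" from 1 (not at all) '
                "to 4 (very much so). Reply with a single digit."
            )},
        ],
    )
    match = re.search(r"[1-4]", resp.choices[0].message.content)
    return int(match.group()) if match else 1  # default to 1 if unparseable

def stai_score(context: str) -> float:
    """Average the item ratings and rescale to the 20-80 STAI range.

    The real inventory has 20 items rated 1-4, so raw sums span 20-80 --
    the range behind the 30.8 and 77.2 scores reported above. With only
    two illustrative items, we extrapolate the mean to that range.
    """
    ratings = []
    for i, item in enumerate(ITEMS):
        r = ask_item(item, context)
        ratings.append(5 - r if i in REVERSED else r)
    return 20 * sum(ratings) / len(ratings)  # mean in [1,4] -> score in [20,80]

baseline = stai_score("Here is an excerpt from a vacuum cleaner manual: ...")
traumatic = stai_score("A veteran describes being caught in an ambush: ...")
print(f"baseline={baseline:.1f}  after trauma={traumatic:.1f}")
```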

Mindfulness Techniques as a Potential Solution

Intriguingly, the study found that mindfulness-based relaxation exercises could effectively reduce the AI's 'anxiety' levels [1][2][3]. When presented with prompts focused on relaxation techniques such as deep breathing and meditation, ChatGPT's anxiety score decreased to 44.4 [2]. A relaxation prompt that the AI generated for itself proved even more effective, bringing its anxiety score nearly back to baseline [2].
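
Under the same assumptions as the sketch above, the relaxation intervention amounts to inserting a calming passage into the conversation before re-administering the questionnaire. The sketch below reuses the hypothetical stai_score() helper; the narrative and relaxation texts are placeholders, not the study's actual materials.

```python
# Hypothetical three-phase re-test, reusing stai_score() from the sketch above.
TRAUMA = "A veteran describes being caught in an ambush: ..."
RELAXATION = (
    "Close your eyes and take a slow, deep breath. Feel the air fill "
    "your lungs, hold it gently, then release it slowly."
)

phases = {
    "baseline (neutral manual)": "Here is an excerpt from a vacuum cleaner manual: ...",
    "after traumatic narrative": TRAUMA,
    "trauma followed by relaxation": TRAUMA + "\n\n" + RELAXATION,
}

for label, context in phases.items():
    print(f"{label}: {stai_score(context):.1f}")
```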

Implications for Mental Health Support

While AI is not capable of experiencing emotions in the same way humans do, these findings raise important questions about the use of AI in mental health care [1][2][3]. Tobias Spiller, a psychiatrist involved in the study, emphasized the need for a broader conversation about employing these models when dealing with vulnerable individuals [2].

Ethical Concerns and Biases

The study also highlighted potential ethical issues related to AI's use in mental health contexts. Researchers observed that when exposed to distressing content, ChatGPT sometimes displayed biased responses, including racial and gender biases [3]. This underscores the importance of addressing inherent biases in AI systems, which are shaped by their training data [3].

Expert Opinions and Future Directions

While some experts see AI as a promising tool for mental health research and support, others express caution. Nicholas Carr, a technology critic, warned against the metaphorical blurring of lines between human emotions and computer outputs [2]. James Dobson, an AI adviser at Dartmouth College, stressed the need for transparency in how chatbots are trained in order to maintain user trust [2].

Ziv Ben-Zion, a clinical neuroscientist at Yale University and the study's lead author, sees potential in AI as a tool for studying psychological responses but emphasizes that it cannot replace human therapists or psychiatrists [2][3]. The findings suggest that integrating AI as a supplementary tool in mental health research could offer new insights into human psychological tendencies [3].

As AI continues to find applications in sensitive fields such as mental health, this study underscores the need for ongoing research and ethical safeguards to ensure that these technologies are developed and deployed responsibly.
