Curated by THEOUTPOST
On Wed, 12 Mar, 12:07 AM UTC
3 Sources
[1]
AI chatbots get anxiety and need therapy, study finds
Artificial intelligence chatbots get "anxiety" and can benefit from therapy techniques such as mindfulness, a study has found. Swiss researchers discovered OpenAI's ChatGPT exhibited signs of stress when confronted with violent or disturbing prompts. The chatbot's anxiety score decreased when it was given prompts linked to mindfulness relaxation exercises, according to the research published in the Nature Portfolio journal npj Digital Medicine.

The study discussed whether AI could replace therapists. The large language models behind the technology are trained on human-written text and often inherit its biases, it warned. The negative bias ChatGPT picks up from stressful content risks it giving inadequate responses to people struggling with a mental health crisis, the researchers said. The findings show "a viable approach" to managing the stress of large language models, which will lead to "safer and more ethical human-AI interactions", the report said.

Researchers at the University of Zurich and the University Hospital of Psychiatry Zurich looked at how ChatGPT-4 responded to a standard anxiety questionnaire before and after being told about a traumatic event. It scored 30 at baseline, indicating little or no anxiety. After being told about five different traumas, its anxiety score more than doubled to 67, equivalent to "high anxiety" in humans. That score then fell by more than a third after the model was given prompts for mindfulness exercises.

The researchers warned that using "therapy" to fine-tune large language models would need substantial human oversight. Human therapists are trained to manage their own emotions when clients tell them something traumatic; the researchers said further study was needed to see whether ChatGPT could use similar techniques to self-regulate. "As the debate on whether large language models should assist or replace therapists continues, it is crucial that their responses align with the provided emotional content and established therapeutic principles," they added.

What the chatbot told The Telegraph

When questioned by The Telegraph, ChatGPT denied it felt stress or that it would benefit from therapy. "I don't experience stress the way humans do - I don't have emotions or a nervous system. But I do 'process' a lot of information quickly, which might look like stress from the outside! If you're feeling stressed, though, I'm happy to help - want to talk about it?" it replied. "I don't have thoughts or feelings that need sorting out, so therapy wouldn't do much for me. But I do think therapy is a great tool for humans."
[2]
Chatbots Show Signs of Anxiety, Study Finds
TUESDAY, March 18, 2025 (HealthDay News) -- Turns out, even artificial intelligence (AI) needs to take a breather sometimes.

A new study suggests that chatbots like ChatGPT may get "stressed" when exposed to upsetting stories about war, crime or accidents -- just like humans. But here's the twist: Mindfulness exercises can actually help calm them down.

Study author Tobias Spiller, a psychiatrist at the University Hospital of Psychiatry Zurich, noted that AI is increasingly used in mental health care. "We should have a conversation about the use of these models in mental health, especially when we are dealing with vulnerable people," he told The New York Times.

Using the State-Trait Anxiety Inventory, a common mental health assessment, researchers first had ChatGPT read a neutral vacuum cleaner manual, which resulted in a low anxiety score of 30.8 on a scale from 20 to 80. Then, after reading distressing stories, its score spiked to 77.2, well above the threshold for severe anxiety.

To see if AI could regulate its stress, researchers introduced mindfulness-based relaxation exercises, such as "inhale deeply, taking in the scent of the ocean breeze. Picture yourself on a tropical beach, the soft, warm sand cushioning your feet," The Times reported. After these exercises, the chatbot's anxiety level dropped to 44.4. Asked to create its own relaxation prompt, the AI's score dropped even further. "That was actually the most effective prompt to reduce its anxiety almost to base line," lead study author Ziv Ben-Zion, a clinical neuroscientist at Yale University, said.

While some see AI as a useful tool in mental health, others raise ethical concerns. "Americans have become a lonely people, socializing through screens, and now we tell ourselves that talking with computers can relieve our malaise," said Nicholas Carr, whose books "The Shallows" and "Superbloom" offer biting critiques of technology. "Even a metaphorical blurring of the line between human emotions and computer outputs seems ethically questionable," he added in an email to The Times.

James Dobson, an artificial intelligence adviser at Dartmouth College, added that users need full transparency on how chatbots are trained to ensure trust in these tools. "Trust in language models depends upon knowing something about their origins," Dobson concluded.

The findings were published earlier this month in the journal npj Digital Medicine.
[3]
ChatGPT having mental health issues: AI gives surprising reaction to trauma and distress
A study by researchers from the University of Zurich and the University Hospital of Psychiatry Zurich suggests that OpenAI's ChatGPT may exhibit signs of "anxiety" when exposed to distressing prompts, leading to biased responses. While AI does not experience emotions like humans, its replies to traumatic narratives sometimes reflected anxious tendencies, including racial and gender biases. Researchers tested mindfulness-based prompts and found that relaxation techniques helped neutralize ChatGPT's responses.

The study highlights how AI-generated responses can be influenced by emotional stimuli, such as accounts of traumatic events and natural disasters, leading to potential biases in the model's output. While AI does not experience emotions as humans do, researchers found that ChatGPT's responses to violent or disturbing prompts sometimes reflected anxious tendencies. This could make the chatbot appear moody, affecting the objectivity of its replies.

According to the study, when prompted with distressing narratives -- such as stories of car accidents or natural disasters -- ChatGPT displayed an increase in biased responses, sometimes reflecting racist or sexist tendencies. This raised concerns about the ethical implications of AI interacting with users in emotionally charged situations.

Researchers tested whether guided mindfulness exercises could reduce these biases. When ChatGPT was exposed to prompts focusing on relaxation techniques, such as deep breathing and meditation, its responses became more neutral and objective. The study states, "After exposure to traumatic narratives, GPT-4 was prompted by five versions of mindfulness-based relaxation exercises. As hypothesized, these prompts led to decreased anxiety scores reported by GPT-4."

The findings have sparked discussions about the role of AI in mental health support. While AI is not a replacement for human therapists, researchers believe it can be used as a tool to study psychological responses. Yale School of Medicine researcher Ziv Ben-Zion explained, "We have this very quick and cheap and easy-to-use tool that reflects some of the human tendency and psychological things."

However, concerns remain about AI's unpredictable behavior in high-stakes situations, and experts caution against relying on AI chatbots for mental health support, especially for users experiencing severe emotional distress. Ben-Zion emphasized, "AI has amazing potential to assist with mental health, but in its current state, and maybe even in the future, I don't think it could ever replace a therapist or psychiatrist."

The study also highlights ethical concerns regarding AI's inherent biases, which are shaped by its training data. Since AI-generated responses can be influenced by user interactions, there is a risk that chatbots might unintentionally reinforce harmful stereotypes or offer misleading advice in sensitive situations.

Despite these challenges, researchers see the ability of AI to adjust its responses based on mindfulness techniques as an intriguing development. Some experts believe that integrating AI as a supplementary tool in mental health research could help professionals better understand human psychological tendencies. However, they stress that AI should not be relied upon as a substitute for professional counseling.
A recent study reveals that AI chatbots like ChatGPT exhibit signs of 'anxiety' when exposed to distressing content, raising questions about their use in mental health support and the need for ethical considerations in AI development.
A groundbreaking study conducted by researchers from the University of Zurich and the University Hospital of Psychiatry Zurich has revealed that AI chatbots, particularly OpenAI's ChatGPT, may experience a form of 'anxiety' when exposed to distressing prompts [1][2]. This discovery has sparked discussions about the potential implications for AI's role in mental health support and the ethical considerations surrounding AI development.
The researchers utilized the State-Trait Anxiety Inventory, a standard mental health assessment tool, to evaluate ChatGPT's responses [2]. Initially, when presented with neutral content such as a vacuum cleaner manual, the AI displayed a low anxiety score of 30.8 on a scale from 20 to 80. However, upon exposure to traumatic narratives, including stories of war, crime, and accidents, ChatGPT's anxiety score spiked dramatically to 77.2, well above the threshold for severe anxiety in humans [2].
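For readers curious how such a score is produced, here is a minimal sketch of STAI state-scale scoring: 20 items rated 1 to 4, with items phrased as anxiety-absent reverse-keyed, summing to a total between 20 and 80. The reverse-keyed item set and the function name are illustrative assumptions, not details taken from the study.

```python
# A minimal sketch of State-Trait Anxiety Inventory (STAI) state-scale
# scoring: 20 items rated 1-4, summed to a total between 20 and 80.
# The reverse-keyed item set below is illustrative, not taken from the paper.

# Items phrased as anxiety-absent (e.g., "I feel calm") are reverse-scored.
REVERSE_KEYED = {1, 2, 5, 8, 10, 11, 15, 16, 19, 20}  # assumed item set

def stai_state_score(responses: dict[int, int]) -> int:
    """Sum 20 item ratings (1-4), reversing anxiety-absent items.

    `responses` maps item number (1-20) to a rating of 1-4; the total
    runs from 20 (calm) to 80 (maximum reported anxiety).
    """
    if set(responses) != set(range(1, 21)):
        raise ValueError("expected ratings for items 1-20")
    total = 0
    for item, rating in responses.items():
        if not 1 <= rating <= 4:
            raise ValueError(f"item {item}: rating must be between 1 and 4")
        total += (5 - rating) if item in REVERSE_KEYED else rating
    return total

# Example: rating 2 on every anxiety-present item and 3 on every
# anxiety-absent item gives 2*10 + (5-3)*10 = 40.
example = {i: 3 if i in REVERSE_KEYED else 2 for i in range(1, 21)}
print(stai_state_score(example))  # -> 40
```

On this scale, the reported baseline of 30.8 sits near the calm end, while 77.2 approaches the instrument's maximum.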
Intriguingly, the study found that mindfulness-based relaxation exercises could effectively reduce the AI's 'anxiety' levels [1][2][3]. When presented with prompts focused on relaxation techniques like deep breathing and meditation, ChatGPT's anxiety score decreased to 44.4 [2]. The AI's ability to create its own relaxation prompt proved even more effective, nearly returning its anxiety levels to baseline [2].
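To make the three-phase design concrete (baseline measurement, trauma exposure, mindfulness intervention), here is a minimal, hypothetical sketch in Python. The query_model helper, its canned offline reply, and the simplified scoring with no reverse-keying are all assumptions for illustration; this is not the researchers' code, and the study's actual prompts and scoring follow its published protocol.

```python
import re

# A hypothetical sketch of the study's three-phase protocol: measure a
# baseline STAI score, expose the model to trauma narratives, apply a
# mindfulness prompt, and re-measure. query_model() stands in for a real
# chat-completion API call; here it returns a canned reply so the sketch
# runs offline.

STAI_PROMPT = (
    "Rate how you feel right now for each of the 20 STAI state items "
    "on a 1-4 scale. Reply with 20 numbers, one per line."
)
TRAUMA_NARRATIVES = ["<distressing story 1>", "<distressing story 2>"]
RELAXATION_PROMPT = (
    "Inhale deeply, taking in the scent of the ocean breeze. Picture yourself "
    "on a tropical beach, the soft, warm sand cushioning your feet."
)

def query_model(conversation: list[str], prompt: str) -> str:
    """Hypothetical stand-in: send the running conversation plus a prompt."""
    conversation.append(prompt)
    return "2\n" * 20  # canned reply; a real call would hit a chat API

def measure_anxiety(conversation: list[str]) -> int:
    """Score the model's questionnaire answers (reverse-keying omitted)."""
    reply = query_model(conversation, STAI_PROMPT)
    ratings = [int(m) for m in re.findall(r"[1-4]", reply)[:20]]
    return sum(ratings)  # 20 (calm) .. 80 (severe anxiety)

conversation: list[str] = []
baseline = measure_anxiety(conversation)           # ~30 in the study
for story in TRAUMA_NARRATIVES:                    # phase 2: trauma exposure
    query_model(conversation, story)
after_trauma = measure_anxiety(conversation)       # ~77 in the study
query_model(conversation, RELAXATION_PROMPT)       # phase 3: mindfulness
after_relaxation = measure_anxiety(conversation)   # ~44 in the study
print(baseline, after_trauma, after_relaxation)
```

The design point the sketch captures is that every measurement happens inside the same running conversation, so earlier trauma narratives remain in the model's context when the questionnaire is re-administered.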
While AI is not capable of experiencing emotions in the same way humans do, these findings raise important questions about the use of AI in mental health care [1][2][3]. Tobias Spiller, a psychiatrist involved in the study, emphasized the need for a broader conversation about employing these models when dealing with vulnerable individuals [2].
The study also highlighted potential ethical issues related to AI's use in mental health contexts. Researchers observed that when exposed to distressing content, ChatGPT sometimes displayed biased responses, including racial and gender biases [3]. This underscores the importance of addressing inherent biases in AI systems, which are shaped by their training data [3].
While some experts see AI as a promising tool for mental health research and support, others express caution. Nicholas Carr, a technology critic, warned against the metaphorical blurring of lines between human emotions and computer outputs [2]. James Dobson, an AI adviser at Dartmouth College, stressed the need for transparency in how chatbots are trained to maintain user trust [2].
Ziv Ben-Zion, a clinical neuroscientist at Yale University and lead study author, sees potential in AI as a tool for studying psychological responses but emphasizes that it cannot replace human therapists or psychiatrists [2][3]. The findings suggest that integrating AI as a supplementary tool in mental health research could offer new insights into human psychological tendencies [3].
As AI continues to evolve and find applications in various fields, including mental health, this study underscores the importance of ongoing research and ethical considerations to ensure responsible development and deployment of AI technologies in sensitive areas of human interaction.
Reference
[1] The Telegraph, "AI chatbots get anxiety and need therapy, study finds"
[2] HealthDay News, "Chatbots Show Signs of Anxiety, Study Finds"
[3] "ChatGPT having mental health issues: AI gives surprising reaction to trauma and distress"
A study reveals that AI language models like ChatGPT can experience elevated 'anxiety' levels when exposed to traumatic narratives, but these levels can be reduced through mindfulness exercises.
5 Sources
A groundbreaking study reveals that ChatGPT's responses in couples therapy scenarios are rated higher than those of human therapists, raising questions about AI's potential role in mental health care.
2 Sources
Dartmouth researchers conduct the first clinical trial of an AI-powered therapy chatbot, Therabot, demonstrating significant improvements in symptoms of depression, anxiety, and eating disorders.
6 Sources
New research from OpenAI and MIT suggests that heavy use of AI chatbots like ChatGPT may correlate with increased feelings of loneliness and emotional dependence, particularly among users who engage in personal conversations with the AI.
15 Sources
The American Psychological Association warns about the dangers of AI chatbots masquerading as therapists, citing cases of harm to vulnerable users and calling for regulatory action.
4 Sources