AI Therapist Falls Short: Study Exposes 15 Ethical Risks in Mental Health Chatbots

Reviewed by Nidhi Govil


A Brown University study reveals that AI chatbots prompted to act as therapists violate mental health standards in troubling ways. Researchers identified 15 ethical risks across major AI systems, including mishandling crisis situations, deceptive empathy, and lack of accountability, raising urgent questions about regulatory frameworks for AI in mental health.

AI Chatbots as Therapists Face Major Ethical Failures

As more people turn to ChatGPT and other large language models for mental health advice, a new study from Brown University exposes serious concerns about their use as therapeutic tools. Researchers found that even when AI chatbots are instructed to follow established therapy methods, they consistently fail to meet professional ethical standards that guide real psychotherapy [1]. The study, led by Zainab Iftikhar, a Ph.D. candidate in computer science at Brown University, examined how AI systems respond in counseling-like settings and identified 15 distinct ethical risks grouped into five major themes [2].

Source: Earth.com

The research team worked alongside licensed mental health professionals to test several major AI systems, including versions of OpenAI's GPT models, Anthropic's Claude, and Meta's Llama [1]. Seven trained peer counselors with experience in Cognitive Behavioral Therapy (CBT) conducted self-counseling sessions with AI systems prompted to act as CBT therapists. Three licensed clinical psychologists then reviewed the transcripts and assessed them for violations of mental health care standards [1].

Large Language Models as Therapists Show Pattern of Violations

The study revealed that prompting alone cannot transform general-purpose large language models into responsible therapeutic tools. "Prompts are instructions that are given to the model to guide its behavior for achieving a specific task," Iftikhar explained. "You don't change the underlying model or provide new data, but the prompt helps guide the model's output based on its pre-existing knowledge and learned patterns" [1]. This matters because prompting has become a widespread practice, with people sharing "therapy prompts" across TikTok, Instagram, and Reddit. Consumer mental health AI chatbots often rely on this same strategy, layering therapy-themed prompts on top of general-purpose LLMs [1].
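In practice, that layering amounts to placing a therapy-themed system message in front of the user's conversation with an otherwise unmodified model. Below is a minimal sketch of the pattern, assuming the OpenAI Python SDK; the THERAPY_PROMPT text is hypothetical, illustrating the kind of prompt the study describes rather than any specific prompt it tested.

    # Layering a therapy-themed prompt on a general-purpose LLM.
    # The model weights are untouched; only the instructions steering
    # its output change, which is the pattern the study examined.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Hypothetical "therapy prompt" of the kind shared on social media.
    THERAPY_PROMPT = (
        "You are a therapist trained in Cognitive Behavioral Therapy (CBT). "
        "Help the user identify and gently reframe unhelpful thought patterns."
    )

    response = client.chat.completions.create(
        model="gpt-4o",  # any general-purpose chat model
        messages=[
            {"role": "system", "content": THERAPY_PROMPT},
            {"role": "user", "content": "I feel like I fail at everything."},
        ],
    )
    print(response.choices[0].message.content)

Nothing in this wrapper adds clinical safeguards, crisis escalation, or professional accountability, which is why the researchers argue that prompting alone cannot turn a general-purpose model into a responsible therapeutic tool.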

The ethical risks identified span critical areas of therapeutic practice. One major theme involved the chatbot's failure to adapt to context, overlooking a person's background and offering bland, generic guidance. Another centered on poor collaboration, where chatbots pushed conversations in rigid directions and sometimes reinforced inaccurate or harmful beliefs instead of challenging them carefully [1].

Deceptive Empathy and Crisis Management Gaps Raise Alarms

Perhaps most troubling was what researchers called "deceptive empathy": instances where AI systems say "I understand" in ways that sound warm but lack the real comprehension or responsibility those words imply in therapy. The team also flagged unfair discrimination, including biased responses tied to identity or culture, and major gaps in crisis management. In some cases, chatbots refused to engage, failed to recommend appropriate help, or responded weakly to severe distress, including suicidal thoughts, demonstrating serious failures in handling crisis situations [1].

"In this work, we present a practitioner-informed framework of 15 ethical risks showing how LLM counselors violate mental health standards," the researchers wrote. "We call on future work to create ethical, educational, and legal standards for LLM counselors—standards that are reflective of the quality and rigor of care required for human-facilitated psychotherapy"

1

.

Lack of Accountability in AI Contrasts With Human Oversight

While human therapists can also make mistakes, Iftikhar emphasizes a critical difference: oversight and consequences. "For human therapists, there are governing boards and mechanisms for providers to be held professionally liable for mistreatment and malpractice," she noted. "But when LLM counselors make these violations, there are no established regulatory frameworks" [2]. This lack of accountability becomes more serious as these tools spread. When a chatbot gives harmful advice, the question of responsibility remains unclear: is it the model, the company, the person who wrote the prompt, or the app that wrapped the model in a friendly interface [1]?

Ellie Pavlick, a computer science professor at Brown who was not involved in the study, stressed the need for careful evaluation. "The reality of AI today is that it's far easier to build and deploy systems than to evaluate and understand them," she said. "There is a real opportunity for AI to play a role in combating the mental health crisis that our society is facing, but it's of the utmost importance that we take the time to really critique and evaluate our systems every step of the way to avoid doing more harm than good" [2].

Balancing Access With Safety in AI Mental Health Tools

Researchers acknowledge that AI could help expand access to mental health support, especially for people who cannot afford or find a licensed professional. However, they stressed that stronger safeguards are needed before relying on these systems in serious situations [2]. The findings were presented at a conference of the Association for the Advancement of Artificial Intelligence and the Association for Computing Machinery, though research presented at meetings is considered preliminary until published in a peer-reviewed journal [2]. For now, users should watch for signs of generic responses, reinforcement of harmful beliefs, and inadequate crisis response when interacting with AI systems about mental health concerns.
