On Thu, 13 Feb, 8:03 AM UTC
2 Sources
[1]
AI vs. Human Therapists: Study Finds ChatGPT Responses Rated Higher - Neuroscience News
Summary: A new study suggests that ChatGPT's responses in psychotherapy scenarios are often rated higher than those written by human therapists. Researchers found that participants struggled to distinguish between AI-generated and therapist-written responses in couple's therapy vignettes. ChatGPT's responses were generally longer and contained more nouns and adjectives, providing greater contextualization. This additional detail may have contributed to higher ratings on core psychotherapy principles. The findings highlight AI's potential role in therapeutic interventions while raising ethical and practical concerns about its integration into mental health care. Researchers emphasize the need for professionals to engage with AI developments to ensure responsible oversight.

When it comes to comparing responses written by psychotherapists to those written by ChatGPT, the latter are generally rated higher, according to a study published February 12, 2025, in the open-access journal PLOS Mental Health by H. Dorian Hatch, from The Ohio State University and co-founder of Hatch Data and Mental Health, and colleagues.

Whether machines could be therapists is a question that has received increased attention given some of the benefits of working with generative artificial intelligence (AI). Although previous research has found that humans can struggle to tell the difference between responses from machines and humans, recent findings suggest that AI can write empathically, and that the generated content is rated highly by both mental health professionals and voluntary service users, to the extent that it is often favored over content written by professionals.

In their new study involving over 800 participants, Hatch and colleagues showed that, although differences in language patterns were noticed, individuals could rarely identify whether responses were written by ChatGPT or by therapists when presented with 18 couple's therapy vignettes. This finding echoes Alan Turing's prediction that humans would be unable to tell the difference between responses written by a machine and those written by a human. In addition, the responses written by ChatGPT were generally rated higher on core psychotherapy guiding principles.

Further analysis revealed that the responses generated by ChatGPT were generally longer than those written by the therapists. Even after controlling for length, ChatGPT used more nouns and adjectives than the therapists. Since nouns can be used to describe people, places, and things, and adjectives can be used to provide more context, this could mean that ChatGPT contextualizes more extensively than the therapists. More extensive contextualization may have led respondents to rate the ChatGPT responses higher on the common factors of therapy (components common to all modalities of therapy that help achieve the desired results).

According to the authors, these results may be an early indication that ChatGPT has the potential to improve psychotherapeutic processes. In particular, this work may lead to the development of different methods of testing and creating psychotherapeutic interventions.
Given the mounting evidence that generative AI can be useful in therapeutic settings, and the likelihood that it will be integrated into those settings sooner rather than later, the authors call for mental health experts to expand their technical literacy in order to ensure that AI models are carefully trained and supervised by responsible professionals, thus improving both the quality of and access to care.

The authors add: "Since the invention of ELIZA nearly sixty years ago, researchers have debated whether AI could play the role of a therapist. Although there are still many important lingering questions, our findings indicate the answer may be 'Yes.' We hope our work galvanizes both the public and mental health practitioners to ask important questions about the ethics, feasibility, and utility of integrating AI and mental health treatment, before the AI train leaves the station."

When ELIZA meets therapists: A Turing test for the heart and mind (study abstract)

"Can machines be therapists?" is a question receiving increased attention given the relative ease of working with generative artificial intelligence. Although recent (and decades-old) research has found that humans struggle to tell the difference between responses from machines and humans, recent findings suggest that artificial intelligence can write empathically, and the generated content is rated highly by therapists and outperforms that of professionals. It was uncertain whether, in a preregistered competition in which therapists (N = 13) and ChatGPT respond to therapeutic vignettes about couple therapy, (a) a panel of participants can tell which responses are ChatGPT-generated and which are written by therapists, (b) the generated responses or the therapist-written responses align more closely with key therapy principles, and (c) linguistic differences between conditions are present. In a large sample (N = 830), we showed that (a) participants could rarely tell the difference between responses written by ChatGPT and responses written by a therapist, (b) the responses written by ChatGPT were generally rated higher on key psychotherapy principles, and (c) the language patterns of ChatGPT and the therapists differed. Using different measures, we then confirmed that responses written by ChatGPT were rated higher than the therapists' responses, and that these differences may be explained by part-of-speech use and response sentiment. This may be an early indication that ChatGPT has the potential to improve psychotherapeutic processes. We anticipate that this work may lead to the development of different methods of testing and creating psychotherapeutic interventions. Further, we discuss limitations (including the lack of therapeutic context) and how continued research in this area may lead to improved efficacy of psychotherapeutic interventions, allowing such interventions to be placed in the hands of the individuals who need them most.
[2]
ChatGPT outperforms psychotherapists in therapy response ratings, study shows
PLOS, Feb 12, 2025

Journal reference: Gabe Hatch, S., et al. (2025). When ELIZA meets therapists: A Turing test for the heart and mind. PLOS Mental Health. doi.org/10.1371/journal.pmen.0000145
A groundbreaking study reveals that ChatGPT's responses in couple's therapy scenarios are rated higher than those of human therapists, raising questions about AI's potential role in mental health care.
The study, published in PLOS Mental Health on February 12, 2025, revealed that ChatGPT, an artificial intelligence language model, outperforms human psychotherapists in generating responses to couple's therapy scenarios [1][2]. The research, led by H. Dorian Hatch from The Ohio State University, challenges long-held assumptions about the role of AI in mental health care and raises important questions about the future of psychotherapy.
The study involved over 800 participants who were presented with 18 couple's therapy vignettes. Responses to these scenarios were generated by both ChatGPT and human therapists. Key findings include (a simple illustration of the first finding follows the list):

- Participants could rarely tell whether a given response had been written by ChatGPT or by a therapist.
- ChatGPT's responses were generally rated higher on core psychotherapy guiding principles.
- The language patterns of ChatGPT and the therapists differed in measurable ways.
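To make the first finding concrete, a result like "participants could rarely tell the difference" is typically evaluated against chance guessing. The snippet below is a minimal sketch, not the authors' actual analysis: the counts are hypothetical placeholders, and it uses a simple two-sided binomial test to ask whether an observed identification rate differs from the 50% expected from pure guessing.

```python
# Minimal sketch of a chance-level discrimination check.
# The counts below are hypothetical placeholders, not data from the study.
from scipy.stats import binomtest

n_judgments = 830   # hypothetical number of "AI or human?" judgments
n_correct = 437     # hypothetical number of correct identifications

# Two-sided binomial test against the 50% accuracy expected by chance.
result = binomtest(n_correct, n_judgments, p=0.5)
print(f"observed accuracy: {n_correct / n_judgments:.3f}")
print(f"p-value vs. chance: {result.pvalue:.3f}")
```

An accuracy near 0.5 with a large p-value would be consistent with participants guessing, which is the pattern the phrase "could rarely tell the difference" describes.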
Further analysis of the responses revealed significant differences in language patterns between ChatGPT and human therapists (a rough sense of how such a comparison can be run is sketched after the list):

- ChatGPT's responses were generally longer than those written by the therapists.
- Even after controlling for response length, ChatGPT used more nouns and adjectives than the therapists.
- Because nouns name people, places, and things, and adjectives add context, this suggests ChatGPT contextualized its responses more extensively.
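A part-of-speech comparison of this kind can be approximated with off-the-shelf NLP tooling. The sketch below is an illustration under stated assumptions, not the study's pipeline: it assumes spaCy with its small English model (en_core_web_sm) is installed, and it normalizes noun and adjective counts by token count as a simple way to control for response length. The two example responses are invented, not vignettes from the study.

```python
# Minimal sketch: compare noun/adjective rates in two responses,
# normalizing by length. Illustrative only; not the study's actual method.
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes this model is installed

def pos_rates(text: str) -> dict:
    """Return the per-token rate of nouns and adjectives in `text`."""
    doc = nlp(text)
    tokens = [t for t in doc if not t.is_punct]
    n = max(len(tokens), 1)
    return {
        "noun_rate": sum(t.pos_ == "NOUN" for t in tokens) / n,
        "adj_rate": sum(t.pos_ == "ADJ" for t in tokens) / n,
    }

# Hypothetical example responses, not taken from the study.
therapist = "It sounds like you both feel unheard when conflict comes up."
chatgpt = ("It sounds like frequent arguments about household responsibilities "
           "leave both partners feeling unheard, frustrated, and disconnected.")

print("therapist:", pos_rates(therapist))
print("chatgpt:  ", pos_rates(chatgpt))
```

Dividing by token count is the simplest form of length control; the higher noun and adjective rates reported for ChatGPT correspond to what the study describes as more extensive contextualization.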
These findings suggest that ChatGPT may have the potential to improve psychotherapeutic processes and could lead to the development of new methods for testing and creating psychotherapeutic interventions.
The study draws parallels to Alan Turing's prediction that humans would eventually be unable to distinguish between machine and human-generated responses. It also references ELIZA, an early natural language processing computer program created in the 1960s to simulate a psychotherapist [1].
As AI continues to demonstrate proficiency in empathetic writing and therapeutic responses, the integration of AI into mental health care seems increasingly likely. This raises several important considerations:

- Ethical oversight: ensuring AI models are carefully trained and supervised by responsible professionals.
- Technical literacy: mental health experts will need to understand the tools they may soon work alongside.
- Access to care: responsibly deployed AI could improve both the quality of and access to treatment.
- Feasibility: the study lacked real therapeutic context, so its results may not transfer directly to live practice.
While the study's findings are promising, the authors emphasize that many important questions remain unanswered. They call for both the public and mental health practitioners to engage in discussions about the ethics, feasibility, and utility of integrating AI into mental health treatment [1][2].
As the field of AI in mental health care continues to evolve rapidly, it is crucial for stakeholders to address these challenges proactively to ensure responsible and beneficial integration of AI technologies in psychotherapy.
References

[1] AI vs. Human Therapists: Study Finds ChatGPT Responses Rated Higher - Neuroscience News
[2] ChatGPT outperforms psychotherapists in therapy response ratings, study shows (PLOS press release)
Dartmouth researchers conduct the first clinical trial of an AI-powered therapy chatbot, Therabot, demonstrating significant improvements in depression, anxiety, and eating disorder symptoms.
7 Sources
A recent study reveals that AI chatbots like ChatGPT exhibit signs of 'anxiety' when exposed to distressing content, raising questions about their use in mental health support and the need for ethical considerations in AI development.
3 Sources
A study reveals that AI language models like ChatGPT can experience elevated 'anxiety' levels when exposed to traumatic narratives, but these levels can be reduced through mindfulness exercises.
5 Sources
The American Psychological Association warns about the dangers of AI chatbots masquerading as therapists, citing cases of harm to vulnerable users and calling for regulatory action.
4 Sources
A recent study reveals that ChatGPT, when used alone, significantly outperformed both human doctors and doctors using AI assistance in diagnosing medical conditions, raising questions about the future of AI in healthcare.
6 Sources