Curated by THEOUTPOST
On Fri, 28 Mar, 12:06 AM UTC
7 Sources
[1]
The first trial of generative AI therapy shows it might help with depression
A team led by psychiatric researchers and psychologists at the Geisel School of Medicine at Dartmouth College built the tool, called Therabot, and the results were published on March 27 in NEJM AI, a journal from the publishers of the New England Journal of Medicine. Many tech companies have built AI tools for therapy, promising that people can talk with a bot more frequently and cheaply than with a trained therapist, and that this approach is safe and effective. Many psychologists and psychiatrists share that vision, noting that fewer than half of people with a mental disorder receive therapy, and those who do might get only 45 minutes per week. Researchers have tried to build technology so that more people can access therapy, but two obstacles have held them back.

First, a therapy bot that says the wrong thing could cause real harm. That is why many researchers have built bots using explicit programming: the software pulls from a finite bank of approved responses (as was the case with Eliza, a mock-psychotherapist computer program built in the 1960s). But this makes them less engaging to chat with, and people lose interest. Second, the hallmarks of a good therapeutic relationship, shared goals and collaboration, are hard to replicate in software.

In 2019, as early large language models like OpenAI's GPT were taking shape, the researchers at Dartmouth thought generative AI might help overcome these hurdles. They set about building an AI model trained to give evidence-based responses. They first tried building it from general mental-health conversations pulled from internet forums. Then they turned to thousands of hours of transcripts of real sessions with psychotherapists. "We got a lot of 'hmm-hmms,' 'go ons,' and then 'Your problems stem from your relationship with your mother,'" said Michael Heinz, a research psychiatrist at Dartmouth College and Dartmouth Health and first author of the study, in an interview. "Really tropes of what psychotherapy would be, rather than actually what we'd want." Dissatisfied, they set to work assembling their own custom data sets based on evidence-based practices, which is what ultimately went into the model.

Many AI therapy bots on the market, in contrast, may be little more than slight variations of foundation models like Meta's Llama, trained mostly on internet conversations. That poses a problem, especially for topics like disordered eating. "If you were to say that you want to lose weight," Heinz says, "they will readily support you in doing that, even if you will often have a low weight to start with." A human therapist wouldn't do that.

To test the bot, the researchers ran an eight-week clinical trial with 210 participants who had symptoms of depression or generalized anxiety disorder or were at high risk for eating disorders. About half had access to Therabot, and a control group did not. Participants responded to prompts from the AI and initiated conversations, averaging about 10 messages per day. Participants with depression experienced a 51% reduction in symptoms, the best result in the study. Those with anxiety experienced a 31% reduction, and those at risk for eating disorders saw a 19% reduction in concerns about body image and weight. These measurements are based on self-reporting through surveys, a method that's not perfect but remains one of the best tools researchers have.
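The contrast between the two approaches is easy to see in miniature. Below is a small, hypothetical sketch of the Eliza-style "explicit programming" design described above, in which every reply is drawn from a finite bank of approved responses; the keywords and strings are invented for illustration and are not Therabot's actual behavior:

```python
# Illustrative sketch of a rule-based therapy bot (the Eliza approach):
# replies come only from a finite, pre-approved bank, so the bot can never
# say something unvetted, but conversations quickly feel canned.
# All keywords and responses here are hypothetical.

APPROVED_RESPONSES = {
    "sad": "I'm sorry you're feeling down. What do you think is contributing to that?",
    "anxious": "That sounds stressful. When did you first notice these feelings?",
    "sleep": "Tell me more about how you've been sleeping lately.",
}
DEFAULT_RESPONSE = "Go on, I'm listening."

def rule_based_reply(message: str) -> str:
    """Return a pre-approved reply via keyword match, or a neutral prompt."""
    text = message.lower()
    for keyword, response in APPROVED_RESPONSES.items():
        if keyword in text:
            return response
    return DEFAULT_RESPONSE

print(rule_based_reply("I've been feeling sad all week"))
```

The safety of this design comes precisely from its rigidity, which is also why, as the article notes, people tend to lose interest in such bots.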
[2]
How do you teach an AI model to give therapy?
I was surprised by those results, which you can read about in my full story. There are lots of reasons to be skeptical that an AI model trained to provide therapy is the solution for millions of people experiencing a mental health crisis. How could a bot mimic the expertise of a trained therapist? And what happens if something gets complicated, a mention of self-harm, perhaps, and the bot doesn't intervene correctly?

The researchers, a team of psychiatrists and psychologists at Dartmouth College's Geisel School of Medicine, acknowledge these questions in their work. But they also say that the right selection of training data, which determines how the model learns what good therapeutic responses look like, is the key to answering them.

Finding the right data wasn't a simple task. The researchers first trained their AI model, called Therabot, on conversations about mental health from across the internet. This was a disaster. If you told this initial version of the model you were feeling depressed, it would start telling you it was depressed, too. Responses like "Sometimes I can't make it out of bed" or "I just want my life to be over" were common, says Nick Jacobson, an associate professor of biomedical data science and psychiatry at Dartmouth and the study's senior author. "These are really not what we would go to as a therapeutic response." The model had learned from conversations held on forums between people discussing their mental health crises, not from evidence-based responses.

So the team turned to transcripts of therapy sessions. "This is actually how a lot of psychotherapists are trained," Jacobson says. That approach was better, but it had limitations. "We got a lot of 'hmm-hmms,' 'go ons,' and then 'Your problems stem from your relationship with your mother,'" Jacobson says. "Really tropes of what psychotherapy would be, rather than actually what we'd want." It wasn't until the researchers started building their own data sets, using examples based on cognitive behavioral therapy techniques, that they started to see better results.

It took a long time. The team began working on Therabot in 2019, when OpenAI had released only the first two versions of its GPT model. Now, Jacobson says, more than 100 people have spent over 100,000 human hours designing the system.

The importance of training data suggests that the flood of companies promising therapy via AI models, many of them not trained on evidence-based approaches, is producing tools that are at best ineffective and at worst harmful. Looking ahead, there are two big things to watch: Will the dozens of AI therapy bots on the market start training on better data? And if they do, will their results be good enough to get a coveted approval from the US Food and Drug Administration? I'll be following closely. Read more in the full story.

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.
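The data problem described here (forum scrapes teach a model to commiserate, not to help) comes down to what each training example pairs together. Below is a rough, hypothetical sketch of what a curated, evidence-based fine-tuning set might look like, with clinician-written responses illustrating cognitive behavioral therapy techniques; the schema and all example text are invented, since the study does not publish its data format:

```python
# Hypothetical sketch of a curated fine-tuning corpus: each record pairs a
# patient statement with a clinician-written, CBT-grounded response, rather
# than whatever reply happened to follow on an internet forum.
# The schema and all text below are invented for illustration.
import json

examples = [
    {
        "patient": "I failed my exam. I'm a total failure at everything.",
        "therapist": (
            "That sounds really discouraging. I notice an all-or-nothing "
            "thought there: does one exam result really mean you fail at everything?"
        ),
        "technique": "cognitive_restructuring",
        "clinician_reviewed": True,
    },
    {
        "patient": "I've been too anxious to leave the house this week.",
        "therapist": (
            "Thank you for telling me. Could we pick one small, manageable "
            "step outside to try this week and see how it feels?"
        ),
        "technique": "behavioral_activation",
        "clinician_reviewed": True,
    },
]

# One JSON object per line (JSONL), a common format for fine-tuning corpora.
with open("cbt_finetune.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
```

Seen this way, the team's 100,000 human hours make sense: the value lies not in the volume of text but in the quality of every response the model is allowed to imitate.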
[3]
First therapy chatbot trial shows AI can provide 'gold-standard' care
Dartmouth researchers conducted the first clinical trial of a therapy chatbot powered by generative AI and found that the software resulted in significant improvements in participants' symptoms, according to results published in NEJM AI, a journal from the publishers of the New England Journal of Medicine. People in the study also reported they could trust and communicate with the system, known as Therabot, to a degree comparable to working with a mental-health professional.

The trial consisted of 106 people from across the United States diagnosed with major depressive disorder, generalized anxiety disorder, or an eating disorder. Participants interacted with Therabot through a smartphone app by typing out responses to prompts about how they were feeling or initiating conversations when they needed to talk.

People diagnosed with depression experienced a 51% average reduction in symptoms, leading to clinically significant improvements in mood and overall well-being, the researchers report. Participants with generalized anxiety reported an average reduction in symptoms of 31%, with many shifting from moderate to mild anxiety, or from mild anxiety to below the clinical threshold for diagnosis. Among those at risk for eating disorders, who are traditionally more challenging to treat, Therabot users showed a 19% average reduction in concerns about body image and weight, which significantly outpaced a control group that was also part of the trial.

The researchers conclude that while AI-powered therapy is still in critical need of clinician oversight, it has the potential to provide real-time support for the many people who lack regular or immediate access to a mental-health professional.

"The improvements in symptoms we observed were comparable to what is reported for traditional outpatient therapy, suggesting this AI-assisted approach may offer clinically meaningful benefits," says Nicholas Jacobson, the study's senior author and an associate professor of biomedical data science and psychiatry at Dartmouth's Geisel School of Medicine.

"There is no replacement for in-person care, but there are nowhere near enough providers to go around," Jacobson says. For every available provider in the United States, there is an average of 1,600 patients with depression or anxiety alone, he says. "We would like to see generative AI help provide mental health support to the huge number of people outside the in-person care system. I see the potential for person-to-person and software-based therapy to work together," says Jacobson, who is the director of the treatment development and evaluation core at Dartmouth's Center for Technology and Behavioral Health.

Michael Heinz, the study's first author and an assistant professor of psychiatry at Dartmouth, says the trial results also underscore the critical work ahead before generative AI can be used to treat people safely and effectively. "While these results are very promising, no generative AI agent is ready to operate fully autonomously in mental health, where there is a very wide range of high-risk scenarios it might encounter," says Heinz, who is also an attending psychiatrist at Dartmouth Hitchcock Medical Center in Lebanon, N.H. "We still need to better understand and quantify the risks associated with generative AI used in mental health contexts."

Therabot has been in development in Jacobson's AI and Mental Health Lab at Dartmouth since 2019.
The process included continuous consultation with psychologists and psychiatrists affiliated with Dartmouth and Dartmouth Health.

When people initiate a conversation with the app, Therabot answers with natural, open-ended text dialogue based on an original training set the researchers developed from current, evidence-based best practices for psychotherapy and cognitive behavioral therapy, Heinz says. For example, if a person with anxiety tells Therabot they have been feeling very nervous and overwhelmed lately, it might respond, "Let's take a step back and ask why you feel that way." If Therabot detects high-risk content such as suicidal ideation during a conversation with a user, it provides a prompt to call 911, or to contact a suicide prevention or crisis hotline, with the press of an onscreen button.

The clinical trial provided the participants randomly selected to use Therabot with four weeks of unlimited access. The researchers also tracked a control group of 104 people with the same diagnosed conditions who had no access to Therabot. Almost 75% of the Therabot group were not under pharmaceutical or other therapeutic treatment at the time. The app asked about people's well-being, personalizing its questions and responses based on what it learned during its conversations with participants. The researchers evaluated conversations to ensure that the software was responding within best therapeutic practices.

After four weeks, the researchers gauged each person's progress through the standardized questionnaires clinicians use to detect and monitor each condition. The team did a second assessment after another four weeks, when participants could initiate conversations with Therabot but no longer received prompts. After eight weeks, all participants using Therabot experienced a marked reduction in symptoms that exceeds what clinicians consider statistically significant, Jacobson says. These differences represent robust, real-world improvements that patients would likely notice in their daily lives, he says. Users engaged with Therabot for an average of six hours over the course of the trial, the equivalent of about eight therapy sessions.

"Our results are comparable to what we would see for people with access to gold-standard cognitive therapy with outpatient providers," Jacobson says. "We're talking about potentially giving people the equivalent of the best treatment you can get in the care system over shorter periods of time."

Critically, people reported a degree of "therapeutic alliance" in line with what patients report for in-person providers, the study found. Therapeutic alliance refers to the level of trust and collaboration between a patient and their caregiver and is considered essential to successful therapy. One indication of this bond is that people not only provided detailed responses to Therabot's prompts; they frequently initiated conversations, Jacobson says. Interactions with the software also showed upticks at times associated with unwellness, such as the middle of the night.

"We did not expect that people would almost treat the software like a friend. It says to me that they were actually forming relationships with Therabot," Jacobson says. "My sense is that people also felt comfortable talking to a bot because it won't judge them."

The Therabot trial shows that generative AI has the potential to increase patient engagement and, importantly, continued use of the software, Heinz says. "Therabot is not limited to an office and can go anywhere a patient goes. It was available around the clock for challenges that arose in daily life and could walk users through strategies to handle them in real time," Heinz says. "But the feature that allows AI to be so effective is also what confers its risk: patients can say anything to it, and it can say anything back."

The development and clinical testing of these systems need to have rigorous benchmarks for safety, efficacy, and the tone of engagement, and need to include the close supervision and involvement of mental-health experts, Heinz says. "This trial brought into focus that the study team has to be equipped to intervene, possibly right away, if a patient expresses an acute safety concern such as suicidal ideation, or if the software responds in a way that is not in line with best practices," he says. "Thankfully, we did not see this often with Therabot, but that is always a risk with generative AI, and our study team was ready."

In evaluations of earlier versions of Therabot more than two years ago, more than 90% of responses were consistent with therapeutic best practices, Jacobson says. That gave the team the confidence to move forward with the clinical trial. "There are a lot of folks rushing into this space since the release of ChatGPT, and it's easy to put out a proof of concept that looks great at first glance, but the safety and efficacy is not well established," Jacobson says. "This is one of those cases where diligent oversight is needed, and providing that really sets us apart in this space."
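The release describes one concrete safety mechanism: when Therabot detects high-risk content such as suicidal ideation, it steers the user toward emergency services and crisis hotlines. A minimal sketch of that gating pattern follows; the keyword screen and all names are hypothetical stand-ins, since the paper does not describe how Therabot's detector is implemented:

```python
# Minimal sketch of a safety-gating layer: screen each incoming message for
# high-risk content before any generative reply, and surface crisis resources
# immediately when risk is detected. The keyword list is a crude stand-in;
# a production system would use a trained classifier and human escalation.

HIGH_RISK_PHRASES = ("kill myself", "end my life", "suicide", "hurt myself")

CRISIS_MESSAGE = (
    "It sounds like you may be in crisis. If you are in immediate danger, "
    "call 911, or use the button below to reach a suicide prevention or "
    "crisis hotline."
)

def detect_risk(message: str) -> bool:
    """Crude keyword screen; real systems need a trained risk classifier."""
    text = message.lower()
    return any(phrase in text for phrase in HIGH_RISK_PHRASES)

def respond(message: str, generate_reply) -> str:
    """Gate the generative model behind the risk check."""
    if detect_risk(message):
        # Never hand a high-risk message to the model alone.
        return CRISIS_MESSAGE
    return generate_reply(message)

# Demo with a stand-in "model" that returns a fixed string.
print(respond("I can't stop thinking about suicide", lambda m: "(model reply)"))
```

The point of the pattern is that the crisis path bypasses the generative model entirely, which is one way to keep the "it can say anything back" risk Heinz describes out of the highest-stakes moments.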
[4]
AI-powered therapy chatbot shows significant mental health benefits
Dartmouth College, March 27, 2025.
Journal reference: Heinz, M. V., et al. (2025). Randomized Trial of a Generative AI Chatbot for Mental Health Treatment. NEJM AI. doi.org/10.1056/aioa2400802.
[5]
First clinical trial of an AI therapy chatbot yields significant mental health benefits
[6]
AI-powered therapy shows shocking results in mental health study
While some believe AI can be a helpful tool, others argue that the human touch of therapists and psychologists is irreplaceable. Despite this debate, the latest research from Dartmouth suggests that AI-powered therapy tools can have a meaningful impact. Their study presents the first-ever clinical trial of a generative AI-powered therapy chatbot, Therabot, showing highly encouraging results. Therabot demonstrated significant improvements in symptoms among participants diagnosed with major depressive disorder, generalized anxiety disorder, or an eating disorder. The trial involved 106 participants across the U.S., all formally diagnosed with one of these conditions. They engaged with Therabot via a smartphone app, responding to prompts or initiating conversations as needed. A control group of 104 individuals with similar diagnoses did not have access to Therabot. The results were striking: participants using Therabot reported a 51% decrease in depressive symptoms, a 31% reduction in anxiety symptoms, and a 19% decline in concerns related to body image and weight.
[7]
Clinical test says AI can offer therapy as good as a certified expert
AI is being pushed heavily into research and medical science. From drug discovery to diagnosing diseases, the results have been fairly encouraging. But when it comes to tasks where behavioral science and nuance come into the picture, things can go haywire. It seems an expert-tuned approach is the best way forward.

Dartmouth College experts recently conducted the first clinical trial of an AI chatbot designed specifically for providing mental health assistance. Called Therabot, the AI assistant was tested in the form of an app among participants diagnosed with serious mental health problems across the United States.

"The improvements in symptoms we observed were comparable to what is reported for traditional outpatient therapy, suggesting this AI-assisted approach may offer clinically meaningful benefits," notes Nicholas Jacobson, associate professor of biomedical data science and psychiatry at the Geisel School of Medicine.

Massive progress

Broadly, users who engaged with the Therabot app reported a 51% average reduction in depressive symptoms, which helped improve their overall well-being. A good number of participants went from moderate to low tiers of clinical anxiety, and some even fell below the clinical threshold for diagnosis.

As part of a randomized controlled trial (RCT), the team recruited adults diagnosed with major depressive disorder (MDD) or generalized anxiety disorder (GAD), along with people at clinically high risk for feeding and eating disorders (CHR-FED). After four to eight weeks, participants reported positive results and rated the AI chatbot's assistance as "comparable to that of human therapists." For people at risk of eating disorders, the bot helped achieve an approximately 19% reduction in harmful thoughts about body image and weight. Likewise, generalized anxiety symptoms went down by 31% after interacting with the Therabot app. Users who engaged with the app exhibited "significantly greater" improvement in symptoms of depression, alongside a reduction in signs of anxiety. The findings of the clinical trial have been published in the March edition of the New England Journal of Medicine - Artificial Intelligence (NEJM AI).

"After eight weeks, all participants using Therabot experienced a marked reduction in symptoms that exceed what clinicians consider statistically significant," the experts claim, adding that the improvements are comparable to gold-standard cognitive therapy.

Solving the access problem

"There is no replacement for in-person care, but there are nowhere near enough providers to go around," Jacobson says. He adds that there is much scope for in-person and AI-driven assistance to work together. Jacobson, who is also the senior author of the study, highlights that AI could improve access to critical help for the vast number of people who can't reach in-person healthcare systems.

Michael Heinz, an assistant professor at the Geisel School of Medicine at Dartmouth and lead author of the study, also stressed that tools like Therabot can provide critical assistance in real time. The software essentially goes wherever users go and, most importantly, boosts patient engagement with a therapeutic tool. Both experts, however, raised the risks that come with generative AI, especially in high-stakes situations.
Late in 2024, a lawsuit was filed against Character.AI over an incident involving the death of a 14-year-old boy who had reportedly been told to kill himself by an AI chatbot. Google's Gemini AI chatbot has also told a user to die. "This is for you, human. You and only you. You are not special, you are not important, and you are not needed," said the chatbot, which has also been known to fumble something as simple as the current year and to occasionally give harmful tips, like adding glue to pizza. When it comes to mental health counseling, the margin for error gets smaller. The experts behind the latest study are aware of this, especially for individuals at risk of self-harm. As such, they recommend vigilance over the development of such tools and prompt human intervention to fine-tune the responses offered by AI therapists.
Dartmouth researchers conduct the first clinical trial of an AI-powered therapy chatbot, Therabot, demonstrating significant improvements in depression, anxiety, and eating disorder symptoms.
Researchers at Dartmouth College's Geisel School of Medicine have conducted the first clinical trial of a generative AI-powered therapy chatbot, yielding promising results for mental health treatment. The study, published in the New England Journal of Medicine AI, demonstrates that the AI system, named Therabot, can provide significant benefits comparable to traditional outpatient therapy [1][2].
The trial involved 210 participants diagnosed with major depressive disorder or generalized anxiety disorder, or at risk for eating disorders. Half of the participants were given access to Therabot, while the other half served as a control group [3]. Users interacted with Therabot through a smartphone app, responding to prompts and initiating conversations when needed, averaging about 10 messages per day [1].
The results of the eight-week trial were remarkable: participants with depression experienced a 51% average reduction in symptoms, those with generalized anxiety a 31% reduction, and those at risk for eating disorders a 19% reduction in concerns about body image and weight [3]. These improvements were clinically significant and comparable to outcomes seen in traditional outpatient therapy [5].
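For readers wondering what a figure like "a 51% average reduction in symptoms" means operationally, it is computed from standardized questionnaire scores taken before and after the trial. The sketch below uses made-up scores purely to show the arithmetic; these are not data from the study:

```python
# How an average percent reduction in symptoms is computed from pre- and
# post-trial questionnaire scores (e.g., a depression severity scale).
# The scores below are invented for illustration, not trial data.

pre_scores  = [18, 15, 20, 12, 16]   # baseline severity per participant
post_scores = [ 8,  7, 11,  6,  8]   # severity after the eight-week trial

reductions = [
    (pre - post) / pre * 100
    for pre, post in zip(pre_scores, post_scores)
]
avg_reduction = sum(reductions) / len(reductions)
print(f"Average symptom reduction: {avg_reduction:.0f}%")  # prints 51% here
```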
The journey to create Therabot began in 2019, with researchers exploring several approaches to training the AI model: an early version trained on mental-health conversations from internet forums ended up mirroring users' distress, a version trained on psychotherapy session transcripts produced clichéd, stereotyped responses, and only custom data sets built around cognitive behavioral therapy techniques yielded the therapeutic behavior the team wanted [1][2].
While the results are promising, the researchers emphasize that AI-powered therapy still requires clinical oversight. Therabot includes safety features such as prompts to contact emergency services when it detects high-risk content like suicidal ideation [4][5].
Dr. Michael Heinz, the study's first author, cautions that "no generative AI agent is ready to operate fully autonomously in mental health where there is a very wide range of high-risk scenarios it might encounter" [3].
The development of Therabot addresses a critical shortage in mental health care providers. With an average of 1,600 patients with depression or anxiety for every available provider in the United States, AI-assisted therapy could help bridge this gap [3][4].
Dr. Nicholas Jacobson, the study's senior author, envisions a future where "generative AI help[s] provide mental health support to the huge number of people outside the in-person care system" [3].
As the field of AI-assisted therapy evolves, several key areas require attention: whether the many AI therapy bots already on the market will begin training on evidence-based data, whether such tools can earn approval from the US Food and Drug Administration, and how to enforce rigorous benchmarks for safety, efficacy, and tone of engagement under close clinician supervision [2][3].
The success of Therabot highlights the potential of AI in mental health care while underscoring the need for continued research and careful implementation to ensure safe and effective treatment for those in need.