Curated by THEOUTPOST
On Mon, 5 May, 12:00 AM UTC
5 Sources
[1]
AI therapy may help with mental health, but innovation should never outpace ethics
Mental health services around the world are stretched thinner than ever. Long wait times, barriers to accessing care and rising rates of depression and anxiety have made it harder for people to get timely help.

As a result, governments and healthcare providers are looking for new ways to address this problem. One emerging solution is the use of AI chatbots for mental health care. A recent study explored whether a new type of AI chatbot, named Therabot, could treat people with mental illness effectively. The findings were promising: not only did participants with clinically significant symptoms of depression and anxiety benefit, but those at high risk of eating disorders also showed improvement. While early, this study may represent a pivotal moment in the integration of AI into mental health care.

AI mental health chatbots are not new - tools like Woebot and Wysa have already been released to the public and studied for years. These platforms follow rules based on a user's input to produce a predefined, approved response. What makes Therabot different is that it uses generative AI - a technique where a program learns from existing data to create new content in response to a prompt. Consequently, Therabot can produce novel responses based on a user's input, like other popular chatbots such as ChatGPT, allowing for a more dynamic and personalised interaction.

This isn't the first time generative AI has been examined in a mental health setting. In 2024, researchers in Portugal conducted a study where ChatGPT was offered as an additional component of treatment for psychiatric inpatients. The research findings showed that just three to six sessions with ChatGPT led to a significantly greater improvement in quality of life than standard therapy, medication and other supportive treatments alone.

Together, these studies suggest that both general and specialised generative AI chatbots hold real potential for use in psychiatric care. But there are some serious limitations to keep in mind. For example, the ChatGPT study involved only 12 participants - far too few to draw firm conclusions. In the Therabot study, participants were recruited through a Meta Ads campaign, likely skewing the sample toward tech-savvy people who may already be open to using AI. This could have inflated the chatbot's effectiveness and engagement levels.

Ethics and Exclusion

Beyond methodological concerns, there are critical safety and ethical issues to address. One of the most pressing is whether generative AI could worsen symptoms in people with severe mental illnesses, particularly psychosis. A 2023 article warned that generative AI's lifelike responses, combined with most people's limited understanding of how these systems work, might feed into delusional thinking. Perhaps for this reason, both the Therabot and ChatGPT studies excluded participants with psychotic symptoms.

But excluding these people also raises questions of equity. People with severe mental illness often face cognitive challenges - such as disorganised thinking or poor attention - that might make it difficult to engage with digital tools. Ironically, these are the people who may benefit the most from accessible, innovative interventions. If generative AI tools are only suitable for people with strong communication skills and high digital literacy, then their usefulness in clinical populations may be limited.

There's also the possibility of AI "hallucinations" - a known flaw that occurs when a chatbot confidently makes things up, such as inventing a source, quoting a nonexistent study or giving an incorrect explanation. In the context of mental health, AI hallucinations aren't just inconvenient; they can be dangerous. Imagine a chatbot misinterpreting a prompt and validating someone's plan to self-harm, or offering advice that unintentionally reinforces harmful behaviour.

While the studies on Therabot and ChatGPT included safeguards - such as clinical oversight and professional input during development - many commercial AI mental health tools do not offer the same protections. That's what makes these early findings both exciting and cautionary. Yes, AI chatbots might offer a low-cost way to support more people at once, but only if we fully address their limitations.

Effective implementation will require more robust research with larger and more diverse populations, greater transparency about how models are trained, and constant human oversight to ensure safety. Regulators must also step in to guide the ethical use of AI in clinical settings. With careful, patient-centred research and strong guardrails in place, generative AI could become a valuable ally in addressing the global mental health crisis - but only if we move forward responsibly.
[2]
AI therapy may help with mental health, but innovation should never outpace ethics
[3]
'It cannot provide nuance': UK experts warn AI therapy chatbots are not safe
Experts say such tools may give dangerous advice and more oversight is needed, as Mark Zuckerberg says AI can plug the gap.

Having an issue with your romantic relationship? Need to talk through something? Mark Zuckerberg has a solution for that: a chatbot. Meta's chief executive believes everyone should have a therapist and if they don't - artificial intelligence can do that job.

"I personally have the belief that everyone should probably have a therapist," he said last week. "It's like someone they can just talk to throughout the day, or not necessarily throughout the day, but about whatever issues they're worried about and for people who don't have a person who's a therapist, I think everyone will have an AI."

The Guardian spoke to mental health clinicians who expressed concern about AI's emerging role as a digital therapist. Prof Dame Til Wykes, the head of mental health and psychological sciences at King's College London, cites the example of an eating disorder chatbot that was pulled in 2023 after giving dangerous advice. "I think AI is not at the level where it can provide nuance and it might actually suggest courses of action that are totally inappropriate," she said.

Wykes also sees chatbots as potential disruptors to established relationships. "One of the reasons you have friends is that you share personal things with each other and you talk them through," she says. "It's part of an alliance, a connection. And if you use AI for those sorts of purposes, will it not interfere with that relationship?"

For many AI users, Zuckerberg is merely marking an increasingly popular use of this powerful technology. There are mental health chatbots such as Noah and Wysa, while the Guardian has spoken to users of AI-powered "grieftech" - or chatbots that revive the dead. There is also their casual use as virtual friends or partners, with bots such as character.ai and Replika offering personas to interact with.

ChatGPT's owner, OpenAI, admitted last week that a version of its groundbreaking chatbot was responding to users in a tone that was "overly flattering" and withdrew it. "Seriously, good for you for standing up for yourself and taking control of your own life," it reportedly responded to a user who claimed they had stopped taking their medication and had left their family because they were "responsible for the radio signals coming in through the walls".

In an interview with the Stratechery newsletter, Zuckerberg, whose company owns Facebook, Instagram and WhatsApp, added that AI would not squeeze people out of your friendship circle but add to it. "That's not going to replace the friends you have, but it will probably be additive in some way for a lot of people's lives," he said.

Outlining uses for Meta's AI chatbot - available across its platforms - he said: "One of the uses for Meta AI is basically: 'I want to talk through an issue'; 'I need to have a hard conversation with someone'; 'I'm having an issue with my girlfriend'; 'I need to have a hard conversation with my boss at work'; 'help me roleplay this'; or 'help me figure out how I want to approach this'."

In a separate interview last week, Zuckerberg said "the average American has three friends, but has demand for 15" and AI could plug that gap.

Dr Jaime Craig, who is about to take over as chair of the UK's Association of Clinical Psychologists, says it is "crucial" that mental health specialists engage with AI in their field and "ensure that it is informed by best practice". He flags Wysa as an example of an AI tool that "users value and find more engaging". But, he adds, more needs to be done on safety. "Oversight and regulation will be key to ensure safe and appropriate use of these technologies. Worryingly we have not yet addressed this to date in the UK," Craig says.

Last week it was reported that Meta's AI Studio, which allows users to create chatbots with specific personas, was hosting bots claiming to be therapists - with fake credentials. A journalist at 404 Media, a tech news site, said Instagram had been putting those bots in her feed.

Meta said its AIs carry a disclaimer that "indicates the responses are generated by AI to help people understand their limitations".
[4]
US researchers seek to legitimize AI mental health care
New York (AFP) - Researchers at Dartmouth College believe artificial intelligence can deliver reliable psychotherapy, distinguishing their work from the unproven and sometimes dubious mental health apps flooding today's market. Their application, Therabot, addresses the critical shortage of mental health professionals.

According to Nick Jacobson, an assistant professor of data science and psychiatry at Dartmouth, even multiplying the current number of therapists tenfold would leave too few to meet demand. "We need something different to meet this large need," Jacobson told AFP.

The Dartmouth team recently published a clinical study demonstrating Therabot's effectiveness in helping people with anxiety, depression and eating disorders. A new trial is planned to compare Therabot's results with conventional therapies.

The medical establishment appears receptive to such innovation. Vaile Wright, senior director of health care innovation at the American Psychological Association (APA), described "a future where you will have an AI-generated chatbot rooted in science that is co-created by experts and developed for the purpose of addressing mental health." Wright noted these applications "have a lot of promise, particularly if they are done responsibly and ethically," though she expressed concerns about potential harm to younger users.

Jacobson's team has so far dedicated close to six years to developing Therabot, with safety and effectiveness as primary goals. Michael Heinz, psychiatrist and project co-leader, believes rushing for profit would compromise safety. The Dartmouth team is prioritizing understanding how their digital therapist works and establishing trust. They are also contemplating the creation of a nonprofit entity linked to Therabot to make digital therapy accessible to those who cannot afford conventional in-person help.

Care or cash?

With the cautious approach of its developers, Therabot could potentially be a standout in a marketplace of untested apps that claim to address loneliness, sadness and other issues. According to Wright, many apps appear designed more to capture attention and generate revenue than improve mental health. Such models keep people engaged by telling them what they want to hear, but young users often lack the savvy to realize they are being manipulated.

Darlene King, chair of the American Psychiatric Association's committee on mental health technology, acknowledged AI's potential for addressing mental health challenges but emphasizes the need for more information before determining true benefits and risks. "There are still a lot of questions," King noted.

To minimize unexpected outcomes, the Therabot team went beyond mining therapy transcripts and training videos to fuel its AI app by manually creating simulated patient-caregiver conversations.

While the US Food and Drug Administration theoretically is responsible for regulating online mental health treatment, it does not certify medical devices or AI apps. Instead, "the FDA may authorize their marketing after reviewing the appropriate pre-market submission," according to an agency spokesperson. The FDA acknowledged that "digital mental health therapies have the potential to improve patient access to behavioral therapies."

Therapist always in

Herbert Bay, CEO of Earkick, defends his startup's AI therapist Panda as "super safe." Bay says Earkick is conducting a clinical study of its digital therapist, which detects emotional crisis signs or suicidal ideation and sends help alerts.

"What happened with Character.AI couldn't happen with us," said Bay, referring to a Florida case in which a mother claims a chatbot relationship contributed to her 14-year-old son's death by suicide.

AI, for now, is suited more for day-to-day mental health support than life-shaking breakdowns, according to Bay. "Calling your therapist at two in the morning is just not possible," but a therapy chatbot remains always available, Bay noted.

One user named Darren, who declined to provide his last name, found ChatGPT helpful in managing his traumatic stress disorder, despite the OpenAI assistant not being designed specifically for mental health. "I feel like it's working for me," he said. "I would recommend it to people who suffer from anxiety and are in distress."
[5]
US researchers seek to legitimise AI mental health care
Researchers explore AI chatbots for mental health treatment, showing promise but raising ethical and safety concerns. The development of Therabot and studies on ChatGPT highlight potential benefits and risks in addressing the global mental health crisis.
As mental health services worldwide face unprecedented strain, researchers are turning to artificial intelligence (AI) as a potential solution. A recent study on Therabot, a generative AI chatbot, has shown promising results in treating individuals with depression, anxiety, and eating disorders [1][2]. This development marks a significant step in integrating AI into mental health care.
While AI mental health chatbots are not new, with tools like Woebot and Wysa already in use, Therabot represents a leap forward. Unlike its predecessors, Therabot uses generative AI, allowing for more dynamic and personalized interactions [1][2]. This advancement builds on earlier research, such as a 2024 study in Portugal that found ChatGPT sessions significantly improved inpatients' quality of life compared to standard treatments alone [1][2].
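The distinction between rule-based chatbots and generative ones described above can be made concrete with a short sketch. The code below is purely illustrative and does not reflect how Woebot, Wysa or Therabot is actually built; the keyword table and the injected model-calling function are hypothetical placeholders.

```python
# Illustrative contrast between a rule-based and a generative chatbot reply.
# Neither snippet reflects any real product's implementation; the rules
# and the injected model-calling function are hypothetical placeholders.
from typing import Callable

RULES = {
    # keyword -> predefined, clinician-approved response
    "anxious": "It sounds like you're feeling anxious. Would you like to try a breathing exercise?",
    "sleep": "Sleep problems often go hand in hand with low mood. A regular bedtime can help.",
}

def rule_based_reply(user_message: str) -> str:
    """Return a predefined response matched on keywords, or a safe fallback."""
    text = user_message.lower()
    for keyword, response in RULES.items():
        if keyword in text:
            return response
    return "I'm not sure I understood. Could you tell me a bit more?"

def generative_reply(user_message: str, call_model: Callable[[str], str]) -> str:
    """Compose a novel response by prompting a generative model.

    `call_model` stands in for whatever large-language-model API a given
    service uses; it is injected here so the sketch stays vendor-neutral.
    """
    prompt = (
        "You are a supportive mental-health assistant. "
        "Respond empathetically and suggest one small, concrete next step.\n"
        f"User: {user_message}\nAssistant:"
    )
    return call_model(prompt)
```

The rule-based version can only ever return one of its pre-approved lines, which makes it predictable but rigid; the generative version produces new text on every call, which is the source of both the flexibility credited to Therabot and the hallucination risks discussed below.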
The team behind Therabot, led by researchers at Dartmouth College, has spent nearly six years developing the application with a focus on safety and effectiveness [4][5]. Nick Jacobson, an assistant professor at Dartmouth, emphasizes the critical need for innovative solutions to address the shortage of mental health professionals [4][5]. The team is considering creating a nonprofit entity to make digital therapy more accessible [4][5].
AI chatbots offer several advantages, including 24/7 availability and potential cost-effectiveness [4][5]. Mark Zuckerberg, CEO of Meta, envisions AI as a solution to the limited access to human therapists and as a supplement to existing friendships [3]. However, experts warn of significant limitations and risks associated with AI therapy.
Critics highlight several key issues:
- AI "hallucinations" that could produce dangerous or harmful advice [1][2]
- The exclusion of people with severe mental illness, such as psychosis, from trials, raising questions of equity [1][2]
- Small or skewed study samples that may overstate effectiveness and engagement [1][2]
- Chatbots posing as therapists with fake credentials and little oversight on commercial platforms [3]
- Apps designed to maximize engagement and revenue rather than improve mental health [4][5]
Experts stress the importance of robust research, transparency in AI model training, and constant human oversight [1][2]. The US Food and Drug Administration acknowledges the potential of digital mental health therapies but does not currently certify AI apps, highlighting a regulatory gap [4][5].
Despite concerns, the medical establishment appears cautiously optimistic about AI's role in mental health care. Vaile Wright of the American Psychological Association envisions a future with science-based, expert-created AI chatbots for mental health [4][5]. Ongoing research, including a planned trial comparing Therabot's results with conventional therapies, will be crucial in determining the true potential of AI in mental health care [4][5].
A clinical trial of Therabot, an AI-powered therapy chatbot developed by Dartmouth researchers, shows significant improvements in symptoms of depression, anxiety, and eating disorders, rivaling traditional therapy outcomes.
9 Sources
The American Psychological Association warns about the dangers of AI chatbots masquerading as therapists, citing cases of harm to vulnerable users and calling for regulatory action.
4 Sources
AI-powered mental health tools are attracting significant investment as they promise to address therapist shortages, reduce burnout, and improve access to care. However, questions remain about AI's ability to replicate human empathy in therapy.
2 Sources
A recent study reveals that AI chatbots like ChatGPT exhibit signs of 'anxiety' when exposed to distressing content, raising questions about their use in mental health support and the need for ethical considerations in AI development.
3 Sources
A lawsuit alleges an AI chatbot's influence led to a teenager's suicide, raising concerns about the psychological risks of human-AI relationships and the need for stricter regulation of AI technologies.
4 Sources