5 Sources
[1]
Can AI Fix the Mental Health Crisis?
Mental health systems worldwide are under mounting strain: demand for psychological support is growing faster than the workforce available to deliver it. The World Health Organization estimates that depression alone affects more than 300 million people globally, yet timely access to mental health services remains uneven. In many regions, patients face months-long waits for therapy or psychiatric consultation. In others, services remain scarce or difficult to access. Even when care is available, stigma often delays help-seeking until symptoms become severe.

AI-driven mental health platforms are rapidly expanding to address this gap. From conversational therapy chatbots to predictive algorithms designed to identify suicide risk, these technologies promise something traditional care has long struggled to deliver: continuous, scalable support. The idea is undeniably compelling. If AI can help monitor symptoms, extend therapeutic reach, or detect deterioration earlier than clinicians can on their own, it could reshape how mental health services are delivered.

Consider a patient newly diagnosed with moderate depression who faces a 3-month wait for psychotherapy. During this period, symptoms may worsen, motivation may decline, and opportunities for early intervention may be lost. A digital cognitive-behavioral therapy application or AI-supported chatbot could provide mood tracking, structured exercises, and coping strategies while the patient waits for formal care. For some patients, such tools may serve as a temporary bridge until clinician-led therapy is available. Still, when technology begins to participate in emotional support and psychological assessment, the stakes are uniquely high, moving the discussion quickly beyond innovation to questions of safety, trust, and clinical responsibility.

How AI Is Entering Mental Healthcare

AI applications in mental health are built on a simple principle: patterns in human behavior and language often reflect changes in emotional state. Modern systems analyze a range of signals, including speech patterns, typing behavior, wearable sensor data, and patient-reported outcomes, to detect mood shifts, deliver digital interventions, or flag emerging risk. Several categories of tools are now appearing in consumer and clinical settings.

Conversational agents, often referred to as therapy chatbots, use natural language processing to simulate supportive dialogue and guide users through structured psychological exercises. Applications such as Woebot and Wysa are largely based on cognitive behavioral therapy frameworks, encouraging users to track mood patterns, identify cognitive distortions, and practice coping strategies through brief interactive exchanges.

Digital therapeutics platforms deliver structured, evidence-based therapy modules through interactive applications. Programs such as reSET, reSET-O, and Sleepio illustrate how these tools can translate validated psychological interventions into scalable digital formats. These platforms monitor user engagement and dynamically adapt therapeutic content based on individual responses, symptom patterns, and progress over time, enabling a more personalized and responsive treatment experience between clinical encounters.

Predictive analytics involves machine-learning models that analyze longitudinal data from clinical records, behavioral patterns, and patient-reported measures to identify individuals at risk for relapse, symptom worsening, or suicidal ideation.
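To make the underlying idea concrete, the sketch below shows one simplified way such a system might look for sustained deviation from a person's own baseline in passively collected signals. It is an illustrative assumption, not any vendor's actual algorithm: the choice of signals, the thresholds, and the flag_for_review function are hypothetical, and any output would be a prompt for human review rather than a diagnosis.

```python
# Hypothetical sketch: flag sustained deviation from a personal baseline.
# Signals, thresholds, and function names are illustrative assumptions,
# not a description of any commercial product's method.

from statistics import mean, stdev

def daily_z_scores(history, recent):
    """Compare recent daily values to the person's own baseline history."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return [0.0 for _ in recent]
    return [(x - mu) / sigma for x in recent]

def flag_for_review(sleep_hours, steps, baseline_days=28, recent_days=7, threshold=-1.5):
    """Return True when sleep and activity both stay well below the individual
    baseline for most of the last week, suggesting a clinician check-in."""
    sleep_z = daily_z_scores(sleep_hours[:-recent_days][-baseline_days:], sleep_hours[-recent_days:])
    steps_z = daily_z_scores(steps[:-recent_days][-baseline_days:], steps[-recent_days:])
    low_days = sum(1 for s, a in zip(sleep_z, steps_z) if s < threshold and a < threshold)
    return low_days >= recent_days - 2  # sustained pattern, not a single bad day

# Example: four stable weeks followed by a week of short sleep and low activity.
sleep = [7.0, 7.5, 8.0, 7.2] * 7 + [4.0] * 7
steps = [8000, 9000, 7500, 8500] * 7 + [1500] * 7
print(flag_for_review(sleep, steps))  # True -> surface to a clinician, never auto-diagnose
```

Real systems combine far richer signals and validated models, but the design question is the same: the output is an early-warning cue for a human, not an automated clinical decision.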
Health systems such as Kaiser Permanente have developed predictive models using electronic health record data to identify patients at elevated risk of suicide, although implementation remains cautious and closely monitored. Emerging platforms, such as Ellipsis Health, are exploring signals including vocal characteristics and smartphone interaction patterns as potential indicators of mental health status. While these approaches are promising, their clinical utility and generalizability are still being evaluated.

Behavioral monitoring applications rely on passive data collected from smartphones or wearable devices. Changes in sleep patterns, mobility, or communication behavior may signal subtle shifts in mental health that traditional clinical appointments might miss. Research platforms, such as Harvard T.H. Chan School of Public Health's Beiwe Service Center, use data including movement patterns, phone usage, and social activity as proxies for behavioral change. Consumer devices, such as Fitbit trackers and the Apple Watch, can similarly provide sleep and activity data that may complement clinical assessment. These signals are not diagnostic but may offer early indications of change when interpreted within a broader clinical context.

Despite promising early results, the evidence base for AI in mental health remains limited. Most studies involve small samples, short follow-up periods, and sometimes industry-sponsored designs. Evidence is particularly limited for severe mental illness, comorbid psychiatric conditions, diverse cultural groups, and long-term patient outcomes. This does not negate the potential value of these tools, but it does mean that their effectiveness, safety, and generalizability are not yet fully established. As a result, clinicians may encounter variability in performance across different patient populations, and unintended consequences such as missed risk signals or overreliance on automated outputs cannot be fully excluded. Clinicians should therefore interpret current evidence cautiously and consider these tools as adjuncts to, rather than substitutes for, established clinical care until stronger validation is available.

The Promise: Why AI Has Generated Interest

Despite these uncertainties, AI has attracted considerable interest across the mental health field. Accessibility is a major advantage: digital platforms can provide support at any time, reaching individuals who might otherwise have limited access to clinicians, particularly those in rural areas or underserved communities.

Scalability is another benefit. Unlike traditional therapy models, digital systems can potentially support large populations without proportionally increasing clinician workload. In overstretched health systems, this capability is particularly appealing.

Data-driven personalization allows continuous symptom tracking, revealing behavioral patterns that may enable interventions tailored to an individual's needs and mental health trajectory.

Earlier intervention may also become possible. Algorithms capable of detecting subtle behavioral changes can provide warning signs of deterioration before patients present clinically.

For clinicians managing rising demand with limited resources, these capabilities suggest a role for AI as a supporting layer that extends clinical reach.

Pandora's Box: Limits and Ethical Questions

However, mental healthcare presents complexities that algorithms may struggle to navigate.
Psychiatric evaluation often depends on nuance, context, and human empathy, qualities that remain difficult for current AI systems to replicate. Emotional expression, cultural differences, and comorbid conditions can easily be misinterpreted by models trained on limited or biased datasets.

Privacy concerns are particularly acute. Mental health data represent one of the most sensitive categories of personal information, yet users may not always understand how these data are stored, analyzed, or shared.

Regulatory oversight remains limited. Many AI-based mental health applications operate outside traditional frameworks, leaving clinicians and health systems to assess safety and validity before recommending their use.

Integration into clinical workflows is also challenging. AI-generated insights must be interpreted, contextualized, and incorporated without overwhelming already-burdened clinical teams. Without careful oversight, reliance on AI could inadvertently delay human intervention or create false reassurance in urgent situations.

What Clinicians Can Do Now

For clinicians, thoughtful stewardship is essential.

Evidence: Evaluate digital mental health tools with the same rigor applied to other clinical interventions, looking beyond promotional claims to peer-reviewed research and independent validation.

Clinical integration: Maintain a curated list of vetted applications, incorporate digital symptom tracking into follow-up visits, and discuss technology use openly with patients. Assess whether new tools can be integrated into existing workflows.

Privacy and transparency: Understand where patient data are stored, how algorithms analyze them, and whether patients are clearly informed.

Patient suitability: Is the patient comfortable using digital tools and able to engage consistently?

Safety: Does the platform include safeguards for crisis situations and clear escalation pathways to human clinicians?

Framing expectations: AI tools may support monitoring, deliver structured exercises, or encourage reflection between appointments, but they do not replace professional care.

Monitoring outcomes: Track engagement patterns, symptom changes, and unintended consequences over time, as you would with medications or psychotherapy.

The digital mental health era has already begun. The challenge is not whether AI will enter mental healthcare, but how thoughtfully it is integrated into practice. AI is a tool, and its impact will depend on the judgment and clinical stewardship guiding its use.

Nelly Abulata, MD, PhD, MBA, PgDipTQMH, is a physician, educator, and global health innovation strategist. She is a professor of hematology at the department of clinical and chemical pathology, Kasr Al-Ainy Medical School & University Teaching Hospital, Cairo University, Egypt. Abulata has held leadership roles on numerous boards, advisory committees, and working groups, where she has made significant contributions to the advancement of healthcare and higher education.
[2]
Study explores role of AI automation in psychotherapy practice
University of Utah | Apr 6, 2026

Psychotherapy has always been a deeply human endeavor: a patient talking, a therapist listening and responding, and healing happening through words. But with the rapid rise of conversational artificial intelligence, particularly large language models (LLMs), that paradigm is shifting fast.

A team of University of Utah researchers is tackling this change, but not by asking, "Will robots replace therapists?" Rather, they explore more practical questions: What are we automating, and how much?

"The history of new technology like this is almost always about collaboration, and it's about how it supports the human expert in doing the work they can do," said Zac Imel, a professor of educational psychology and lead author of a new study titled "A Framework for Automation in Psychotherapy." "It might be useful to think about frameworks for understanding the different types of work that could be done through automation, and that's what this paper is."

The study is the result of a cross-campus collaboration among researchers from the U's College of Engineering, School of Medicine and College of Education.

Simply put, automation is when machines perform tasks humans have previously done. In therapy, that could range from a chatbot delivering prewritten coping tips to AI systems that take and organize notes, analyze therapy sessions and provide feedback to clinicians, or even talk directly to patients.

Varying degrees of automation

Co-author Vivek Srikumar uses self-driving cars as an analogy for the varying levels of automation.

"The automobile industry has been introducing driver assistance systems in our cars for many years now, and the extreme end is self-driving cars," said Srikumar, an associate professor at the Kahlert School of Computing. "This paper can be seen from that perspective. The extreme version of AI in psychotherapy is an AI therapist, but there are different levels of automation that might be associated with different amounts of risk. You might have different capabilities or assistance that is provided to therapists, to clients, to organizations by AI."

Imel and Srikumar are long-time collaborators who teamed up with Brent Kious, an associate professor of psychiatry, to craft the automation framework, which was posted in advance of publication by Current Directions in Psychological Science. The team outlined four categories, representing different levels of automation along a continuum:

Category A: Scripted systems. Content is prewritten by humans but provided to patients by chatbots that follow decision trees.
Category B: AI evaluates therapists. The AI reviews therapy sessions and gives feedback or ratings.
Category C: AI assists therapists. The AI suggests interventions, prompts, or phrasing, but a human therapist delivers care.
Category D: AI provides therapy directly. An autonomous agent generates responses and interacts with patients, possibly with supervision.

The team evaluated each category for its potential utility and risk levels, which vary widely. A scripted chatbot, an AI coaching tool for therapists, and a fully autonomous AI therapist are fundamentally different technologies with different risks. However, it's often not clear to users, or even health systems, which technology they are using.
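One way to make that distinction visible is to attach an explicit automation label to each tool. The snippet below is a minimal, hypothetical sketch of how the four categories could be encoded as a data structure; the AutomationCategory names follow the paper's A-D levels, while the example tool names and the disclose helper are illustrative assumptions, not part of the published framework.

```python
# Hypothetical sketch: represent the four automation categories as an explicit
# label so users and health systems can see a tool's level of automation.
# Example tools and the disclose() helper are illustrative, not from the paper.

from enum import Enum

class AutomationCategory(Enum):
    A_SCRIPTED = "Prewritten content delivered by a chatbot via decision trees"
    B_EVALUATES_THERAPIST = "AI reviews sessions and gives therapists feedback or ratings"
    C_ASSISTS_THERAPIST = "AI suggests interventions or phrasing; a human delivers care"
    D_AUTONOMOUS_THERAPY = "An AI agent generates responses and interacts with patients directly"

def disclose(tool_name: str, category: AutomationCategory) -> str:
    """Produce a plain-language disclosure string for users and administrators."""
    level = category.name.split("_")[0]  # "A", "B", "C", or "D"
    return f"{tool_name} operates at level {level}: {category.value}."

# Example: labeling hypothetical tools along the continuum.
print(disclose("CopingTipsBot", AutomationCategory.A_SCRIPTED))
print(disclose("SessionCoach", AutomationCategory.B_EVALUATES_THERAPIST))
```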
Weighing risks and benefits

"By cataloging the various levels of automation, the same question takes on different flavors at various levels, questions about risk, questions about consent, who gets to consent and how much consent and the impact of potential mistakes and the questions about who and how much responsibility is borne by various parties," Srikumar said. "All of these things, the questions remain the same, but the impact of these questions changes."

The team is particularly interested in improving the way clinicians are evaluated and mentored to improve the level of care provided to patients.

"We are currently partnering with SafeUT, Utah's statewide text-based crisis line, to develop tools that help evaluate crisis counselors' sessions so that they can get feedback to maintain key skills and even develop new ones as we learn more about crisis counseling," Kious said.

Evaluation and training are where large language models can support therapists without coming close to replacing them, Imel said. Current methods are no match for the scale of need in mental health care.

Automating without replacing human therapists

"To evaluate a psychotherapy session is tremendously labor-intensive. It's slow, it's unreliable, it rarely gets used," Imel said. "You're not recording your sessions and then mailing them off to an expert who can listen to them and evaluate them and give you feedback and then send it back to you so you can learn from it."

Here, appropriately trained LLMs can capture core components of treatment and provide that information back to therapists quickly, often in real time.

The researchers note that anyone can now turn to ChatGPT for counseling that might resemble psychotherapy. LLMs are designed to be engaging and sound empathetic, and are trained on vast datasets, but they don't necessarily use evidence-based psychotherapy techniques. Accordingly, they carry huge risks, since they are known to fabricate information, encode biases and respond unpredictably.

"Why would one want to deploy the riskiest version of a tool when there are so many lighter versions of it that we can already deploy that are going to make life easier?" Srikumar said. "A note-taking application, for example, something that maintains notes across a session. These are already going to improve the quality of life for clinicians, the quality of service."

The team also envisions a role for AI in crisis hotlines someday.

"It's a really challenging environment where you don't know anything about the people you're talking to. They're calling in, you may only have five or six talk turns to connect with them. You have a very confined space to try and help this person and get them safe and reduce risk," Srikumar said. "What I do foresee is that future crisis counseling systems will be heavily augmented by AI because the scale is too big to be satisfied without automation."

Source: University of Utah
Journal reference: DOI: 10.1177/09637214251386047
[3]
'How are you using AI?' Your therapist should ask you that question, experts argue
[Photo: ChatGPT, Claude and Character.AI are chatbots powered by artificial intelligence that people are increasingly using. Kiichiro Sato/AP]

Increasingly, teens and adults are turning to artificial intelligence chatbots for companionship and emotional support, recent studies and surveys show. And so, mental health care providers should inquire if and how their patients are using this technology, just like they seek information on sleep, diet, exercise and alcohol consumption. That's according to a new paper out in JAMA Psychiatry.

"We're not saying that AI use is good or bad," says Shaddy Saba, an assistant professor at New York University's Silver School of Social Work, "just like we wouldn't say, substance use is necessarily good or bad, [or] consulting with a friend about something is good or bad." However, learning about a person's use of AI for emotional support and advice could provide valuable insight into someone's life and mental health status, he says. "Our job is to understand why people are behaving as they are -- in this case, why they are seeking help from an AI system," adds Saba. "And to learn about what it's doing for them, what it's not doing for them."

Saba and his co-author's recommendations are "very aligned" with recommendations by the American Psychological Association (APA) in a health advisory released in November of last year, says the APA's Vaile Wright. Asking what a patient is getting out of their conversations with an AI chatbot sets "a foundation for the therapist to better know how they are trying to navigate their emotional wellbeing and their mental illness," says Wright.

"People are using these tools on a regular basis to ask about how to cope with stressful experiences, personal relationship challenges," explains Saba. And some are using chatbots for advice on how to cope with symptoms of anxiety and depression. "To the extent that we can prompt our clients to bring these conversations, in increasing detail, even into the therapy room, I think there's potentially a treasure trove of information," he says. It could be information about the main causes of stress in someone's life, or if they are turning to a chatbot as a way to avoid confrontations.

"Let's say, for example, you have a client who is having relationship issues with their spouse," says the APA's Wright. "And instead of trying to have open conversations with their spouse about how to get their needs met, they're instead going to the chatbot to either fill those needs or to avoid having these difficult conversations with their spouse." That background will help a therapist better support the patient, she explains. "Helping them understand how to have a safe conversation with their spouse, helping them understand the limitations of AI as a tool for filling those gaps in those needs."

Discussing use of AI is also a chance to learn about things a client might not voluntarily share with a therapist, says psychiatrist Dr. Tom Insel, former director of the National Institute of Mental Health. "People often use the chatbots to talk about things that they can't talk about with other people because they're so worried about being judged," he says. For example, suicidal thoughts may be something a patient is reluctant to share with their therapist, but that is critical for the therapist to know to keep the patient safe.

When it comes to first broaching the subject with patients, Saba suggests doing it without any judgment. "We don't want to make clients feel like we're judging them," he says.
"They're just not going to want to work with us in general if we do that." He recommends therapists approach the topic with genuine curiosity, and offers suggested language for these conversations. "'You know, A.I. is something that's kind of rapidly growing, and I'm hearing from a lot of people that they're using things like ChatGPT for emotional support," he suggests. "'Is that the case for you? Have you tried that?'" He also recommends asking specific questions about what they found helpful so they can better understand how a patient is using these tools. It could also help a therapist figure out if a chatbot can complement therapy in helpful ways, says Insel, such as to vet which topics to bring to their sessions or to vent about day-to-day life. In a way, therapy and chatbots "could be aligned to work together," says Insel. Saba and his co-author, William Weeks also suggest asking patients if they found any chatbot interactions unhelpful or problematic, and also offering to share risks of using chatbots for emotional support. For example, the risks to data privacy, because many AI companies use the conversations - even sensitive ones - to further train their models. There are also risks of treating a chatbot like a therapist, says Insel. Talking with a chatbot about one's mental health is "the opposite of therapy," he says, because chatbots are designed to affirm and flatter, reinforcing users' thoughts and feelings. "Therapy is there to help you change and to challenge you," says Insel, ""and to get you to talk about things that are particularly difficult. Psychologist Cami Winkelspecht has a private practice working primarily with children and adolescents in Wilmington, Del. She has been considering adding questions about social media and AI use to her intake form, and appreciated Saba's study as it offered some sample questions to include. Over the past year or so, Winkelspecht has had a growing number of clients and their parents ask her for help with using AI for brainstorming and other tasks in ways that don't break a school's honor code. So, she's had to familiarize herself with the technology in order to be able to support her clients. Along the way, she's come to realize that therapists and kids' parents need to be more aware of how children and teens are using their digital devices -- both social media and AI chatbots. "We don't necessarily think about what they're doing with their phones quite as much," says Winkelspecht. "And I think it's pretty clear that we need to be doing that more and encouraging ourselves to have that conversation."
[4]
AI in the mental health care workforce is met with fear, pushback -- and enthusiasm
<iframe src="https://www.npr.org/player/embed/nx-s1-5771707/nx-s1-9695614" width="100%" height="290" frameborder="0" scrolling="no" title="NPR embedded audio player"> Artificial intelligence has arrived in the field of mental health. Large health systems and independent therapists alike have begun to adopt different AI tools to manage the delivery of mental health treatment. The speed of the adoption -- alongside disturbing incidents of individuals using general-use AI chatbots with catastrophic consequences -- is causing some concern among practitioners and researchers. "There is a lot of fear and anxiety about AI," says psychologist Vaile Wright, senior director of health care innovation at the American Psychological Association (APA). "And in particular fear around AI replacing jobs." Those concerns were a key issue last month, when 2,400 mental health care providers for Kaiser Permanente in Northern California and the Central Valley went on a 24-hour strike. One of the therapists who went on strike is Ilana Marcucci-Morris. Since 2019, Marcucci-Morris worked as a triage clinician at Kaiser Permanente's telepsychiatry intake hub. But that changed in May 2025. "I have been reassigned from triage to other duties," says Marcucci-Morris, a licensed clinical social worker based at KP in Oakland, California. The change in her role was driven by KP's efforts to revamp its triage system, she says. "What used to always be a 10 to 15-minute screening from a licensed clinician like myself is now being conducted by unlicensed lay operators following a script," she says. "Or, an E-visit." She and her colleagues worry that this downsizing of the triage system is paving the way for AI to take over their jobs. At Kaiser Permanente in Walnut Creek, California, the triage team of nine providers has been cut to three, says Harimandir Khalsa, a marriage and family therapist, who also works as a triage clinician. "The jobs that we did [are] being handled by these telephone service representatives," says Khalsa. The 24-hour strike on March 18 protested these changes among other things. "Part of our unfair labor practice strike really is about the erosion of licensed triage within the health plan," says Marcucci-Morris. "At Kaiser Permanente, our use of AI does not replace clinical expertise," Lionel Sims, senior vice president of human resources at Kaiser Permanente Northern California, said in a statement to NPR. The health system, which is both a direct care provider and an insurer, confirmed to NPR that it is assessing AI tools from a U.K. company called Limbic. "We are currently evaluating the use of Limbic to assist members in accessing care. Limbic is not in use at this time," the statement reads. "I have not seen within mental health care any jobs be replaced by AI as of yet," says Wright of the American Psychological Association. Instead, she says, the growing adoption of AI in mental health care has been mostly limited to certain kinds of tasks. "One clear positive use case of AI tools is in the use of improving efficiencies around documentation and other automated types of activities," she says. Like billing insurance companies or updating electronic health records -- time consuming tasks that bog therapists down. "Most providers want to help people and when they get mired down with excessive paperwork or documentation in order to get paid, that takes away time from direct patient care," Wright adds. 
"And so I do think that there are benefits to incorporating these tools into your practice based on your personal comfort level." There are nearly 40 different products with transcription and other "documentation support" services for providers, she says. One such company is Blueprint, an AI assistant that summarizes sessions, updates electronic health records, and helps individual therapists track patient progress. Other companies are building AI tools for large health systems. For example, Limbic has built AI assistants to perform a range of tasks including intake, and patient support for big health systems. "We are deployed across 63% of the U.K.'s National Health Service and we are currently serving patients in 13 U.S. states," says founder and CEO Ross Harper. One Limbic chatbot, called Limbic Care, is trained on cognitive behavioral therapy skills and provides direct patient support. "Let's imagine you're an individual," says Harper. "It's 3 a.m. in the morning on a Wednesday. You can't sleep and you think 'I may actually need some help.'" In such a scenario, a patient can connect right away to Limbic Care on the patient portal. "What Limbic Care would do is it would provide evidence-based cognitive behavioral therapy tools and techniques so that you can really begin working on the challenges that you're experiencing right there and then," says Harper. Despite the growing adoption of AI tools for administrative tasks by health systems and mental health care providers, "we're not seeing a lot of clinical use of AI today," says psychiatrist Dr. John Torous, director of digital psychiatry at Beth Israel Deaconess Medical Center in Boston. One reason, he says, is that while the AI tools are exciting, "they're not well tested." Also, "it could be very expensive to run these systems," he adds. "You need a large IT team. You need infrastructure. There's safety things that have to go in place." Most small mental health practices and community mental health centers do not have the infrastructure or expertise to use these AI platforms, he says. The APA's Wright agrees. "At this point, because there is little regulation, it is incumbent on the provider to do the legwork and the research to figure out, 'Are the tools that are on the market and available, safe and effective?'" she says. However, Torous predicts that adoption of AI will keep growing as the technology improves. "I think AI is going to transform the future of mental health care for the better," he says. "But we as the clinical community have to learn to use it and work for it. So that means there's going to be a lot more training. We have to upskill ourselves." Refusing to use the technology is no longer an option, he adds. "Because if you take this approach and companies come in with products that may be good, maybe really bad and dangerous, we won't know how to evaluate them." In fact, involving mental health care professionals in the development of AI tools will only help make them better, adds Torous. That's what the striking mental health workers at Kaiser Permanente in northern California and the Central Valley would like to see their employer do -- involve them in the development and rolling out of AI tools. "If AI is utilized, don't keep us clinicians out of the human process of engaging with our patients in determining the right level of care," says Khalsa. As the technology improves to be more useful to mental health care providers, Torous thinks human providers will likely work hand-in-hand with AI assistants. 
"What we're probably moving towards is something called a hybrid or blended model of care," he says. Providers would still treat patients and provide therapy, while AI assistants or chatbots help patients do therapy homework, practice skills, and give providers "real-time feedback" on patients. Vaile Wright of the APA sees an ongoing role for flesh-and-blood therapists. "And that's in part because there are no AI digital solutions that can replace human-driven psychotherapy or care."
[5]
How Far Can Automation and AI Support Psychotherapy? | Newswise
Newswise -- Psychotherapy has always been a deeply human endeavor: a patient talking, a therapist listening and responding, and healing happening through words. But with the rapid rise of conversational artificial intelligence, particularly large language models (LLMs), that paradigm is shifting fast.

A team of University of Utah researchers is tackling this change, but not by asking, "Will robots replace therapists?" Rather, they explore more practical questions: What are we automating and how much?

"The history of new technology like this is almost always about collaboration, and it's about how it supports the human expert in doing the work they can do," said Zac Imel, a professor of educational psychology and lead author of a new study titled "A Framework for Automation in Psychotherapy." "It might be useful to think about frameworks for understanding the different types of work that could be done through automation, and that's what this paper is."

The study is the result of a cross-campus collaboration among researchers from Utah's College of Engineering, School of Medicine and College of Education.

Simply put, automation is when machines perform tasks humans have previously done. In therapy, that could range from a chatbot delivering prewritten coping tips to AI systems that take and organize notes, analyze therapy sessions and provide feedback to clinicians, or even talk directly to patients.

Co-author Vivek Srikumar uses self-driving cars as an analogy for the varying levels of automation. "The automobile industry has been introducing driver assistance systems in our cars for many years now, and the extreme end is self-driving cars," said Srikumar, an associate professor at the Kahlert School of Computing. "This paper can be seen from that perspective. The extreme version of AI in psychotherapy is an AI therapist, but there are different levels of automation that might be associated with different amounts of risk. You might have different capabilities or assistance that is provided to therapists, to clients, to organizations by AI."

Imel and Srikumar are long-time collaborators who teamed up with Brent Kious, an associate professor of psychiatry, to craft the automation framework, which was posted in advance of publication by Current Directions in Psychological Science. The team outlined four categories, representing different levels of automation along a continuum.

The team evaluated each category for its potential utility and risk levels, which vary widely. A scripted chatbot, an AI coaching tool for therapists, and a fully autonomous AI therapist are fundamentally different technologies with different risks. However, it's often not clear to users, or even health systems, which technology they are using.

"By cataloging the various levels of automation, the same question takes on different flavors at various levels, questions about risk, questions about consent, who gets to consent and how much consent and the impact of potential mistakes and the questions about who and how much responsibility is borne by various parties," Srikumar said. "All of these things, the questions remain the same, but the impact of these questions changes."

The team is particularly interested in improving the way clinicians are evaluated and mentored to improve the level of care provided to patients.
"We are currently partnering with SafeUT, Utah's statewide text-based crisis line, to develop tools that help evaluate crisis counselors' sessions so that they can get feedback to maintain key skills and even develop new ones as we learn more about crisis counseling," Kious said. Evaluation and training are where large language models can support therapists without coming close to replacing them, Imel said. Current methods are no match to the scale of need in mental health care. "To evaluate a psychotherapy session is tremendously labor-intensive. It's slow, it's unreliable, it rarely gets used," Imel said. "You're not recording your sessions and then mailing them off to an expert who can listen to them and evaluate them and give you feedback and then send it back to you so you can learn from it." Here, appropriately trained LLMs can quickly capture core components of treatment and provide that information back to therapists quickly-often in real time. The researchers note that anyone can now turn to ChatGPT for counseling that might resemble psychotherapy. LLMs are designed to be engaging and sound empathetic, and are trained on vast datasets, but they don't necessarily use evidence-based psychotherapy techniques. Accordingly, they carry huge risks since they are known to fabricate information, encode biases and respond unpredictably. "Why would one want to deploy the riskiest version of a tool when there are so many lighter versions of it that we can already deploy that are going to make life easier?" Srikumar said. "A note-taking application, for example, something that maintains notes across a session. These are already going to improve the quality of life for clinicians, the quality of service." The team also envisions a role for AI in crisis hotlines someday. "It's a really challenging environment where you don't know anything about the people you're talking to. They're calling in, you may only have five or six talk turns to connect with them. You have a very confined space to try and help this person and get them safe and reduce risk," Srikumar said. "What I do foresee is that future crisis counseling systems will be heavily augmented by AI because the scale is too big to be satisfied without automation." The study, titled "A Framework for Automation in Psychotherapy," appears in the April edition of Current Directions in Psychological Science. Lead author Zac Imel is a co-founder of Lyssn, a tech company in Seattle developing AI-based quality-improvement programs for behavioral health services. Co-authors include researchers with the University of Washington, University of Pennsylvania and the Alan Turing Institute.
As AI chatbots for emotional support gain traction, researchers at the University of Utah propose a four-level automation framework to guide AI integration in mental healthcare. Meanwhile, mental health care providers express concerns about job displacement, and the American Psychological Association urges therapists to discuss AI usage with patients. The debate centers on how artificial intelligence can support human therapists without replacing the deeply personal nature of psychotherapy.
Artificial intelligence is entering mental healthcare at a rapid pace, driven by a global crisis in access to psychological services. The World Health Organization estimates that depression alone affects more than 300 million people globally, yet patients often face months-long waits for therapy [1]. AI tools for mental health are expanding to fill this gap, from conversational therapy chatbots like Woebot and Wysa to predictive analytics that identify suicide risk [1]. Digital therapeutics platforms such as reSET deliver structured cognitive behavioral therapy modules that adapt based on patient progress, while behavioral monitoring applications track sleep patterns and smartphone usage as potential mental health indicators [1].
Source: Medscape
Recognizing the need for clarity around AI automation in psychotherapy, researchers from the University of Utah developed a comprehensive framework published in Current Directions in Psychological Science. Led by educational psychology professor Zac Imel, the team outlined four distinct categories representing different levels of automation in therapy [2]. Category A involves scripted systems where chatbots deliver prewritten content through decision trees. Category B focuses on AI evaluating therapists by reviewing sessions and providing feedback. Category C describes AI assisting therapists with suggested interventions while human therapists deliver care. Category D represents fully autonomous AI providing therapy directly to patients [2]. Associate professor Vivek Srikumar from the Kahlert School of Computing compared these levels to self-driving cars, noting that different automation levels carry vastly different risks and benefits [5].
Source: Newswise
A new paper in JAMA Psychiatry, echoing American Psychological Association guidance, recommends that mental health care providers ask patients about their use of AI chatbots for emotional support, much as they inquire about sleep, diet, and exercise. "We're not saying that AI use is good or bad," explains Shaddy Saba, assistant professor at New York University's Silver School of Social Work, "just like we wouldn't say substance use is necessarily good or bad" [3]. Learning about patient interactions with ChatGPT or other AI systems can provide valuable insight into coping strategies, relationship challenges, and topics patients may avoid discussing directly [3]. Former National Institute of Mental Health director Dr. Tom Insel notes that people often use chatbots to discuss things they cannot share with others for fear of judgment, including suicidal thoughts [3]. However, therapists should also inform patients about risks, particularly data privacy concerns, as many AI companies use conversations to train their models [3].

The rapid adoption of AI in mental health has sparked anxiety among practitioners. In March 2026, 2,400 mental health care providers at Kaiser Permanente in Northern California and the Central Valley staged a 24-hour strike, partly protesting changes to the triage system [4]. Licensed clinical social worker Ilana Marcucci-Morris described how her triage role was reassigned in May 2025, with 10-to-15-minute screenings by licensed clinicians replaced by unlicensed operators following scripts or e-visits [4]. At Kaiser Permanente's Walnut Creek facility, the triage team shrank from nine providers to three [4]. Kaiser Permanente confirmed it is evaluating AI tools from U.K. company Limbic, though the technology is not currently in use, and emphasized that "our use of AI does not replace clinical expertise" [4].
Despite fears, the American Psychological Association's Vaile Wright notes she has not seen AI replace mental health jobs yet. Instead, AI adoption focuses on improving efficiency around documentation and other automated activities [4]. Nearly 40 different products now offer transcription and documentation support services, helping therapists with time-consuming tasks like billing insurance companies and updating electronic health records [4]. The University of Utah team is partnering with SafeUT, Utah's statewide text-based crisis line, to develop tools that evaluate crisis counselors' sessions and provide feedback for skill maintenance and development [2]. Imel emphasizes that evaluating psychotherapy sessions is "tremendously labor-intensive" and rarely done, but large language models can quickly capture core treatment components and provide real-time feedback to therapists [5].

As AI in mental health continues to expand, questions about patient safety, consent, and clinical responsibility intensify. The University of Utah researchers note that while anyone can now turn to ChatGPT for counseling resembling psychotherapy, large language models carry significant risks since they fabricate information, encode biases, and respond unpredictably [5]. Srikumar questions why one would deploy the riskiest version of a tool when lighter versions like note-taking applications already make life easier [5]. Companies like Limbic, deployed across 63% of the U.K.'s National Health Service and in 13 U.S. states, offer AI assistants for patient intake and support, including a chatbot trained on cognitive behavioral therapy that provides evidence-based tools at 3 a.m. when human therapists are unavailable [4]. The challenge ahead lies in determining which tasks benefit from clinician support through AI and which require the irreplaceable human connection at the heart of psychotherapy.
Source: NPR
Summarized by Navi