Curated by THEOUTPOST
On Sat, 8 Feb, 8:02 AM UTC
3 Sources
[1]
Physicians' medical decisions benefit from chatbot, study suggests
Artificial intelligence-powered chatbots are getting pretty good at diagnosing some diseases, even when they are complex. But how do chatbots do when guiding treatment and care after the diagnosis? For example, how long before surgery should a patient stop taking prescribed blood thinners? Should a patient's treatment protocol change if they've had adverse reactions to similar drugs in the past? These sorts of questions don't have a textbook right or wrong answer -- it's up to physicians to use their judgment.

Jonathan H. Chen, MD, PhD, assistant professor of medicine, and a team of researchers are exploring whether chatbots, a type of large language model, or LLM, can effectively answer such nuanced questions, and whether physicians supported by chatbots perform better. The answers, it turns out, are yes and yes.

The research team tested how a chatbot performed when faced with a variety of clinical crossroads. A chatbot on its own outperformed doctors who could access only an internet search and medical references, but armed with their own LLM, the doctors, from multiple regions and institutions across the United States, kept up with the chatbots.

"For years I've said that, when combined, human plus computer is going to do better than either one by itself," Chen said. "I think this study challenges us to think about that more critically and ask ourselves, 'What is a computer good at? What is a human good at?' We may need to rethink where we use and combine those skills and for which tasks we recruit AI."

A study detailing these results was published in Nature Medicine on Feb. 5. Chen and Adam Rodman, MD, assistant professor at Harvard University, are co-senior authors. Postdoctoral scholars Ethan Goh, MD, and Robert Gallo, MD, are co-lead authors.

Boosted by chatbots

In October 2024, Chen and Goh led a team that ran a study, published in JAMA Network Open, testing how the chatbot performed when diagnosing diseases; it found that the chatbot's accuracy was higher than that of doctors, even when the doctors were using a chatbot. The current paper digs into the squishier side of medicine, evaluating chatbot and physician performance on questions that fall into a category called "clinical management reasoning."

Goh explains the difference like this: Imagine you're using a map app on your phone to guide you to a certain destination. Using an LLM to diagnose a disease is sort of like using the map to pinpoint the correct location. How you get there is the management reasoning part -- do you take backroads because there's traffic? Stay the course, bumper to bumper? Or wait and hope the roads clear up?

In a medical context, these decisions can get tricky. Say a doctor incidentally discovers that a hospitalized patient has a sizeable mass in the upper part of the lung. What would the next steps be? The doctor (or chatbot) should recognize that a large nodule in the upper lobe of the lung statistically has a high chance of spreading throughout the body. The doctor could immediately take a biopsy of the mass, schedule the procedure for a later date or order imaging to try to learn more. Determining which approach is best suited for the patient comes down to a host of details, starting with the patient's known preferences. Are they reluctant to undergo an invasive procedure? Does the patient's history show a pattern of missed follow-up appointments? Is the hospital's health system reliable when organizing follow-up appointments? What about referrals? These types of contextual factors are crucial to consider, Chen said.
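To make that idea concrete, here is a toy sketch, entirely hypothetical and not from the study, of how such a trade-off might be represented: each management option for the lung-mass scenario is scored against patient-specific context, and the ranking shifts as the context changes. The context fields and scores are invented for illustration and carry no clinical meaning.

```python
# Toy illustration of management reasoning as context-dependent trade-offs.
# The options come from the lung-mass example above; the scores and context
# fields are invented for this sketch and carry no clinical meaning.

from dataclasses import dataclass

@dataclass
class PatientContext:
    tolerates_invasive_procedures: bool  # patient preference
    reliable_follow_up: bool             # history of keeping appointments
    dependable_referrals: bool           # can the system schedule reliably?

def rank_options(ctx: PatientContext) -> list[tuple[str, int]]:
    """Rank next steps for an incidental upper-lobe lung mass in context."""
    scores = {
        "immediate biopsy":      2 if ctx.tolerates_invasive_procedures else -1,
        "schedule biopsy later": 1 if ctx.reliable_follow_up else -2,
        "order further imaging": 1,  # low-risk default in this toy model
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# A procedure-averse patient with spotty follow-up reorders the options:
print(rank_options(PatientContext(False, False, True)))
# [('order further imaging', 1), ('immediate biopsy', -1),
#  ('schedule biopsy later', -2)]
```

The point of the sketch is only that there is no single correct output: change the context and a different option tops the list, which is why the study scored the reasoning behind each decision rather than a single right answer.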
The team designed a trial to study clinical management reasoning performance in three groups: the chatbot alone, 46 doctors with chatbot support, and 46 doctors with access only to internet search and medical references. They selected five de-identified patient cases and gave them to the chatbot and to the doctors, all of whom provided a written response detailing what they would do in each case, why, and what they considered when making the decision. In addition, the researchers tapped a group of board-certified doctors to create a rubric for judging whether a medical decision had been appropriately assessed. The responses were then scored against the rubric.

To the team's surprise, the chatbot outperformed the doctors who had access only to the internet and medical references, ticking off more items on the rubric than the doctors did. But the doctors who were paired with a chatbot performed as well as the chatbot alone.

A future of chatbot doctors?

Exactly what gave the physician-chatbot collaboration a boost is up for debate. Does using the LLM force doctors to be more thoughtful about the case? Or is the LLM providing guidance that the doctors wouldn't have thought of on their own? It's a future direction of exploration, Chen said.

The positive outcomes for chatbots and physicians paired with chatbots raise an ever-popular question: Are AI doctors on their way? "Perhaps it's a point in AI's favor," Chen said. But rather than replacing physicians, the results suggest that doctors might want to welcome a chatbot assist.

"This doesn't mean patients should skip the doctor and go straight to chatbots. Don't do that," he said. "There's a lot of good information out there, but there's also bad information. The skill we all have to develop is discerning what's credible and what's not right. That's more important now than ever."

Researchers from VA Palo Alto Health Care System, Beth Israel Deaconess Medical Center, Harvard University, University of Minnesota, University of Virginia, Microsoft and Kaiser Permanente contributed to this work. The study was funded by the Gordon and Betty Moore Foundation, the Stanford Clinical Excellence Research Center and the VA Advanced Fellowship in Medical Informatics. Stanford's Department of Medicine also supported the work.
[2]
Study suggests physicians make better decisions with help of AI chatbots
[3]
Better Together: Physicians Using AI Showed Improved Performance in Randomized Controlled Trial
Boston, MA -- It didn't take long for artificial intelligence (AI) to outperform human physicians in diagnostic reasoning -- the first, if critical, step in clinical reasoning and patient care. Now, a study published in Nature Medicine suggests physicians who have access to large language models (LLMs), also known as chatbots, demonstrate improved performance on several patient care tasks compared to physicians without access to LLMs.

"Early implementation of AI into healthcare has largely been directed at clerical clinical workflows, such as portal messaging," said Adam Rodman, MD, MPH, Director of AI Programs at Beth Israel Deaconess Medical Center (BIDMC). "But one of the theoretical strengths of chatbots is their ability to serve as a cooperation partner, augmenting human cognition. Our findings demonstrate that improving physician performance, even in a task as complex as open-ended decision-making, represents a promising application. However, this will require rigorous validation to realize LLMs' potential for enhancing patient care."

Rodman and colleagues assessed 92 practicing physicians' decision-making processes as they worked through five hypothetical patient cases, each based on real, de-identified patient encounters. The researchers focused on the physicians' management reasoning, a step in clinical reasoning that encompasses decision-making around testing and treatment, balanced against patient preferences, social factors, costs, and risk.

"Unlike diagnostic reasoning, a task often with a single right answer, which LLMs excel at, management reasoning may have no right answer and involves weighing trade-offs between inherently risky courses of action," said Rodman.

When their responses to the hypothetical patient cases were scored, physicians using the chatbot scored significantly higher than those using conventional resources only. Chatbot users also spent more time per case -- by nearly two minutes. Additionally, physicians who used LLMs provided responses that carried a lower likelihood of mild-to-moderate harm: potential for mild-to-moderate harm was observed in 3.7 percent of LLM-assisted responses compared with 5.3 percent in the conventional-resources group. However, ratings of potential for severe harm were nearly identical between the physician groups.

"The availability of an LLM improved physicians' management reasoning compared to conventional resources only, with comparable scores between physicians randomized to use AI and AI by itself. This suggests a future use for LLMs as a helpful adjunct to clinical judgment," said Rodman. "Further exploration into whether the LLM is merely encouraging users to slow down and reflect more deeply, or whether it is actively augmenting the reasoning process, would be valuable."

Co-authors included Hannah Kerman, Jason A. Freed, Josephine A. Cool and Zahir Kanjee of Beth Israel Deaconess Medical Center; Ethan Goh, Eric Strong, Yingjie Weng, Neera Ahuja, Arnold Milstein, Jason Hom and Jonathan H. Chen of Stanford University; Robert Gallo of VA Palo Alto Health Care System; Kathleen P. Lane and Andrew P.J. Olson of University of Minnesota Medical School; Andrew S. Parsons of the University of Virginia School of Medicine; Eric Horvitz of Microsoft; and Daniel Yang of Kaiser Permanente. Rodman, Cool and Kanjee disclose funding from the Gordon and Betty Moore Foundation. Please see the publication for a complete list of disclosures and funders.
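To put those harm figures side by side, here is a minimal sketch of the arithmetic. Only the two percentages are taken from the release; group sizes are not given, so no significance test is attempted.

```python
# Compare the reported potential-harm rates between study arms.
# Only the percentages below come from the release; the arithmetic is
# illustrative, not a statistical analysis.

llm_assisted_harm = 0.037   # mild-to-moderate harm potential, LLM-assisted
conventional_harm = 0.053   # mild-to-moderate harm potential, conventional

absolute_difference = conventional_harm - llm_assisted_harm
relative_reduction = absolute_difference / conventional_harm

print(f"Absolute risk difference: {absolute_difference:.1%}")  # 1.6%
print(f"Relative reduction:       {relative_reduction:.0%}")   # ~30%
```

In words: LLM-assisted responses carried roughly a 1.6 percentage-point lower rate of mild-to-moderate harm potential, about a 30 percent relative reduction, while severe-harm ratings were nearly identical.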
About Beth Israel Deaconess Medical Center

BIDMC is a part of Beth Israel Lahey Health, a healthcare system that brings together academic medical centers and teaching hospitals, community and specialty hospitals, more than 4,700 physicians and 39,000 employees in a shared mission to expand access to great care and advance the science and practice of medicine through groundbreaking research and education.
A new study reveals that AI-powered chatbots can improve physicians' clinical management reasoning, outperforming doctors using conventional resources and matching the performance of standalone AI in complex medical decision-making scenarios.
A groundbreaking study published in Nature Medicine on February 5, 2025, has revealed that artificial intelligence-powered chatbots can significantly improve physicians' clinical management reasoning skills. The research, led by Dr. Jonathan H. Chen from Stanford University and Dr. Adam Rodman from Harvard University, demonstrates that chatbot-assisted physicians match the performance of standalone AI, and that both outperform doctors using conventional resources, in complex medical decision-making scenarios [1].
The research team designed a trial to evaluate clinical management reasoning performance across three groups:
- A chatbot (LLM) working alone
- 46 physicians with chatbot support
- 46 physicians with access only to internet search and conventional medical references
Participants were presented with five de-identified patient cases and asked to provide detailed written responses outlining their decision-making process. A rubric created by board-certified doctors was used to assess the appropriateness of medical judgments and decisions [2].
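As a rough illustration of checklist-style rubric scoring, here is a minimal sketch. In the study, physician raters judged each written response; the rubric items, keywords, and sample response below are hypothetical stand-ins so the sketch runs end to end.

```python
# Minimal sketch of checklist-style rubric scoring. In the study, trained
# physician raters judged each response; the naive keyword "judge" below is
# a hypothetical stand-in, not the study's method.

rubric = {
    # rubric item                 -> keywords the naive judge looks for
    "recognizes malignancy risk":  ("malignan", "cancer", "high risk"),
    "elicits patient preferences": ("prefer", "discuss", "shared decision"),
    "names a concrete next step":  ("biopsy", "imaging", "follow-up"),
}

def score_response(response: str) -> float:
    """Return the fraction of rubric items the response appears to address."""
    text = response.lower()
    hits = sum(any(k in text for k in keys) for keys in rubric.values())
    return hits / len(rubric)

sample = ("Given the high risk of malignancy, I would discuss the patient's "
          "preferences and arrange imaging with a defined follow-up interval.")
print(f"{score_response(sample):.0%} of rubric items addressed")  # 100%
```

Scoring each response as the fraction of rubric items it addresses is what allows "ticking more items on the rubric" to be compared across the three arms.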
The study yielded several significant results:
- The chatbot alone outperformed doctors who had access only to internet search and medical references.
- Doctors paired with a chatbot performed as well as the chatbot alone, and better than doctors using conventional resources.
- Chatbot users spent more time per case, by nearly two minutes.
The research indicates that AI-assisted decision-making could lead to improved patient outcomes: potential for mild-to-moderate harm was observed in 3.7 percent of LLM-assisted responses compared with 5.3 percent in the conventional-resources group, while potential for severe harm was nearly identical between groups.
While the study demonstrates the potential of AI in enhancing clinical decision-making, several questions remain:
- Is the LLM merely encouraging physicians to slow down and reflect more deeply, or is it actively augmenting their reasoning?
- How should LLM assistance be rigorously validated before it is relied on in routine patient care?
Dr. Chen emphasizes that these results do not suggest replacing physicians with AI but rather highlight the potential for AI to augment human decision-making in complex medical scenarios [1].
As AI continues to evolve, the medical community must remain vigilant in evaluating its impact on patient care and developing guidelines for its ethical and effective use in clinical practice.
A recent study reveals that ChatGPT, when used alone, significantly outperformed both human doctors and doctors using AI assistance in diagnosing medical conditions, raising questions about the future of AI in healthcare.
6 Sources
A new study reveals that while AI models perform well on standardized medical tests, they face significant challenges in simulating real-world doctor-patient conversations, raising concerns about their readiness for clinical deployment.
3 Sources
A new AI tool has been developed to accurately draft responses to patient queries in Electronic Health Records (EHRs), potentially streamlining healthcare communication and improving patient care.
2 Sources
Recent studies highlight the potential of artificial intelligence in medical settings, demonstrating improved diagnostic accuracy and decision-making. However, researchers caution about the need for careful implementation and human oversight.
2 Sources
A recent study reveals that ChatGPT, an AI language model, demonstrates superior performance compared to trainee doctors in assessing complex respiratory diseases. This breakthrough highlights the potential of AI in medical diagnostics and its implications for healthcare education and practice.
3 Sources