Curated by THEOUTPOST
On Wed, 17 Jul, 12:02 AM UTC
2 Sources
[1]
AI tool can accurately draft responses to patients' EHR queries
NYU Langone | Jul 16, 2024

As part of a nationwide trend that emerged during the pandemic, many more of NYU Langone Health's patients started using electronic health record (EHR) tools to ask their doctors questions, refill prescriptions, and review test results. Many of these digital inquiries arrived via a communications tool called In Basket, which is built into NYU Langone's EHR system, EPIC.

Although physicians have always dedicated time to managing EHR messages, in recent years they saw a more than 30 percent annual increase in the number of messages received daily, according to an article by Paul A. Testa, MD, chief medical information officer at NYU Langone. Dr. Testa wrote that it is not uncommon for physicians to receive more than 150 In Basket messages per day. With health systems not designed to handle this kind of traffic, physicians ended up filling the gap, spending long hours after work sifting through messages. This burden is cited as a reason that half of physicians report burnout.

Now a new study, led by researchers at NYU Grossman School of Medicine, shows that an AI tool can draft responses to patients' EHR queries as accurately as their human healthcare professionals, and with greater perceived "empathy." The findings highlight these tools' potential to dramatically reduce physicians' In Basket burden while improving their communication with patients, as long as human providers review AI drafts before they are sent.

NYU Langone has been testing the capabilities of generative artificial intelligence (genAI), in which computer algorithms develop likely options for the next word in any sentence based on how people have used words in context on the internet. A result of this next-word prediction is that genAI chatbots can reply to questions in convincing, humanlike language.
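The next-word prediction described above can be illustrated with a deliberately tiny sketch: a bigram table that counts which word tends to follow which. Real genAI systems use large neural networks trained on vast corpora, not word-pair counts; this toy model and its sample corpus are only a conceptual analogy.

```python
# Toy analogy for next-word prediction: count word pairs in a corpus,
# then predict the most frequently observed continuation.
from collections import Counter, defaultdict


def build_bigram_model(corpus: str):
    """Count how often each word is followed by each other word."""
    words = corpus.lower().split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model


def predict_next(model, word: str) -> str:
    """Return the continuation seen most often in training, or '' if unseen."""
    followers = model.get(word.lower())
    if not followers:
        return ""
    return followers.most_common(1)[0][0]


# Hypothetical miniature "corpus" for illustration only.
model = build_bigram_model("the doctor reviews the message the doctor replies")
```

Here `predict_next(model, "the")` returns `"doctor"`, because "doctor" follows "the" more often than "message" does in the sample text; a genAI chatbot does something conceptually similar, but over billions of learned patterns rather than raw counts.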
NYU Langone in 2023 licensed "a private instance" of GPT-4, the latest relative of the famous ChatGPT chatbot, which let physicians experiment using real patient data while still adhering to data privacy rules. Published online July 16 in JAMA Network Open, the new study examined draft responses generated by GPT-4 to patients' In Basket queries, asking primary care physicians to compare them to the actual human responses to those messages.

"Our results suggest that chatbots could reduce the workload of care providers by enabling efficient and empathetic responses to patients' concerns," said lead study author William Small, MD, a clinical assistant professor in the Department of Medicine at NYU Grossman School of Medicine. "We found that EHR-integrated AI chatbots that use patient-specific data can draft messages similar in quality to human providers."

For the study, 16 primary care physicians rated 344 randomly assigned pairs of AI and human responses to patient messages on accuracy, relevance, completeness, and tone, and indicated whether they would use the AI response as a first draft or have to start from scratch in writing the patient message. It was a blinded study, so physicians did not know whether the responses they were reviewing were generated by humans or by the AI tool.

The research team found that the accuracy, completeness, and relevance of generative AI and human providers' responses did not differ statistically. Generative AI responses outperformed human providers in terms of understandability and tone by 9.5 percent. Further, the AI responses were more than twice as likely (125 percent more likely) to be considered empathetic and 62 percent more likely to use language that conveyed positivity (potentially related to hopefulness) and affiliation ("we are in this together"). On the other hand, AI responses were also 38 percent longer and 31 percent more likely to use complex language, so further training of the tool is needed, the researchers say.
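The "percent more likely" figures above translate into rates in a simple way: a result that is 125 percent more likely occurs at 2.25 times the baseline rate. A minimal sketch of that arithmetic, using a made-up 20 percent human baseline that is not from the study:

```python
# Arithmetic behind "X% more likely": the new rate is baseline * (1 + X/100).
def relative_rate(baseline: float, pct_more_likely: float) -> float:
    """Convert a 'percent more likely' figure into an absolute rate."""
    return baseline * (1 + pct_more_likely / 100)


human_empathy_rate = 0.20  # hypothetical baseline, for illustration only
ai_empathy_rate = relative_rate(human_empathy_rate, 125)      # 0.45 (2.25x)
ai_positivity_rate = relative_rate(human_empathy_rate, 62)    # 0.324 (1.62x)
```

So "125 percent more likely" is the same statement as "more than twice as likely," since the multiplier is 1 + 1.25 = 2.25.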
While humans responded to patient queries at a sixth-grade level, the AI wrote at an eighth-grade level, according to a standard measure of readability called the Flesch-Kincaid score.

The researchers argued that the use of private patient information by chatbots, rather than general internet information, better approximates how this technology would be used in the real world. Future studies will be needed to confirm whether private data specifically improved AI tool performance.

"This work demonstrates that the AI tool can build high-quality draft responses to patient requests," said corresponding author Devin Mann, MD, senior director of Informatics Innovation in NYU Langone's Medical Center Information Technology (MCIT). "With this physician approval in place, GenAI message quality will be equal in the near future in quality, communication style, and usability to responses generated by humans," added Dr. Mann, who is also a professor in the Departments of Population Health and Medicine.

Along with Dr. Small and Dr. Mann, study authors from NYU Langone were Beatrix Brandfield-Harvey, BS; Zoe Jonassen, PhD; Soumik Mandal, PhD; Elizabeth R. Stevens, MPH, PhD; Vincent J. Major, PhD; Erin Lostraglio; Adam C. Szerencsy, DO; Simon A. Jones, PhD; Yindalon Aphinyanaphongs, MD, PhD; and Stephen B. Johnson, PhD. Additional authors were Oded Nov, MSc, PhD, of the NYU Tandon School of Engineering, and Batia Mishan Wiesenfeld, PhD, of the NYU Stern School of Business. The study was funded by National Science Foundation grants 1928614 and 2129076 and Swiss National Science Foundation grants P500PS_202955 and P5R5PS_217714.
[2]
Study shows AI tool successfully responds to patient questions in electronic health record
As part of a nationwide trend, many more of NYU Langone Health's patients during the pandemic started using electronic health record tools to ask their doctors questions, refill prescriptions, and review test results. Many patients' digital inquiries arrived via a communications tool called In Basket, which is built into NYU Langone's electronic health record (EHR) system, EPIC.

While physicians have always dedicated time to managing EHR messages, in recent years they saw a more than 30% annual increase in the number of messages received daily, according to an article by Paul A. Testa, MD, chief medical information officer at NYU Langone. Testa wrote that it is not uncommon for physicians to receive more than 150 In Basket messages per day. With health systems not designed to handle this kind of traffic, physicians ended up filling the gap, spending long hours after work sifting through messages. This burden is cited as a reason that half of physicians report burnout.

Now a new study, led by researchers at NYU Grossman School of Medicine, shows that an AI tool can draft responses to patients' EHR queries as accurately as their human health care professionals, and with greater perceived "empathy." The findings highlight these tools' potential to dramatically reduce physicians' In Basket burden while improving their communication with patients, as long as human providers review AI drafts before they are sent.

NYU Langone Health has been testing the capabilities of generative artificial intelligence (genAI), in which computer algorithms develop likely options for the next word in any sentence based on how people have used words in context on the internet. A result of this next-word prediction is that genAI "chatbots" can reply to questions in convincing, human-like language.
NYU Langone in 2023 licensed "a private instance" of GPT-4, the latest relative of the famous ChatGPT chatbot, which lets physicians experiment using real patient data while still adhering to data privacy rules. Published online July 16 in JAMA Network Open, the new study examined GPT-4-generated drafts to patient In Basket queries, and had primary care physicians compare them to the actual human responses to those messages.

"Our results suggest that chatbots could reduce the workload of care providers by enabling efficient and empathetic responses to patients' concerns," said lead study author William Small, MD, a clinical assistant professor in the Department of Medicine at NYU Grossman School of Medicine. "We found that EHR-integrated AI chatbots that use patient-specific data can draft messages similar in quality to human providers."

For the study, 16 primary care physicians rated 344 randomly assigned pairs of AI and human responses to patient messages on accuracy, relevance, completeness, and tone, and indicated whether they would use the AI response as a first draft or have to start from scratch in writing the patient message. The physicians did not know whether the responses they were reviewing were generated by humans or by the AI tool (blinded study).

The research team found that the accuracy, completeness, and relevance of generative AI and human providers' responses did not differ statistically. Generative AI responses outperformed human providers in terms of understandability and tone by 9.5%. Further, the AI responses were more than twice as likely (125% more likely) to be considered empathetic and 62% more likely to use language that conveyed positivity (potentially related to hopefulness) and affiliation ("we are in this together"). On the other hand, AI responses were also 38% longer and 31% more likely to use complex language, so further training of the tool is needed, the researchers say.
While humans responded to patient queries at a 6th-grade level, the AI wrote at an 8th-grade level, according to a standard measure of readability called the Flesch-Kincaid score.

The researchers argued that the use of private patient information by chatbots, rather than general internet information, better approximates how this technology would be used in the real world. Future studies will be needed to confirm whether private data specifically improved AI tool performance.

"This work demonstrates that the AI tool can build high-quality draft responses to patient requests," said corresponding author Devin Mann, MD, senior director of Informatics Innovation in NYU Langone Medical Center Information Technology (MCIT). "With this physician approval in place, GenAI message quality will be equal in the near future in quality, communication style, and usability to responses generated by humans," added Mann, also a professor in the Departments of Population Health and Medicine.
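The Flesch-Kincaid readability measure cited above is a published formula over sentence length and syllable counts: grade level = 0.39 × (words/sentences) + 11.8 × (syllables/words) − 15.59. A minimal sketch follows; the syllable counter is a rough vowel-group heuristic, not the tool the study used.

```python
# Sketch of the Flesch-Kincaid grade-level formula. The syllable count is a
# common approximation (count groups of consecutive vowels), so results are
# close to, but not identical to, dictionary-based readability tools.
import re


def count_syllables(word: str) -> int:
    """Approximate syllables as runs of consecutive vowels (min 1)."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))


def flesch_kincaid_grade(text: str) -> float:
    """0.39*(words/sentences) + 11.8*(syllables/words) - 15.59."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words))
            - 15.59)
```

Longer sentences and longer words both push the score up, which is why the AI's 38% longer, more complex replies landed about two grade levels above the human responses.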
A new AI tool has been developed to accurately draft responses to patient queries in Electronic Health Records (EHRs), potentially streamlining healthcare communication and improving patient care.
In a groundbreaking development, researchers have created an artificial intelligence (AI) tool capable of drafting accurate responses to patient queries within Electronic Health Records (EHRs). This innovation promises to revolutionize healthcare communication, potentially enhancing patient care and reducing the workload on medical professionals 1.
The AI tool, a private instance of GPT-4 integrated into the EHR, demonstrated strong performance in generating draft responses to patient messages. In a blinded evaluation, the drafts did not differ statistically from human providers' responses in accuracy, completeness, or relevance, and were more often perceived as empathetic 2.
To assess the tool's performance, the research team had 16 primary care physicians rate 344 randomly assigned pairs of AI and human responses to patient messages on accuracy, relevance, completeness, and tone. The reviewers did not know which responses were AI-generated, and they also indicated whether each AI draft was usable as a starting point or would need to be rewritten from scratch 1.
The implementation of this AI tool could have far-reaching implications for healthcare delivery:
Improved Efficiency: By automating the drafting of responses to patient queries, the tool could significantly reduce the time healthcare providers spend on administrative tasks 2.
Enhanced Patient Care: Faster response times to patient inquiries could lead to improved patient satisfaction and potentially better health outcomes 1.
Reduced Burnout: Alleviating the burden of responding to routine patient messages could help combat physician burnout, a growing concern in the healthcare industry 2.
While the results are promising, the researchers emphasize that the AI tool is designed to assist healthcare providers, not replace them. The generated responses still require human review and approval before being sent to patients, ensuring that the final communication maintains the necessary level of personalization and accuracy 1.
As this technology continues to evolve, it will be crucial to address potential challenges, such as maintaining patient privacy, ensuring the tool's ability to handle complex medical situations, and integrating it seamlessly into existing EHR systems. Nevertheless, the successful development of this AI tool marks a significant step forward in the ongoing efforts to leverage technology for improving healthcare delivery and patient communication.
Reference
[1] NYU Langone: "AI tool can accurately draft responses to patients' EHR queries"
[2] Medical Xpress - Medical and Health News: "Study shows AI tool successfully responds to patient questions in electronic health record"
The Outpost is a comprehensive collection of curated artificial intelligence software tools that cater to the needs of small business owners, bloggers, artists, musicians, entrepreneurs, marketers, writers, and researchers.
© 2024 TheOutpost.AI All rights reserved