2 Sources
[1]
New large language model helps patients understand their radiology reports
"RadGPT" cuts through medical jargon to answer common patient questions.

Imagine getting an MRI of your knee and being told you have "mild intrasubstance degeneration of the posterior horn of the medial meniscus." Chances are, most of us who didn't go to medical school cannot decipher that jargon or tell what, if anything, is actionable about the diagnosis. That's why Stanford radiologists developed a large language model to help address patients' medical concerns and questions about X-rays, CTs, MRIs, ultrasounds, PET scans, and angiograms. Using this model, a patient getting a knee MRI could get a simpler, more useful explanation: the meniscus is a tissue in your knee that serves as a cushion, and, like a pillow, it has gone a little flat but can still function.

This LLM, dubbed "RadGPT," can extract concepts from a radiologist's report, explain each concept, and suggest possible follow-up questions. The research was published this month in the Journal of the American College of Radiology.

Traditionally, medical expertise is needed to understand the technical reports radiologists write about patient scans, said Curtis Langlotz, Stanford professor of radiology, of medicine, and of biomedical data science, senior fellow at the Stanford Institute for Human-Centered AI (HAI), and senior author of the study. "We hope that our technology won't just help to explain the results, but will also help to improve the communication between doctor and patient."

Since 2021, under the 21st Century Cures Act, patients in the United States have had a federally protected right to electronic access to their own radiology reports. Tools like RadGPT could get patients more engaged in their care, Langlotz believes, because they can better understand what their test results actually mean. "Doctors don't always have the time to go through and explain reports, line by line," Langlotz said.
"I think patients who really do understand what's in their medical record are going to get better care and will ask better questions."

To develop RadGPT, the Stanford team took 30 sample radiology reports and extracted five concepts from each. For those 150 concepts, they generated explanations along with three question-and-answer pairs that patients might commonly ask. Five radiologists who reviewed these materials judged the system unlikely to produce hallucinations or other harmful explanations.

AI is still a long way from being able to accurately interpret raw scans. Instead, the current RadGPT model depends on a human radiologist dictating a report; only then does the system extract concepts from what they have written.

"As with any other healthcare technology, safety is absolutely paramount," said Sanna Herwald, the study's lead author and a Stanford resident in graduate medical education. "The reason this study is so exciting is because the RadGPT-generated materials were generally deemed safe without further modification. This means that RadGPT is a promising tool that may, after further testing and validation, directly educate patients about their urgent or incidental imaging findings in real time at the patient's convenience."

While this LLM still has to be tested in a clinical setting, Langlotz believes the LLMs underpinning this technology will benefit not only patients, who get answers to common medical questions, but also radiologists, who can be more productive or take breaks to reduce burnout. "If you look at self-reports of cognitive load - the amount of work your brain is doing throughout a day - radiology is right at the top of that list."
[2]
New large language model helps patients understand their radiology reports
Stanford researchers have developed RadGPT, a large language model that translates complex radiology reports into easy-to-understand explanations for patients, potentially improving doctor-patient communication and patient engagement in healthcare.
Stanford University researchers have introduced RadGPT, a large language model designed to bridge the gap between complex medical jargon and patient understanding in radiology reports. The tool aims to enhance doctor-patient communication and empower patients to better comprehend their medical test results.[1][2]
Radiology reports often contain technical terms that are difficult for patients to decipher. For instance, a diagnosis of "mild intrasubstance degeneration of the posterior horn of the medial meniscus" in a knee MRI report can be confusing for anyone without medical training. RadGPT addresses this by offering simpler explanations, such as comparing the knee's meniscus to a cushion that has gone slightly flat but remains functional.[1][2]
Source: Medical Xpress
The AI model extracts key concepts from radiologists' reports and generates easy-to-understand explanations along with potential follow-up questions. This helps patients grasp the meaning of their test results and encourages more informed discussions with their healthcare providers.[1][2]
To create RadGPT, the Stanford team analyzed 30 sample radiology reports, extracting five concepts from each (150 in total) and developing an explanation and three question-and-answer pairs for each concept. Five radiologists evaluated the system's output and judged it unlikely to produce harmful or inaccurate explanations.[1][2]
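The workflow described above (take a finished radiology report, extract key concepts, then generate a lay explanation and three question-and-answer pairs per concept) can be sketched in code. This is purely an illustrative assumption: the study's actual prompts, model, and implementation are not published in this article, and the `llm` callable below stands in for any text-completion API.

```python
# Hypothetical sketch of a RadGPT-style pipeline; not the authors' code.
# `llm` is any function that maps a prompt string to a completion string.
from dataclasses import dataclass

@dataclass
class ConceptExplanation:
    concept: str
    explanation: str          # plain-language explanation of the finding
    qa_pairs: list            # three (question, answer) tuples per concept

def extract_concepts(report_text, llm, n_concepts=5):
    """Ask the model for the key findings in a dictated radiology report."""
    prompt = (
        f"List the {n_concepts} most important findings in this radiology "
        f"report, one per line:\n\n{report_text}"
    )
    return llm(prompt).strip().splitlines()[:n_concepts]

def explain_concept(concept, llm, n_questions=3):
    """Generate a lay explanation plus likely patient follow-up Q&A pairs."""
    explanation = llm(
        f"Explain this radiology finding in plain language for a patient: {concept}"
    )
    qa_pairs = []
    for i in range(n_questions):
        question = llm(f"Write follow-up question #{i + 1} a patient might ask about: {concept}")
        answer = llm(f"Answer in plain language for a patient: {question}")
        qa_pairs.append((question, answer))
    return ConceptExplanation(concept, explanation, qa_pairs)

def process_report(report_text, llm):
    """Full pipeline: concepts -> explanations with Q&A, as in the study's setup."""
    return [explain_concept(c, llm) for c in extract_concepts(report_text, llm)]
```

With five concepts per report, running this over the study's 30 sample reports would yield the 150 concepts described above, each paired with an explanation and three Q&A pairs.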
Curtis Langlotz, a Stanford professor and the study's senior author, believes that tools like RadGPT could significantly improve patients' engagement in their own care. With the 21st Century Cures Act granting patients electronic access to their radiology reports since 2021, RadGPT could play a crucial role in helping patients understand and act on their medical information.[1][2]
While RadGPT shows promise, it still relies on human radiologists to generate the initial reports; the AI cannot yet interpret raw scans independently. Researchers are nonetheless optimistic about its potential to enhance patient education and reduce the cognitive load on radiologists.[1][2]
Source: Stanford News
Before widespread implementation, RadGPT will undergo further testing in clinical settings. Sanna Herwald, the study's lead author, emphasizes the importance of safety in healthcare technology and expresses excitement about RadGPT's potential to educate patients about their imaging findings in real time.[1][2]
As AI continues to evolve in the medical field, tools like RadGPT represent a significant step towards more accessible and understandable healthcare information, potentially leading to better patient outcomes and more efficient medical practices.
Summarized by Navi