New large language model helps patients understand their radiology reports
"RadGPT" cuts through medical jargon to answer common patient questions. Imagine getting an MRI of your knees and being told you have "mild intrasubstance degeneration of the posterior horn of the medial meniscus." Chances are, most of us who didn't go to medical school are not going to be able to decipher that jargon as anything meaningful or understand what is actionable from that diagnosis. That's why Stanford radiologists developed a large language model to help address patients' medical concerns and questions about X-rays, CTs, MRIs, ultrasounds, PET scans, and angiograms. Using this model, a patient getting a knee MRI could get a more useful and simple explanation: Your knee's meniscus is a tissue in your knee that serves as a cushion, and, like a pillow, the meniscus has gone a little flat but still can function. This LLM - dubbed "RadGPT" - can extract concepts from a radiologist's report to then provide an explanation of that concept and suggest possible follow-up questions. The research was published this month in the Journal of the American College of Radiology. Traditionally, medical expertise is needed to understand the technical reports radiologists write about patient scans, said Curtis Langlotz, Stanford professor of radiology, of medicine, and of biomedical data science, senior fellow at the Stanford Institute for Human-Centered AI (HAI), and senior author of the study. "We hope that our technology won't just help to explain the results, but will also help to improve the communication between doctor and patient." Since 2021, under the 21st Century Cures Act, patients in the United States have had federal protection to get electronic access to their own radiology reports. But tools like RadGPT could get patients more engaged in their care, Langlotz believes, because they can better understand what their test results actually mean. "Doctors don't always have the time to go through and explain reports, line by line," Langlotz said. "I think patients who really do understand what's in their medical record are going to get better care and will ask better questions." To develop RadGPT, the Stanford team took 30 sample radiology reports and extracted five concepts from each report. With those 150 concepts, they developed explanations for them and three question-and-answer pairs that patients might commonly ask. Five radiologists who reviewed these explanations determined that the system is unlikely to produce hallucinations or other harmful explanations. AI is still a ways away from being able to accurately interpret raw scans. Instead, the current RadGPT model depends on a human radiologist dictating a report, and only then will the system extract concepts from what they have written. "As with any other healthcare technology, safety is absolutely paramount," said Sanna Herwald, the study's lead author and a Stanford resident in graduate medical education. "The reason this study is so exciting is because the RadGPT-generated materials were generally deemed safe without further modification. This means that RadGPT is a promising tool that may, after further testing and validation, directly educate patients about their urgent or incidental imaging findings in real time at the patient's convenience." 
While this LLM still has to be tested in a clinical setting, Langlotz believes the LLMs that are the underpinnings of this technology will not only benefit patients in getting answers to common medical questions but also radiologists, who can either be more productive or be able to take breaks to reduce burnout. "If you look at self-reports of cognitive load - the amount of work your brain is doing throughout a day - radiology is right at the top of that list."
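The workflow the article describes -- a finished radiology report goes in, key concepts come out, and each concept gets a plain-language explanation plus three likely question-and-answer pairs -- can be sketched in code. The example below is a minimal illustration only, assuming a generic OpenAI-style chat-completions API with placeholder model names and prompts; the article does not disclose RadGPT's actual model, prompts, or implementation, and any generated material would still need radiologist review before reaching patients.

```python
# Illustrative sketch only -- NOT the published RadGPT system.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment;
# model name and prompts are placeholders, not RadGPT's.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # placeholder model name

def ask(system_prompt: str, user_prompt: str) -> str:
    """Send one chat-completion request and return the reply text."""
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
    )
    return (resp.choices[0].message.content or "").strip()

def extract_concepts(report: str, n: int = 5) -> list[str]:
    """Step 1: pull the key findings (concepts) out of a radiologist's report."""
    reply = ask(
        "You extract key clinical concepts from radiology reports.",
        f"List the {n} most important findings in this report, one per line:\n\n{report}",
    )
    lines = [line.strip("-* ").strip() for line in reply.splitlines() if line.strip()]
    return lines[:n]

def explain_concept(concept: str) -> dict:
    """Step 2: plain-language explanation plus three likely patient Q&A pairs."""
    explanation = ask(
        "You explain radiology findings to patients in plain, non-alarming language.",
        f"Explain this finding at an eighth-grade reading level: {concept}",
    )
    qa = ask(
        "You anticipate questions patients ask about imaging findings.",
        f"Write three short question-and-answer pairs a patient might ask about: {concept}",
    )
    return {"concept": concept, "explanation": explanation, "questions": qa}

if __name__ == "__main__":
    report = (
        "MRI right knee: Mild intrasubstance degeneration of the posterior horn "
        "of the medial meniscus. No discrete tear. Ligaments intact."
    )
    # In the study, outputs like these were reviewed by radiologists for safety.
    for concept in extract_concepts(report):
        item = explain_concept(concept)
        print(item["concept"], "\n", item["explanation"], "\n", item["questions"], "\n")
```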
Stanford researchers have developed RadGPT, a large language model that translates complex radiology reports into easy-to-understand explanations for patients, potentially improving doctor-patient communication and patient engagement in healthcare.
Stanford University researchers have introduced RadGPT, a large language model designed to bridge the gap between complex medical jargon and patient understanding in radiology reports. The tool aims to improve doctor-patient communication and help patients better comprehend their medical test results.

Radiology reports often contain technical terms that are difficult for patients to decipher. For instance, a diagnosis of "mild intrasubstance degeneration of the posterior horn of the medial meniscus" in a knee MRI report can be confusing for those without medical training. RadGPT addresses this by providing simpler explanations, such as comparing the knee's meniscus to a cushion that has gone slightly flat but remains functional.

Source: Medical Xpress

The model extracts key concepts from radiologists' reports and generates easy-to-understand explanations along with potential follow-up questions. This helps patients grasp the meaning of their test results and encourages more informed discussions with their healthcare providers.

To create RadGPT, the Stanford team analyzed 30 sample radiology reports, extracting five concepts from each (150 in total) and developing an explanation and three question-and-answer pairs for each concept. The system's safety was evaluated by five radiologists, who determined that RadGPT is unlikely to produce harmful or inaccurate explanations.

Curtis Langlotz, a Stanford professor and senior author of the study, believes that tools like RadGPT could significantly improve patient engagement in their care. With the 21st Century Cures Act granting patients electronic access to their radiology reports since 2021, RadGPT could play an important role in helping patients understand and act upon their medical information.

While RadGPT shows promise, it still relies on human radiologists to generate the initial reports; the AI cannot yet interpret raw scans independently. Researchers are nevertheless optimistic about its potential to enhance patient education and reduce the cognitive load on radiologists.

Source: Stanford News

Before widespread implementation, RadGPT will undergo further testing in clinical settings. Sanna Herwald, the study's lead author, emphasizes the importance of safety in healthcare technology and is excited about RadGPT's potential to educate patients about their imaging findings in real time.

As AI continues to evolve in medicine, tools like RadGPT represent a significant step toward more accessible and understandable healthcare information, potentially leading to better patient outcomes and more efficient medical practice.
Summarized by Navi