Curated by THEOUTPOST
On Wed, 16 Oct, 8:05 AM UTC
2 Sources
[1]
Training medical image analysis AI with knowledge, not shortcuts
When human radiologists examine scans, they peer through the lens of decades of training. Extending from college to medical school to residency, the road that ends with a physician interpreting, say, an X-ray includes thousands upon thousands of hours of education, both academic and practical, from studying for licensing exams to spending years as a resident.

At present, the training pathway for artificial intelligence (AI) to interpret medical images is much more straightforward: show the AI medical images labeled with features of interest, like cancerous lesions, in large enough quantities for the system to identify patterns that allow it to "see" those features in unlabeled images.

Despite more than 14,000 academic papers having been published on AI and radiology in the last decade, the results are middling at best. In 2018, researchers at Stanford realized that an AI they trained to identify skin lesions erroneously flagged images that contained rulers, because most of the images of malignant lesions also had rulers in them.

"Neural networks easily overfit on spurious correlations," says Mark Yatskar, Assistant Professor in Computer and Information Science (CIS), referring to the AI architecture that emulates biological neurons and powers tools as varied as ChatGPT and image-recognition software. "Instead of how a human makes the decisions, it will take shortcuts."

In a new paper, to be presented as a spotlight at NeurIPS 2024, Yatskar, together with Chris Callison-Burch, Professor in CIS, and first author Yue Yang, a doctoral student advised by Callison-Burch and Yatskar, introduces a novel means of developing neural networks for medical image recognition by emulating the training pathway of human physicians. The paper is published on the arXiv preprint server.

"Generally, with AI systems, the procedure is to throw a lot of data at the AI system, and it figures it out," says Yatskar. "This is actually very unlike how humans learn -- a physician has a multi-step process for their education."

The team's new method effectively takes AI to medical school by providing a set body of medical knowledge culled from textbooks, from PubMed, the academic database of the National Library of Medicine, and from StatPearls, an online company that provides practice exam questions for medical practitioners.

"Doctors spend years in medical school learning from textbooks and in classrooms before they begin their clinical training in earnest," points out Yatskar. "We're trying to mirror that process."

The new approach, dubbed Knowledge-enhanced Bottlenecks (KnoBo), essentially requires that the AI base its decisions on established medical knowledge. "When reading an X-ray, medical students and doctors ask, is the lung clear, is the heart a normal size," Yang says. "The model will rely on similar factors to the ones humans use when making a decision."

The upshot is that models trained using KnoBo are not only more accurate than the current best-in-class models at tasks like identifying COVID patients from lung X-rays, they are also more interpretable: clinicians can understand why the model made a particular decision. "You will know why the system predicts this X-ray is a COVID patient -- because it has opacity in the lung," says Yang.

Models trained with KnoBo are also more robust, able to handle some of the messiness of real-world data.
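For readers who want a concrete picture of the "bottleneck" idea, the minimal PyTorch sketch below shows how a classifier can be forced to route every prediction through human-readable factors. It illustrates the general concept-bottleneck technique the paper builds on, not the authors' actual KnoBo code: the concept questions, the stand-in encoder, and the class count are all placeholder assumptions.

```python
# Minimal sketch of a concept-bottleneck classifier (illustrative only;
# the concept list and encoder are placeholders, not the KnoBo implementation).
import torch
import torch.nn as nn

# Interpretable factors a radiologist might check, phrased as yes/no questions.
CONCEPTS = [
    "Is the lung field clear?",
    "Is there opacity in the lung?",
    "Is the heart a normal size?",
    "Is there pleural effusion?",
]

class ConceptBottleneckClassifier(nn.Module):
    """Image -> concept scores -> diagnosis, so every prediction can be
    traced back to human-readable factors."""

    def __init__(self, image_encoder: nn.Module, embed_dim: int, num_classes: int):
        super().__init__()
        self.image_encoder = image_encoder                # any image backbone
        # One score per concept: the "bottleneck" the decision must pass through.
        self.concept_head = nn.Linear(embed_dim, len(CONCEPTS))
        # The final classifier sees only concept scores, never raw image features.
        self.label_head = nn.Linear(len(CONCEPTS), num_classes)

    def forward(self, images: torch.Tensor):
        features = self.image_encoder(images)
        concept_scores = torch.sigmoid(self.concept_head(features))
        logits = self.label_head(concept_scores)
        return logits, concept_scores                     # scores explain the prediction

# Toy usage with a stand-in encoder (a real system would use a pretrained backbone).
encoder = nn.Sequential(nn.Flatten(), nn.Linear(224 * 224, 128))
model = ConceptBottleneckClassifier(encoder, embed_dim=128, num_classes=2)
logits, concepts = model(torch.randn(1, 1, 224, 224))
for question, score in zip(CONCEPTS, concepts[0].tolist()):
    print(f"{question} -> {score:.2f}")
```

Because the final decision only ever sees the concept scores, a clinician can read those scores directly, which is where the interpretability described above comes from.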
One of the greatest assets of human doctors is that you can place them in many different contexts -- with different hospitals and different patient populations -- and expect their skills to transfer. In contrast, AI systems trained on a particular group of patients from a particular hospital rarely work well in different contexts.

To assess how well KnoBo helps models focus on salient information, the researchers tested a wide range of neural networks on "confounded" data sets: in essence, they trained the models on one set of patients where, say, all sick patients were white and all healthy patients Black, and then tested the models on patients with the opposite characteristics.

"The previous methods fail catastrophically," says Yang. "Using our way, we constrain the model to reasoning over those knowledge priors we learn from medical documents." Even on confounded data, models trained using KnoBo averaged 32.4% greater accuracy than neural networks fine-tuned on medical images.

Given that the Association of American Medical Colleges (AAMC) projects a shortage of 80,000 physicians in the United States alone by 2036, the researchers hope their work will open the door to the safe application of AI in medicine. "You could really make an impact in terms of getting people help that otherwise they couldn't get because there aren't people appropriately qualified to give that help," says Yatskar.
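The confounded evaluation can be pictured as a deliberately adversarial data split. The sketch below uses invented record fields rather than the study's actual benchmarks; it only shows the construction: a spurious attribute tracks the label perfectly during training and is reversed at test time, so any model leaning on the shortcut rather than medical evidence is penalized.

```python
# Toy sketch of a "confounded" train/test split (hypothetical record fields;
# not the paper's benchmark construction).
import random

def make_confounded_split(records, label_key="sick", attr_key="group"):
    """Training set: the spurious attribute tracks the label perfectly.
    Test set: the correlation is reversed, so a model relying on the
    attribute instead of medical evidence fails."""
    train, test = [], []
    for r in records:
        aligned = (r[label_key] == 1 and r[attr_key] == "A") or \
                  (r[label_key] == 0 and r[attr_key] == "B")
        (train if aligned else test).append(r)
    return train, test

# Invented records: the label is "sick", the spurious attribute is patient "group".
records = [{"sick": random.randint(0, 1), "group": random.choice("AB")}
           for _ in range(1000)]
train, test = make_confounded_split(records)
print(len(train), "training records where group membership predicts the label")
print(len(test), "test records where that shortcut points the wrong way")
```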
[2]
Revolutionizing AI training by emulating physician education
University of Pennsylvania School of Engineering and Applied Science | Oct 15, 2024

When human radiologists examine scans, they peer through the lens of decades of training. Extending from college to medical school to residency, the road that ends with a physician interpreting, say, an X-ray includes thousands upon thousands of hours of education, both academic and practical, from studying for licensing exams to spending years as a resident.

At present, the training pathway for artificial intelligence (AI) to interpret medical images is much more straightforward: show the AI medical images labeled with features of interest, like cancerous lesions, in large enough quantities for the system to identify patterns that allow it to "see" those features in unlabeled images.

Despite more than 14,000 academic papers having been published on AI and radiology in the last decade, the results are middling at best. In 2018, researchers at Stanford realized that an AI they trained to identify skin lesions erroneously flagged images that contained rulers, because most of the images of malignant lesions also had rulers in them.

"Neural networks easily overfit on spurious correlations," says Mark Yatskar, Assistant Professor in Computer and Information Science (CIS), referring to the AI architecture that emulates biological neurons and powers tools as varied as ChatGPT and image-recognition software. "Instead of how a human makes the decisions, it will take shortcuts."

In a new paper, to be presented as a spotlight at NeurIPS 2024, Yatskar, together with Chris Callison-Burch, Professor in CIS, and first author Yue Yang, a doctoral student advised by Callison-Burch and Yatskar, introduces a novel means of developing neural networks for medical image recognition by emulating the training pathway of human physicians.

"Generally, with AI systems, the procedure is to throw a lot of data at the AI system, and it figures it out. This is actually very unlike how humans learn -- a physician has a multi-step process for their education." -- Mark Yatskar, Assistant Professor in Computer and Information Science (CIS)

The team's new method effectively takes AI to medical school by providing a set body of medical knowledge culled from textbooks, from PubMed, the academic database of the National Library of Medicine, and from StatPearls, an online company that provides practice exam questions for medical practitioners.

"Doctors spend years in medical school learning from textbooks and in classrooms before they begin their clinical training in earnest," points out Yatskar. "We're trying to mirror that process."

The new approach, dubbed Knowledge-enhanced Bottlenecks (KnoBo), essentially requires that the AI base its decisions on established medical knowledge. "When reading an X-ray, medical students and doctors ask, is the lung clear, is the heart a normal size," Yang says. "The model will rely on similar factors to the ones humans use when making a decision."

The upshot is that models trained using KnoBo are not only more accurate than the current best-in-class models at tasks like identifying COVID patients from lung X-rays, they are also more interpretable: clinicians can understand why the model made a particular decision. "You will know why the system predicts this X-ray is a COVID patient -- because it has opacity in the lung," says Yang.

Models trained with KnoBo are also more robust, able to handle some of the messiness of real-world data.

One of the greatest assets of human doctors is that you can place them in many different contexts -- with different hospitals and different patient populations -- and expect their skills to transfer. In contrast, AI systems trained on a particular group of patients from a particular hospital rarely work well in different contexts.

To assess how well KnoBo helps models focus on salient information, the researchers tested a wide range of neural networks on "confounded" data sets: in essence, they trained the models on one set of patients where, say, all sick patients were white and all healthy patients Black, and then tested the models on patients with the opposite characteristics.

"The previous methods fail catastrophically," says Yang. "Using our way, we constrain the model to reasoning over those knowledge priors we learn from medical documents." Even on confounded data, models trained using KnoBo averaged 32.4% greater accuracy than neural networks fine-tuned on medical images.

Given that the Association of American Medical Colleges (AAMC) projects a shortage of 80,000 physicians in the United States alone by 2036, the researchers hope their work will open the door to the safe application of AI in medicine. "You could really make an impact in terms of getting people help that otherwise they couldn't get because there aren't people appropriately qualified to give that help," says Yatskar.

Source: University of Pennsylvania School of Engineering and Applied Science
A team from the University of Pennsylvania has introduced a novel AI training approach called Knowledge-enhanced Bottlenecks (KnoBo) that emulates the education pathway of human physicians for medical image analysis, potentially improving accuracy and interpretability in AI-assisted diagnostics.
Researchers from the University of Pennsylvania's School of Engineering and Applied Science have developed a groundbreaking method for training artificial intelligence (AI) in medical image analysis. This innovative approach, called Knowledge-enhanced Bottlenecks (KnoBo), aims to mirror the extensive education process of human physicians, potentially revolutionizing AI applications in healthcare [1][2].
Despite the publication of over 14,000 academic papers on AI and radiology in the past decade, the results have been less than satisfactory. A notable example of AI's shortcomings occurred in 2018, when Stanford researchers discovered that an AI trained to identify skin lesions was erroneously flagging images containing rulers, as most images of malignant lesions included rulers [1].
Mark Yatskar, Assistant Professor in Computer and Information Science (CIS) at the University of Pennsylvania, explains, "Neural networks easily overfit on spurious correlations. Instead of how a human makes the decisions, it will take shortcuts" [1].
The KnoBo method effectively takes AI through a medical school-like training process. It provides a comprehensive body of medical knowledge sourced from textbooks, PubMed (the National Library of Medicine's academic database), and StatPearls, an online platform offering practice exam questions for medical practitioners [1][2].
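How that document knowledge might become a usable "bottleneck" is easiest to see with a small sketch. The example below is purely illustrative: ask_llm is a placeholder for whatever language model such a pipeline would call, and the snippet and questions are invented, not drawn from the paper's actual pipeline.

```python
# Hypothetical sketch: turning medical-document snippets into yes/no concept
# questions that an image model could later be constrained to answer.
def ask_llm(prompt: str) -> str:
    # Placeholder for a real language-model call; returns canned questions here.
    return "Is the lung field clear?\nIs there opacity in the lung?"

def concepts_from_documents(snippets):
    questions = set()
    for snippet in snippets:
        prompt = (
            "From the passage below, list yes/no questions a radiologist "
            "would ask when reading a chest X-ray.\n\n" + snippet
        )
        # Each document contributes candidate diagnostic factors; duplicates collapse.
        questions.update(q.strip() for q in ask_llm(prompt).splitlines() if q.strip())
    return sorted(questions)

snippets = ["Consolidation appears as an area of increased opacity that obscures vessels."]
print(concepts_from_documents(snippets))
```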
Yatskar elaborates, "Doctors spend years in medical school learning from textbooks and in classrooms before they begin their clinical training in earnest. We're trying to mirror that process" [1].
Models trained using KnoBo have demonstrated superior performance on tasks such as identifying COVID-19 patients from lung X-rays compared to current best-in-class models. Importantly, these models are also more interpretable, allowing clinicians to understand the reasoning behind AI decisions [1][2].
Yue Yang, the first author of the study, explains, "You will know why the system predicts this X-ray is a COVID patient -- because it has opacity in the lung" [1].
KnoBo-trained models have also shown improved robustness in handling diverse real-world data. To test this, the researchers evaluated various neural networks on "confounded" datasets, in which the training and testing data had opposing characteristics. In these challenging scenarios, KnoBo-trained models averaged 32.4% greater accuracy than neural networks fine-tuned on medical images [1][2].
With the Association of American Medical Colleges (AAMC) projecting a shortage of 80,000 physicians in the United States by 2036, the researchers hope their work will pave the way for the safe and effective application of AI in medicine [1][2].
Yatskar concludes, "You could really make an impact in terms of getting people help that otherwise they couldn't get because there aren't people appropriately qualified to give that help" [1].
The team's research will be presented as a spotlight paper at NeurIPS 2024, highlighting its significance in the field of AI and medical imaging [1][2].
References
[1] Medical Xpress - Medical and Health News | Training medical image analysis AI with knowledge, not shortcuts
[2] University of Pennsylvania School of Engineering and Applied Science | Revolutionizing AI training by emulating physician education