Curated by THEOUTPOST
On Tue, 10 Sept, 12:06 AM UTC
3 Sources
[1]
A fast and flexible approach to help doctors annotate medical scans
To the untrained eye, a medical image like an MRI or X-ray appears to be a murky collection of black-and-white blobs. It can be a struggle to decipher where one structure (like a tumor) ends and another begins.

When trained to understand the boundaries of biological structures, AI systems can segment (or delineate) regions of interest that doctors and biomedical workers want to monitor for diseases and other abnormalities. Instead of losing precious time tracing anatomy by hand across many images, an artificial assistant could do that for them.

The catch? Researchers and clinicians must label countless images to train their AI system before it can accurately segment. For example, you'd need to annotate the cerebral cortex in numerous MRI scans to train a supervised model to understand how the cortex's shape can vary in different brains.

Sidestepping such tedious data collection, researchers from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL), Massachusetts General Hospital (MGH), and Harvard Medical School have developed the interactive "ScribblePrompt" framework: a flexible tool that can help rapidly segment any medical image, even types it hasn't seen before.

Instead of having humans mark up each picture manually, the team simulated how users would annotate over 50,000 scans, including MRIs, ultrasounds, and photographs, across structures in the eyes, cells, brains, bones, skin, and more. To label all those scans, the team used algorithms to simulate how humans would scribble and click on different regions in medical images. In addition to commonly labeled regions, the team also used superpixel algorithms, which find parts of the image with similar values, to identify potential new regions of interest to medical researchers and train ScribblePrompt to segment them. This synthetic data prepared ScribblePrompt to handle real-world segmentation requests from users.
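The superpixel idea can be pictured with a short sketch. The toy example below is illustrative only, not the authors' code: it groups connected pixels of similar (quantized) intensity into candidate regions, the kind of previously unlabeled structures the article says were used as extra synthetic segmentation targets.

```python
# Toy sketch (not the authors' algorithm): find connected regions of
# similar pixel values, each of which could serve as a candidate
# segmentation target during training.
from collections import deque

def similar_value_regions(image, n_bins=4):
    """Group pixels into connected regions of similar (quantized) value."""
    h, w = len(image), len(image[0])
    lo = min(min(row) for row in image)
    hi = max(max(row) for row in image)
    span = (hi - lo) or 1
    # Quantize each pixel into one of n_bins intensity bands.
    bins = [[min(int((v - lo) / span * n_bins), n_bins - 1) for v in row]
            for row in image]
    labels = [[-1] * w for _ in range(h)]
    regions = 0
    for sy in range(h):
        for sx in range(w):
            if labels[sy][sx] != -1:
                continue
            # Flood-fill one connected region within a single band.
            band, queue = bins[sy][sx], deque([(sy, sx)])
            labels[sy][sx] = regions
            while queue:
                y, x = queue.popleft()
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if (0 <= ny < h and 0 <= nx < w
                            and labels[ny][nx] == -1 and bins[ny][nx] == band):
                        labels[ny][nx] = regions
                        queue.append((ny, nx))
            regions += 1
    return labels, regions

# A tiny stand-in for a grayscale scan: a dark band and a bright band.
image = [[0.1, 0.1, 0.9],
         [0.1, 0.9, 0.9],
         [0.1, 0.9, 0.9]]
labels, n = similar_value_regions(image)
print(n)  # two intensity bands -> two connected regions here
```

Production superpixel methods such as SLIC are considerably more sophisticated, but the principle is the same: pixels with similar values are grouped into coherent regions without any human labels.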
"AI has significant potential in analyzing images and other high-dimensional data to help humans do things more productively," says MIT PhD student Hallee Wong SM '22, the lead author on a new paper about ScribblePrompt and a CSAIL affiliate. "We want to augment, not replace, the efforts of medical workers through an interactive system. ScribblePrompt is a simple model with the efficiency to help doctors focus on the more interesting parts of their analysis. It's faster and more accurate than comparable interactive segmentation methods, reducing annotation time by 28 percent compared to Meta's Segment Anything Model (SAM) framework, for example."

ScribblePrompt's interface is simple: Users can scribble across the rough area they'd like segmented, or click on it, and the tool will highlight the entire structure or background as requested. For example, you can click on individual veins within a retinal (eye) scan. ScribblePrompt can also mark up a structure given a bounding box.

The tool can then make corrections based on the user's feedback. If you wanted to highlight a kidney in an ultrasound, you could use a bounding box and then scribble in additional parts of the structure if ScribblePrompt missed any edges. To edit your segment, you could use a "negative scribble" to exclude certain regions.

These self-correcting, interactive capabilities made ScribblePrompt the preferred tool among neuroimaging researchers at MGH in a user study: 93.8 percent of these users favored the MIT approach over the SAM baseline for improving segments in response to scribble corrections, and 87.5 percent preferred ScribblePrompt for click-based edits.

ScribblePrompt was trained on simulated scribbles and clicks on 54,000 images across 65 datasets, featuring scans of the eyes, thorax, spine, cells, skin, abdominal muscles, neck, brain, bones, teeth, and lesions.
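The interaction styles described above (a bounding box to seed a segment, positive scribbles or clicks to add missed areas, negative scribbles to carve out mistakes) can be pictured with a small sketch. All names here are hypothetical and the pixel-set logic is a deliberate simplification; a real interactive model predicts a refined mask from these inputs rather than toggling pixels directly.

```python
# Toy sketch (hypothetical names, not ScribblePrompt's API): accumulate
# user inputs and apply them as corrections to a working binary mask,
# mirroring the correction loop described in the article.
from dataclasses import dataclass, field

@dataclass
class InteractiveSegmentation:
    height: int
    width: int
    mask: set = field(default_factory=set)  # (y, x) pixels in the segment

    def add_bounding_box(self, top, left, bottom, right):
        """Seed the mask with everything inside a rough bounding box."""
        self.mask |= {(y, x) for y in range(top, bottom)
                      for x in range(left, right)}

    def positive_scribble(self, pixels):
        """Add pixels the user scribbled over because they were missed."""
        self.mask |= set(pixels)

    def negative_scribble(self, pixels):
        """Exclude regions the user marked as wrongly included."""
        self.mask -= set(pixels)

seg = InteractiveSegmentation(height=8, width=8)
seg.add_bounding_box(2, 2, 6, 6)         # rough initial segment: 16 pixels
seg.positive_scribble([(1, 3), (1, 4)])  # add a missed edge
seg.negative_scribble([(5, 5)])          # carve out a mistake
print(len(seg.mask))  # 16 + 2 - 1 = 17
```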
The model familiarized itself with 16 types of medical images, including microscopies, CT scans, X-rays, MRIs, ultrasounds, and photographs.

"Many existing methods don't respond well when users scribble across images because it's hard to simulate such interactions in training. For ScribblePrompt, we were able to force our model to pay attention to different inputs using our synthetic segmentation tasks," says Wong. "We wanted to train what's essentially a foundation model on a lot of diverse data so it would generalize to new types of images and tasks."

After taking in so much data, the team evaluated ScribblePrompt across 12 new datasets. Although it hadn't seen these images before, it outperformed four existing methods by segmenting more efficiently and giving more accurate predictions about the exact regions users wanted highlighted.

"Segmentation is the most prevalent biomedical image analysis task, performed widely both in routine clinical practice and in research -- which leads to it being both very diverse and a crucial, impactful step," says senior author Adrian Dalca SM '12, PhD '16, CSAIL research scientist and assistant professor at MGH and Harvard Medical School. "ScribblePrompt was carefully designed to be practically useful to clinicians and researchers, and hence to substantially make this step much, much faster."

"The majority of segmentation algorithms that have been developed in image analysis and machine learning are at least to some extent based on our ability to manually annotate images," says Harvard Medical School professor in radiology and MGH neuroscientist Bruce Fischl, who was not involved in the paper. "The problem is dramatically worse in medical imaging in which our 'images' are typically 3D volumes, as human beings have no evolutionary or phenomenological reason to have any competency in annotating 3D images. ScribblePrompt enables manual annotation to be carried out much, much faster and more accurately, by training a network on precisely the types of interactions a human would typically have with an image while manually annotating. The result is an intuitive interface that allows annotators to naturally interact with imaging data with far greater productivity than was previously possible."

Wong and Dalca wrote the paper with two other CSAIL affiliates: John Guttag, the Dugald C. Jackson Professor of EECS at MIT and CSAIL principal investigator; and MIT PhD student Marianne Rakic SM '22. Their work was supported, in part, by Quanta Computer Inc., the Eric and Wendy Schmidt Center at the Broad Institute, the Wistron Corp., and the National Institute of Biomedical Imaging and Bioengineering of the National Institutes of Health, with hardware support from the Massachusetts Life Sciences Center.

Wong and her colleagues' work will be presented at the 2024 European Conference on Computer Vision and was presented as an oral talk at the DCAMI workshop at the Computer Vision and Pattern Recognition Conference earlier this year, where it received the Bench-to-Bedside Paper Award for ScribblePrompt's potential clinical impact.
[2]
Interactive AI framework provides fast and flexible approach to help doctors annotate medical scans
The findings are published on the arXiv preprint server.
[3]
MIT's new AI tool cuts medical imaging annotation time by 28%
When AI systems are trained to understand the boundaries of biological structures, they can segment (or delineate) regions of interest that doctors and biomedical workers want to monitor for diseases and other abnormalities. Instead of wasting time manually tracing anatomy across multiple images, an artificial assistant could handle that task.

Researchers from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL), Massachusetts General Hospital (MGH), and Harvard Medical School have created an interactive tool called the "ScribblePrompt" framework. This tool can quickly segment any medical image, even types it hasn't seen before, without tedious data collection.

Instead of manually marking up each picture, the team simulated how users would annotate over 50,000 scans, including MRIs, ultrasounds, and photographs, across structures in the eyes, cells, brains, bones, skin, and more. The team used algorithms to annotate all those scans, replicating how humans would scribble and click on various areas in medical images. In addition to commonly labeled regions, the team used superpixel algorithms, which find parts of the image with similar values, to identify potential new regions of interest for medical researchers and train ScribblePrompt to segment them.
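As a rough picture of what "simulating how users would annotate" could mean, the toy sketch below derives a synthetic click and a short scribble from a ground-truth mask. It is a hypothetical illustration under simple assumptions (a random interior click, a scribble as a random walk confined to the region), not the team's actual simulation algorithm.

```python
# Toy sketch (hypothetical, not the team's simulation code): generate a
# synthetic "click" and "scribble" from a ground-truth binary mask, the
# kind of simulated interaction the article says was used for training.
import random

def simulate_interactions(mask, scribble_len=5, seed=0):
    """mask: 2D list of 0/1. Returns one click and a short in-region scribble."""
    rng = random.Random(seed)
    inside = [(y, x) for y, row in enumerate(mask)
              for x, v in enumerate(row) if v]
    click = rng.choice(inside)  # a simulated positive click
    # A scribble as a short random walk that stays inside the region.
    scribble, (y, x) = [click], click
    while len(scribble) < scribble_len:
        steps = [(y + dy, x + dx) for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1))
                 if (y + dy, x + dx) in inside]
        if not steps:
            break
        y, x = rng.choice(steps)
        scribble.append((y, x))
    return click, scribble

# A 2x2 foreground region inside a 4x4 "scan".
mask = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0]]
click, scribble = simulate_interactions(mask)
```

Pairing many such synthetic interactions with their known target masks gives a model supervised examples of "user input in, segment out" without any human ever touching the images.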
MIT researchers have developed ScribblePrompt, an AI-powered tool that significantly speeds up medical image annotation. This interactive framework could transform how doctors analyze and annotate medical scans, potentially improving patient care and reducing workload.
Researchers at the Massachusetts Institute of Technology (MIT) have unveiled a groundbreaking AI-powered tool called ScribblePrompt, designed to revolutionize the way medical professionals annotate and analyze medical scans [1]. This innovative framework promises to dramatically reduce the time and effort required for image annotation, a crucial step in medical diagnosis and treatment planning.
Medical image annotation has long been a time-consuming and labor-intensive process for healthcare professionals. Traditionally, doctors have had to manually outline and label specific areas of interest in medical scans, such as tumors or organs. This process can take anywhere from 15 to 60 minutes per image, depending on its complexity [2]. With the increasing volume of medical imaging in modern healthcare, this manual approach has become a significant bottleneck in patient care and research.
ScribblePrompt leverages advanced AI algorithms to assist doctors in the annotation process. The system works by allowing medical professionals to make rough outlines or "scribbles" on areas of interest within an image. The AI then uses these initial inputs to generate more precise and comprehensive annotations [3].
Key features of ScribblePrompt include:
- Multiple input types: scribbles, clicks, and bounding boxes
- "Negative scribbles" for excluding regions from a segment
- Iterative corrections based on user feedback
- Generalization across 16 kinds of medical images, including types and tasks it hasn't seen before
The development of ScribblePrompt has far-reaching implications for the medical field:
- Annotation time reduced by 28 percent compared to Meta's Segment Anything Model (SAM) framework
- More of clinicians' time freed for the more interesting parts of their analysis
- An approach that aims to augment, not replace, the efforts of medical workers
- Potential improvements to patient care as imaging volumes continue to grow
While ScribblePrompt shows great promise, researchers acknowledge that there are still challenges to overcome. Ensuring the tool's reliability across diverse medical conditions and imaging modalities is crucial. Additionally, integrating such AI-powered tools into existing healthcare workflows and addressing potential regulatory hurdles will be important steps in widespread adoption [1].
As the technology continues to evolve, the MIT team and other researchers in the field are working on refining the AI algorithms and expanding the tool's capabilities. The goal is to create a seamless, user-friendly experience that can be easily integrated into various healthcare settings, potentially transforming the landscape of medical imaging and diagnosis [2].
References
[1] Massachusetts Institute of Technology | A fast and flexible approach to help doctors annotate medical scans
[2] Medical Xpress - Medical and Health News | Interactive AI framework provides fast and flexible approach to help doctors annotate medical scans
[3] MIT's new AI tool cuts medical imaging annotation time by 28%