3 Sources
[1]
New AI model performs medical image segmentation with far less data
University of California - San Diego | Aug 1, 2025

A new artificial intelligence (AI) tool could make it much easier, and cheaper, for doctors and researchers to train medical imaging software, even when only a small number of patient scans are available. The AI tool improves upon a process called medical image segmentation, in which every pixel in an image is labeled based on what it represents: cancerous or normal tissue, for example. This process is often performed by a highly trained expert, and deep learning has shown promise in automating this labor-intensive task.

"The big challenge is that deep learning-based methods are data hungry -- they require a large amount of pixel-by-pixel annotated images to learn," explained Li Zhang, a Ph.D. student in the Department of Electrical and Computer Engineering at the University of California San Diego. Creating such datasets demands expert labor, time and cost. And for many medical conditions and clinical settings, that level of data simply doesn't exist.

To overcome this limitation, Zhang and a team of researchers led by UC San Diego electrical and computer engineering professor Pengtao Xie have developed an AI tool that can learn image segmentation from just a small number of expert-labeled samples. By doing so, it cuts down the amount of data usually required by up to 20 times. It could potentially lead to faster, more affordable diagnostic tools, especially in hospitals and clinics with limited resources. The work was published in Nature Communications.

"This project was born from the need to break this bottleneck and make powerful segmentation tools more practical and accessible, especially for scenarios where data are scarce," said Zhang, the first author of the study.

The AI tool was tested on a variety of medical image segmentation tasks.
It learned to identify skin lesions in dermoscopy images; breast cancer in ultrasound scans; placental vessels in fetoscopic images; polyps in colonoscopy images; and foot ulcers in standard camera photos, to list just a few examples. The method was also extended to 3D images, such as those used to map the hippocampus or liver.

In settings where annotated data were extremely limited, the AI tool boosted model performance by 10 to 20% compared to existing approaches. It required 8 to 20 times less real-world training data than standard methods while often matching or outperforming them.

Zhang described how this AI tool could potentially be used to help dermatologists diagnose skin cancer. Instead of gathering and labeling thousands of images, a trained expert in the clinic might only need to annotate 40, for example. The AI tool could then use this small dataset to identify suspicious lesions from a patient's dermoscopy images in real time. "It could help doctors make a faster, more accurate diagnosis," Zhang said.

The system works in stages. First, it learns how to generate synthetic images from segmentation masks, which are essentially color-coded overlays that tell an algorithm which parts of an image are, say, healthy or diseased. Then, it uses that knowledge to create new, artificial image-mask pairs to augment a small dataset of real examples. A segmentation model is trained using both. Through a continuous feedback loop, the system refines the images it creates based on how well they improve the model's learning.

The feedback loop is a big part of what makes this system work so well, noted Zhang. "Rather than treating data generation and segmentation model training as two separate tasks, this system is the first to integrate them together. The segmentation performance itself guides the data generation process. This ensures that the synthetic data are not just realistic, but also specifically tailored to improve the model's segmentation capabilities."
Looking ahead, the team plans to make their AI tool smarter and more versatile. The researchers also plan to incorporate feedback from clinicians directly into the training process to make the generated data more relevant for real-world medical use.

Journal reference: Zhang, L., et al. (2025). Generative AI enables medical image segmentation in ultra low-data regimes. Nature Communications. doi.org/10.1038/s41467-025-61754-6.
[2]
New AI tool learns to read medical images with far less data
[3]
New AI Tool Learns to Read Medical Images With Far Less Data | Newswise
Full study: "Generative AI enables medical image segmentation in ultra low-data regimes." This work was supported by the National Science Foundation (IIS2405974 and IIS2339216) and the National Institutes of Health (R35GM157217 and R21GM154171).
Researchers at UC San Diego have developed an AI tool that can perform medical image segmentation with far less data than traditional methods, potentially making diagnostic tools faster and more affordable.
Researchers at the University of California San Diego have developed a groundbreaking artificial intelligence (AI) tool that could revolutionize medical image segmentation. This innovative system can learn to analyze medical images using significantly less data than traditional methods, potentially making diagnostic tools faster and more affordable, especially in resource-limited settings [1][2][3].
Medical image segmentation, a crucial process in which every pixel in an image is labeled to identify specific features such as cancerous or normal tissue, has long been a labor-intensive task performed by highly trained experts. While deep learning has shown promise in automating this process, it typically requires large amounts of annotated data to function effectively [1].
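To make the per-pixel idea concrete, here is a small illustrative sketch (toy data invented for illustration, not from the study) showing a mask that assigns a class label to every pixel, and the Dice coefficient, a standard overlap score commonly used to compare a model's mask against an expert's annotation:

```python
import numpy as np

# A segmentation mask labels every pixel: here 0 = normal tissue,
# 1 = suspected lesion, on a toy 4x4 image.
expert_mask = np.array([[0, 0, 1, 1],
                        [0, 1, 1, 1],
                        [0, 0, 1, 0],
                        [0, 0, 0, 0]])

model_mask = np.array([[0, 0, 1, 1],
                       [0, 0, 1, 1],
                       [0, 0, 1, 0],
                       [0, 0, 0, 0]])

# Dice = 2 * |overlap| / (|expert| + |model|); 1.0 means a perfect match.
intersection = np.sum((expert_mask == 1) & (model_mask == 1))
dice = 2 * intersection / (np.sum(expert_mask) + np.sum(model_mask))
print(round(dice, 3))  # prints 0.909
```

Here the model misses one lesion pixel the expert marked, so the overlap score drops slightly below 1. Collecting enough images with expert masks like this one is exactly the bottleneck the UC San Diego tool targets.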
Li Zhang, a Ph.D. student in UC San Diego's Department of Electrical and Computer Engineering and first author of the study, explained the core issue: "The big challenge is that deep learning-based methods are data hungry -- they require a large amount of pixel-by-pixel annotated images to learn" [2]. This data scarcity has been a significant bottleneck in developing AI tools for medical imaging, particularly for rare conditions or in clinical settings with limited resources.
To address this challenge, Zhang and a team led by Professor Pengtao Xie have created an AI tool that can learn image segmentation from a small number of expert-labeled samples. This innovative approach reduces the amount of data required by up to 20 times compared to standard methods [1][2][3].
The system works in stages:
1. It first learns how to generate synthetic images from segmentation masks: color-coded overlays that tell an algorithm which parts of an image are, say, healthy or diseased.
2. It then uses that knowledge to create new, artificial image-mask pairs that augment a small dataset of real examples.
3. A segmentation model is trained on both the real and the synthetic data.
4. Through a continuous feedback loop, the system refines the images it generates based on how well they improve the model's learning [1][2][3].
The AI tool has been tested on a variety of medical image segmentation tasks, including:
- skin lesions in dermoscopy images
- breast cancer in ultrasound scans
- placental vessels in fetoscopic images
- polyps in colonoscopy images
- foot ulcers in standard camera photos
The method was also extended to 3D images, such as those used to map the hippocampus or liver [1][2][3].
In settings with extremely limited annotated data, the AI tool boosted model performance by 10 to 20% compared to existing approaches. It required 8 to 20 times less real-world training data than standard methods while often matching or outperforming them [1][2][3].
Zhang illustrated a potential application in dermatology: instead of gathering and labeling thousands of images, a trained expert in the clinic might only need to annotate 40. The AI tool could then use this small dataset to identify suspicious lesions from a patient's dermoscopy images in real time. "It could help doctors make a faster, more accurate diagnosis," Zhang said [2].
A key innovation in this system is its integrated approach. Zhang noted, "Rather than treating data generation and segmentation model training as two separate tasks, this system is the first to integrate them together. The segmentation performance itself guides the data generation process. This ensures that the synthetic data are not just realistic, but also specifically tailored to improve the model's segmentation capabilities" [1][2][3].
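That integrated loop can be caricatured in a few lines. The sketch below is a deliberately simplified illustration under assumed toy components (a noise-parameterized "generator" and a thresholding "segmenter", neither from the actual study): candidate generator settings are scored by the held-out accuracy of the segmenter trained on their output, so downstream segmentation performance guides data generation.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_example(noise):
    # Toy stand-in for a mask-conditioned image generator: draw a random
    # binary mask, then render an image where foreground pixels are bright
    # (~0.8) and background pixels dark (~0.2), plus Gaussian noise.
    mask = (rng.random((16, 16)) < 0.3).astype(float)
    image = 0.2 + 0.6 * mask + rng.normal(0.0, noise, (16, 16))
    return image, mask

def train_segmenter(pairs):
    # Toy "segmentation model": pick the intensity threshold that best
    # reproduces the masks on the training pairs.
    best_t, best_score = 0.5, -1.0
    for t in np.linspace(0.1, 0.9, 17):
        score = np.mean([np.mean((img > t) == (msk > 0)) for img, msk in pairs])
        if score > best_score:
            best_t, best_score = t, score
    return best_t

def validate(threshold, pairs):
    # Per-pixel accuracy of the thresholding segmenter on held-out pairs.
    return float(np.mean([np.mean((img > threshold) == (msk > 0))
                          for img, msk in pairs]))

# A handful of "real" expert-labeled examples, plus a held-out set.
real = [make_example(noise=0.1) for _ in range(5)]
held_out = [make_example(noise=0.1) for _ in range(5)]

# Feedback loop: for each candidate generator setting, train a segmenter
# on real + synthetic data and keep the setting whose synthetic data most
# improves held-out segmentation accuracy.
best_noise, best_acc = None, -1.0
for noise in (0.05, 0.2, 0.5):
    synthetic = [make_example(noise) for _ in range(20)]
    t = train_segmenter(real + synthetic)
    acc = validate(t, held_out)
    if acc > best_acc:
        best_noise, best_acc = noise, acc
```

In the actual method the generator and segmenter are learned models coupled during training rather than a swept parameter; this sketch mirrors only the selection-by-downstream-performance idea the quote describes.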
Looking ahead, the research team plans to enhance the AI tool's intelligence and versatility. They also aim to incorporate feedback from clinicians directly into the training process, making the generated data more relevant for real-world medical applications [1][2][3].
This work, published in Nature Communications, was supported by the National Science Foundation and the National Institutes of Health [3]. It represents a significant step toward making powerful AI-driven medical imaging tools more accessible and practical, particularly in scenarios where data are scarce.