Curated by THEOUTPOST
On Tue, 5 Nov, 4:03 PM UTC
2 Sources
[1]
With Google's Latest Breakthrough, AI Reaches the Core of 3D Medical Imaging
Google's CT Foundation creates a 1,408-dimensional vector that captures key details about organs, tissues, and abnormalities.

AI is actively transforming the healthcare sector, especially medical imaging. This data-driven approach is helping doctors diagnose and treat patients more quickly and accurately. The technology speeds up the imaging process and supports personalised treatment plans for each patient.

What is AI's role in the imaging process? Through segmentation, specific structures are highlighted in images, aiding the early and accurate detection of diseases. Preprocessing techniques further improve image quality by reconstructing incomplete or noisy computed tomography (CT) data. Beyond diagnostics, predictive analytics enables doctors to anticipate how quickly a disease will progress and to suggest potential treatments. Quality control safeguards ensure images are clear and artefact-free for reliable use. Meanwhile, continuous imaging allows for ongoing monitoring, so treatments can be adjusted as and when needed.

CT scans, a type of 3D imaging, play a crucial role in detecting conditions like lung cancer, neurological issues, and trauma. Over 70 million CT exams are conducted annually in the US alone. Tech giant Google recently announced the release of CT Foundation, its new medical foundation tool for 3D CT volumes. According to an official blog post, CT Foundation builds on Google's prior work in 2D medical imaging for chest radiographs, dermatology, and digital pathology. The tool, built on VideoCoCa, simplifies the processing of DICOM-format CT scans by creating a 1,408-dimensional embedding vector that captures key details about organs, tissues, and abnormalities. CT Foundation allows researchers to train AI models with less data and significantly fewer computational resources than traditional methods require. Researchers can also use its API for free.

Integrating AI into the complex task of interpreting 3D CT scans provides advanced tools for efficient analysis, helping radiologists spot even the smallest abnormalities that might otherwise be missed. For example, AI-driven methods now streamline blood flow assessment in stroke patients, providing real-time insights that accelerate treatment decisions in critical care. In COVID-19 research conducted by Rafał Obuchowicz and two colleagues, 3D CT analysis revealed fibrotic lung changes in cancer patients post-infection, improving the general understanding of infection-induced vulnerabilities.

Generative Adversarial Networks (GANs) are used to enhance CT image reconstruction by filling in missing data. Additionally, UnetU, a deep learning tool, denoises images and enhances material differentiation, reducing processing time and supporting more detailed analysis. Deep learning-based segmentation provides thorough diagnostic insights, replacing manual annotation and increasing workflow efficiency, ultimately improving patient outcomes through enhanced diagnostic clarity.

According to a paper in the National Library of Medicine, LLMs have the capacity to enhance transfer learning efficiency, integrate multimodal data, facilitate clinical interactivity, and optimise cost-efficiency in healthcare. The paper also states that the transformer architecture, which is key to LLMs, is gaining prominence in the medical domain.
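To make the embedding workflow concrete, here is a minimal sketch of the idea: once each CT volume is reduced to a 1,408-dimensional vector, a lightweight classifier can be trained on a modest amount of labelled data, which is where the data and compute savings come from. Note that get_ct_embedding() below is a hypothetical stand-in for the CT Foundation API (the real client and endpoint are not shown in the article), and all data is synthetic.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

EMBED_DIM = 1408  # dimensionality reported for CT Foundation embeddings

def get_ct_embedding(volume: np.ndarray) -> np.ndarray:
    """HYPOTHETICAL stand-in for the CT Foundation API call, which maps
    a full CT volume to a single 1,408-dimensional vector."""
    rng = np.random.default_rng(int(volume.sum()) % (2**32))
    return rng.standard_normal(EMBED_DIM)

# Synthetic stand-ins for a small labelled CT dataset (40 volumes).
rng = np.random.default_rng(0)
volumes = [rng.integers(0, 2000, size=(64, 128, 128)) for _ in range(40)]
labels = rng.integers(0, 2, size=40)  # e.g. 1 = abnormality present

# Because each scan is now just a 1,408-dim feature vector, a simple
# linear model suffices -- no GPU training on raw voxels is needed.
X = np.stack([get_ct_embedding(v) for v in volumes])
X_train, X_test, y_train, y_test = train_test_split(X, labels, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```

In a real study, the embeddings would come from Google's API and the labels from clinical annotations; the design choice is that the heavy lifting happens once, in the embedding model, and each downstream task only fits a small model on top.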
[Figure: potential flow chart for the clinical application of LLMs, via the NIH]

According to another paper published by the National Library of Medicine earlier this year, ChatGPT plays an essential role in enhancing clinical workflow efficiency and diagnostic accuracy. It caters to multiple areas of medical imaging, such as image captioning, report generation and classification, extracting findings from reports, answering visual questions, and making interpretable diagnoses. The report also establishes that collaboration between researchers and clinicians is needed to fully leverage LLMs in imaging.

LLMs in Radiology

In January this year, the Radiological Society of North America released a paper on chatbots and large language models in radiology. The paper discusses LLMs, including multimodal models that consider both text and images. "Such models have the potential to transform radiology practice and research but must be optimised and validated before implementation in supervised settings," it states. The paper also lists hallucinations, knowledge cutoff dates, poor complex reasoning, a tendency to perpetuate bias, and stochasticity as some of the major current limitations in radiology.

Two UCLA researchers, Eran Halperin and Oren Avram, recently developed an AI-powered foundation model that can accurately analyse 3D medical imagery. The model, SLIViT (Slice Integration by Vision Transformer), can analyse MRIs and CT scans in much less time than human experts.

Google's CT Foundation enters a domain already explored by Microsoft with its Project InnerEye, open-source medical imaging AI software used for deep learning research. The project was also covered in Microsoft's blog on 'biomedical imaging', which addressed the challenges of speed, quantification, and cost in medical imaging using AI. The blog also discusses research focus areas including machine learning for image reconstruction, radiotherapy image segmentation, ophthalmology, digital pathology, pandemic preparedness, and Microsoft's Connected Imaging Instrument project.

Along with the tool's launch, Google shared a demo Python notebook for training models on public data, including one for lung cancer detection. Google also tested the model across six clinical tasks relevant to the head, chest, and abdominopelvic regions, covering conditions such as intracranial haemorrhage, lung cancer, and multiple abdominal abnormalities. The results indicated that the models achieved area under the curve (AUC) scores above 0.8 even with limited training data. AUC is measured between 0.0 and 1.0, where 1.0 is a perfect model and 0.5 represents random chance.

Regardless, the tool is not ready for medical diagnosis yet. "We developed this model for research purposes only and, as such, it may not be used in patient care and is not intended to be used to diagnose, cure, mitigate, treat, or prevent a disease. For example, the model and any embeddings may not be used as a medical device," Google said.

As machine learning and AI continue to develop, they promise more accurate diagnoses, fewer mistakes, and better outcomes, ultimately elevating medical imaging to unprecedented levels.
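For readers unfamiliar with the metric, this minimal sketch (using scikit-learn with invented labels and scores, not CT Foundation outputs) shows how AUC behaves at the two ends of that scale:

```python
# AUC is the probability that a randomly chosen positive case is ranked
# above a randomly chosen negative one: 0.5 is chance, 1.0 is perfect.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Toy ground truth: 1 = condition present (e.g. haemorrhage on a scan).
y_true = np.array([0, 0, 0, 1, 1, 1, 0, 1])
good_scores = np.array([0.1, 0.2, 0.3, 0.8, 0.7, 0.9, 0.4, 0.6])
print(roc_auc_score(y_true, good_scores))  # 1.0: every positive outranks every negative

# An uninformative model converges towards AUC 0.5 as the sample grows.
y_big = rng.integers(0, 2, size=10_000)
print(roc_auc_score(y_big, rng.random(10_000)))  # close to 0.5
```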
[2]
Google Expands to 3D Imaging -- Who Needs Radiologists?
Google's CT Foundation creates a 1,408-dimensional vector that captures key details and simplifies the analysis of 3D imaging.

CT scans, a type of 3D imaging, play a crucial role in detecting conditions like lung cancer, neurological issues, and trauma. Over 70 million exams are conducted annually in the US alone. Tech giant Google recently announced the release of CT Foundation, its new medical foundation tool for 3D CT volumes -- because, apparently, who needs radiologists now?

Traditional radiologists examine 3D scans by breaking them down into 2D slices, checking each one for signs of disease. They use their knowledge of how healthy organs and tissues look to spot any abnormalities. In some cases, they view these scans in augmented reality (AR) on a smartphone, which can also make the images easier for patients to understand. By utilising AI, particularly the subfield called computer vision, practitioners can analyse images to find patterns and identify abnormalities much faster than with traditional methods. AI involvement has thus increased the pace and efficiency of medical diagnoses made through 3D imaging scans.
Google introduces CT Foundation, a new AI tool for analyzing 3D CT scans, potentially revolutionizing medical imaging and diagnosis. This development highlights the growing role of AI in healthcare, particularly in radiology.
Google has announced the release of CT Foundation, a groundbreaking AI tool designed to revolutionize the analysis of 3D CT scans in medical imaging [1][2]. This development marks a significant advancement in the application of artificial intelligence to healthcare, particularly in the field of radiology.
CT Foundation, built on Google's VideoCoCa technology, simplifies the processing of DICOM format CT scans by creating a 1,408-dimensional vector that captures key details about organs, tissues, and abnormalities [1]. This innovative approach allows researchers to train AI models more efficiently with less data, significantly reducing the computational resources required compared to traditional methods [2].
The integration of AI in interpreting 3D CT scans provides advanced tools for efficient analysis, helping radiologists identify even the smallest abnormalities that might otherwise be missed [1]. AI-driven methods are now streamlining various aspects of medical imaging, including:

- Real-time blood flow assessment in stroke patients, accelerating treatment decisions in critical care
- GAN-based CT image reconstruction that fills in missing or incomplete data
- Deep learning denoising, such as the UnetU tool, which enhances material differentiation and reduces processing time (see the sketch after this list)
- Automated segmentation that replaces manual annotation and increases workflow efficiency
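As a rough illustration of the denoising item above, here is a toy encoder-decoder network with a skip connection in PyTorch. This is a generic U-Net-style sketch of the kind of architecture used for CT denoising, not the actual UnetU model, whose design the sources do not detail:

```python
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """Toy U-Net-style denoiser: encode, downsample, decode, with a skip
    connection so fine spatial detail survives the bottleneck."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 16, 3, padding=1), nn.ReLU())
        self.down = nn.MaxPool2d(2)
        self.mid = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        # Decoder sees upsampled features concatenated with encoder features.
        self.dec = nn.Sequential(nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 1, 3, padding=1))

    def forward(self, x):
        e = self.enc(x)
        u = self.up(self.mid(self.down(e)))
        return self.dec(torch.cat([u, e], dim=1))  # predict the clean slice

noisy_slice = torch.randn(1, 1, 128, 128)  # stand-in for a noisy CT slice
print(TinyUNet()(noisy_slice).shape)       # torch.Size([1, 1, 128, 128])
```

Trained on pairs of noisy and clean slices, such a network learns to output the clean image; the skip connection is what lets it remove noise without blurring anatomical edges.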
According to the National Library of Medicine, Large Language Models (LLMs) have the potential to enhance transfer learning efficiency, integrate multimodal data, and optimize cost-efficiency in healthcare [1]. ChatGPT, for instance, is playing an essential role in enhancing clinical workflow efficiency and diagnosis accuracy across multiple areas of medical imaging [1].
Despite the promising advancements, the Radiological Society of North America highlights several limitations in the current application of LLMs in radiology, including:

- Hallucinations
- Knowledge cutoff dates
- Poor complex reasoning
- A tendency to perpetuate bias
- Stochasticity
Google's CT Foundation enters a field already explored by others:

- Microsoft's Project InnerEye, an open-source medical imaging AI project used for deep learning research [1]
- SLIViT (Slice Integration by Vision Transformer), a foundation model from UCLA researchers Eran Halperin and Oren Avram that analyzes MRIs and CT scans in much less time than human experts [1]
Google has tested CT Foundation across six clinical tasks relevant to the head, chest, and abdominopelvic regions. The results showed that models achieved area under the curve (AUC) scores above 0.8 even with limited training data [2]. To promote accessibility and further research, Google has made the CT Foundation API available for free and shared a Python notebook for training models, including one for lung cancer detection using public data [1][2].
As AI continues to transform the healthcare sector, particularly in medical imaging, collaborations between researchers, clinicians, and tech companies will be crucial to fully leverage these advancements and improve patient outcomes through enhanced diagnostic clarity and efficiency.
References
[1] Analytics India Magazine | With Google's Latest Breakthrough, AI Reaches the Core of 3D Medical Imaging
[2] Analytics India Magazine | Google Expands to 3D Imaging -- Who Needs Radiologists?