Curated by THEOUTPOST
On Mon, 26 Aug, 4:01 PM UTC
3 Sources
[1]
Almost half of FDA-approved medical AI devices lack clinical validation data
University of North Carolina Health Care, Aug 26, 2024

Artificial intelligence (AI) has practically limitless applications in healthcare, ranging from auto-drafting patient messages in MyChart to optimizing organ transplantation and improving tumor removal accuracy. Despite their potential benefit to doctors and patients alike, these tools have been met with skepticism because of patient privacy concerns, the possibility of bias, and questions about device accuracy. In response to the rapidly evolving use and approval of AI medical devices in healthcare, a multi-institutional team of researchers from the UNC School of Medicine, Duke University, Ally Bank, Oxford University, Columbia University, and the University of Miami has been on a mission to build public trust and evaluate how exactly AI and algorithmic technologies are being approved for use in patient care. Together, Sammy Chouffani El Fassi, an MD candidate at the UNC School of Medicine and research scholar at Duke Heart Center, and Gail E. Henderson, PhD, professor in the UNC Department of Social Medicine, led a thorough analysis of clinical validation data for more than 500 medical AI devices, revealing that approximately half of the tools authorized by the U.S. Food and Drug Administration (FDA) lacked reported clinical validation data. Their findings were published in Nature Medicine.

"Although AI device manufacturers boast of the credibility of their technology with FDA authorization, clearance does not mean that the devices have been properly evaluated for clinical effectiveness using real patient data," said Chouffani El Fassi, first author on the paper. "With these findings, we hope to encourage the FDA and industry to boost the credibility of device authorization by conducting clinical validation studies on these technologies and making the results of such studies publicly available."

Since 2016, the average number of medical AI device authorizations by the FDA per year has increased from 2 to 69, indicating tremendous growth in the commercialization of AI medical technologies. The majority of approved AI medical technologies are being used to assist physicians with diagnosing abnormalities in radiological imaging, analyzing pathology slides, dosing medication, and predicting disease progression. Artificial intelligence is able to learn and perform such human-like functions by using combinations of algorithms. The technology is given a plethora of data and sets of rules to follow so that it can "learn" how to detect patterns and relationships. From there, device manufacturers need to ensure that the technology does not simply memorize the data used to train it, and that it can accurately produce results on never-before-seen data.

Regulation during a rapid proliferation of AI medical devices

Following the rapid proliferation of these devices and applications to the FDA, Chouffani El Fassi, Henderson, and colleagues were curious about how clinically effective and safe the authorized devices are. Their team analyzed all submissions available in the FDA's official database, titled "Artificial Intelligence and Machine Learning (AI/ML)-Enabled Medical Devices." "A lot of the devices that came out after 2016 were created new, or maybe they were similar to a product that already was on the market," said Henderson. "Using these hundreds of devices in this database, we wanted to determine what it really means for an AI medical device to be FDA-authorized." Of the 521 device authorizations, 144 were labeled as "retrospectively validated," 148 were "prospectively validated," and 22 were validated using randomized controlled trials. Most notably, 226 of the 521 FDA-approved medical devices, or approximately 43%, lacked published clinical validation data.
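The memorization concern described above is, in machine-learning terms, a question of generalization: a model must be scored on data held out from training, not on data it has already seen. The sketch below is purely illustrative and assumes nothing about any specific device; the toy nearest-neighbor "model" and synthetic data are invented for the example.

```python
import random

# Toy illustration of holdout evaluation: score a model only on data it never
# saw during training, so accuracy reflects generalization, not memorization.
random.seed(0)

# Synthetic 1-D "measurements"; the true label is 1 when the value exceeds 0.5.
data = [(x, int(x > 0.5)) for x in (random.random() for _ in range(200))]

split = int(0.8 * len(data))
train, test = data[:split], data[split:]  # test rows are never-before-seen

def nearest_label(x, labeled):
    # 1-nearest-neighbor: predict the label of the closest training point.
    return min(labeled, key=lambda pair: abs(pair[0] - x))[1]

accuracy = sum(nearest_label(x, train) == y for x, y in test) / len(test)
print(f"held-out accuracy: {accuracy:.2f}")
```

Clinical validation goes a step further than this statistical check: the "unseen data" must come from real patients in the device's intended setting.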
A few of the devices used "phantom images," computer-generated images not taken from a real patient, which did not technically meet the requirements for clinical validation. Furthermore, the researchers found that the latest draft guidance, published by the FDA in September 2023, does not clearly distinguish between the different types of clinical validation studies in its recommendations to manufacturers.

Types of clinical validation and a new standard

In the realm of clinical validation, there are three methods by which researchers and device manufacturers validate the accuracy of their technologies: retrospective validation, prospective validation, and a subset of prospective validation called randomized controlled trials. Retrospective validation involves feeding the AI model image data from the past, such as patient chest X-rays from before the COVID-19 pandemic. Prospective validation typically produces stronger scientific evidence because the AI device is validated on real-time data from patients. This is more realistic, according to the researchers, because it allows the AI to account for data variables that did not exist when it was being trained, such as patient chest X-rays affected by viruses during the COVID-19 pandemic. Randomized controlled trials are considered the gold standard for clinical validation. This type of prospective study uses random assignment to control for confounding variables that would differentiate the experimental and control groups, thus isolating the therapeutic effect of the device. For example, researchers could evaluate device performance by randomly assigning patients to have their CT scans read by a radiologist (control group) versus by AI (experimental group).
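The randomized assignment at the heart of the CT-reading trial design described above can be sketched in a few lines. The arm names and patient IDs below are illustrative assumptions, not details from the study.

```python
import random

def randomize(patient_ids, seed=None):
    """Randomly assign each patient to the control or experimental arm,
    mirroring the randomized controlled trial design described above."""
    rng = random.Random(seed)
    arms = {"radiologist (control)": [], "AI (experimental)": []}
    for pid in patient_ids:
        # Each patient has an equal chance of landing in either arm, so
        # confounding variables are balanced across groups on average.
        rng.choice(list(arms.values())).append(pid)
    return arms

assignment = randomize(range(20), seed=42)
for arm, patients in assignment.items():
    print(arm, patients)
```

Because assignment depends only on chance, any systematic difference in outcomes between the two arms can be attributed to the intervention rather than to how patients were selected.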
Because retrospective studies, prospective studies, and randomized controlled trials produce different levels of scientific evidence, the researchers recommend that the FDA clearly distinguish between the different types of clinical validation studies in its recommendations to device manufacturers. In their Nature Medicine publication, Chouffani El Fassi, Henderson, and colleagues lay out definitions for the clinical validation methods that can be used as a standard in the field of medical AI. "We shared our findings with directors at the FDA who oversee medical device regulation, and we expect our work will inform their regulatory decision making," said Chouffani El Fassi. "We also hope that our publication will inspire researchers and universities globally to conduct clinical validation studies on medical AI to improve the safety and effectiveness of these technologies. We're looking forward to the positive impact this project will have on patient care at a large scale."

Algorithms can save lives

Chouffani El Fassi is currently working with UNC cardiothoracic surgeons Aurelie Merlo and Benjamin Haithcock, as well as the executive leadership team at UNC Health, to implement an algorithm in their electronic health record system that automates the organ donor evaluation and referral process. In contrast to the field's rapid production of AI devices, medicine lacks basic algorithms, such as computer software that diagnoses patients using simple lab values in electronic health records. Chouffani El Fassi says this is because implementation is often expensive and requires interdisciplinary teams with expertise in both medicine and computer science. Despite the challenge, UNC Health is on a mission to improve the organ transplant space.
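The kind of "basic algorithm" described here, software that flags patients from simple lab values in an electronic health record, could look something like the sketch below. The field names, thresholds, and referral criteria are entirely hypothetical placeholders invented for illustration; they are not UNC Health's actual logic and not clinical guidance.

```python
# Hypothetical sketch of a rule-based EHR screen that flags potential
# organ-donor referrals from simple lab values. All field names and
# thresholds are invented for illustration only.
def flag_for_referral(labs: dict) -> bool:
    required = ("creatinine", "bilirubin")
    if any(k not in labs for k in required):
        return False  # incomplete labs: no automated flag
    # Invented example criteria standing in for "labs within range".
    return labs["creatinine"] < 2.0 and labs["bilirubin"] < 3.0

patients = [
    {"creatinine": 1.1, "bilirubin": 0.8},  # flagged
    {"creatinine": 3.4, "bilirubin": 0.9},  # not flagged
    {"creatinine": 1.0},                    # missing lab: not flagged
]
flags = [flag_for_referral(p) for p in patients]
print(flags)  # → [True, False, False]
```

Even a simple deterministic rule like this, as the article notes, requires interdisciplinary work to integrate safely into an EHR system.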
"Finding a potential organ donor, evaluating their organs, and then having the organ procurement organization come in and coordinate an organ transplant is a lengthy and complicated process," said Chouffani El Fassi. "If this very basic computer algorithm works, we could optimize the organ donation process. A single additional donor means several lives saved. With such a low threshold for success, we look forward to giving more people a second chance at life."

Journal reference: Chouffani El Fassi, S., et al. (2024). Not all AI health tools with regulatory authorization are clinically validated. Nature Medicine. doi.org/10.1038/s41591-024-03203-3.
[2]
Almost half of FDA-approved AI medical devices are not trained on real patient data, research reveals
by Kendall Daniels, University of North Carolina Health Care
[3]
Not all AI health tools with regulatory authorization are clinically validated - Nature Medicine
These concerns underscore the importance of the validation of AI technologies. Patients and providers need a gold-standard indicator of efficacy and safety for medical AI devices. Such a standard would build public trust and increase the rate of device adoption by end users. As the chief legal regulatory body for medical devices in the USA, the Food and Drug Administration (FDA) currently authorizes AI software as medical devices (SaMD). However, for the public to accept FDA authorization as an indication of effectiveness, the agency and device manufacturers must publish ample clinical validation data. A major obstacle to the analysis of clinical validation is the lack of standard language employed to define validation methods. The FDA, International Medical Device Regulators Forum, corporations, device manufacturers, academic societies and the research literature all define clinical, prospective and retrospective validation differently. For example, in public discourse on a draft guidance published by the FDA in 2016, "Software as a Medical Device (SaMD): Clinical Evaluation" (FDA-2016-D-2483), the Illumina corporation commented, "these recommendations are not well explained or use non-standard terminology... the gold standard for medical studies (the prospective randomized clinical trial) is not adequately addressed." The American Heart Association, AstraZeneca and the 510(k) Coalition of medical device companies, among several other groups, expressed similar confusion. Furthermore, the latest draft guidance published by the FDA, in September 2023, does not clearly distinguish between different types of clinical validation studies in recommendations to manufacturers. Retrospective studies typically use existing data that were collected for purposes other than validating a device. Such data may not represent the intended population and can be subject to corruption, deletion and degradation over time.
Prospective data are most representative of how the deployment of a medical AI device would affect patient care and thus provide stronger evidence for clinical validation. Randomized controlled trials (RCTs), a type of prospective study, use random assignment to control for confounding variables, thus isolating the therapeutic effect of the device. Given the differing quality of scientific evidence generated by retrospective studies versus prospective studies, including RCTs, such distinctions should be made.
A recent study reveals that nearly half of FDA-approved medical AI devices lack proper clinical validation data, raising concerns about their real-world performance and potential risks to patient care.
A groundbreaking study published in Nature Medicine has uncovered significant concerns regarding the clinical validation of artificial intelligence (AI) medical devices approved by the U.S. Food and Drug Administration (FDA). The research, conducted by a multi-institutional team led from the UNC School of Medicine and Duke University, reveals that almost half of these FDA-approved AI devices lack crucial clinical validation data 1.
The study examined the 521 AI-enabled medical device authorizations listed in the FDA's official "Artificial Intelligence and Machine Learning (AI/ML)-Enabled Medical Devices" database. Researchers analyzed the publicly available information for these devices, focusing on how each was clinically validated, whether retrospectively, prospectively, or through a randomized controlled trial 2.
The results of the study are alarming: 226 of the 521 authorized devices, approximately 43%, lacked published clinical validation data, and a few were validated only with computer-generated "phantom images" rather than data from real patients.
These findings raise serious questions about the real-world performance and safety of these AI medical devices 3.
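The headline percentage follows directly from the counts reported in the sources above (226 of 521 authorizations without published clinical validation data):

```python
# Counts reported in the Nature Medicine analysis cited above.
total_authorizations = 521
without_validation_data = 226

share = without_validation_data / total_authorizations * 100
print(f"{share:.1f}% of authorizations lacked published clinical validation data")
# → 43.4%, the "approximately 43%" figure cited throughout this page
```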
The lack of comprehensive clinical validation data poses potential risks to patient care. Without proper testing in diverse clinical settings, there's uncertainty about how these AI devices will perform across different patient populations and healthcare environments. This gap in validation could lead to inaccurate diagnoses, inappropriate treatments, or missed critical conditions.
The study highlights the need for more stringent FDA regulations and oversight in the approval process for AI medical devices. While the FDA has been working on developing a regulatory framework for AI/ML-based software as a medical device (SaMD), this research underscores the urgency of implementing more robust validation requirements 1.
Experts are calling for increased transparency in the AI device approval process. They emphasize the need for publicly available clinical validation results, clear distinctions between retrospective studies, prospective studies, and randomized controlled trials, and standard definitions of clinical validation methods.
These measures would help ensure that AI medical devices are safe, effective, and reliable across diverse patient populations 2.
As AI continues to play an increasingly significant role in healthcare, addressing these validation gaps becomes crucial. The medical community, regulatory bodies, and AI developers must collaborate to establish more rigorous standards for clinical validation. This collaboration is essential to harness the full potential of AI in medicine while ensuring patient safety and maintaining public trust in these innovative technologies 3.
Reference
[1]
[2]
Medical Xpress - Medical and Health News | Almost half of FDA-approved AI medical devices are not trained on real patient data, research reveals
The Outpost is a comprehensive collection of curated artificial intelligence software tools that cater to the needs of small business owners, bloggers, artists, musicians, entrepreneurs, marketers, writers, and researchers.
© 2025 TheOutpost.AI All rights reserved