2 Sources
[1]
Addressing inaccurate race and ethnicity data in medical AI
University of Minnesota, Jun 9, 2025

The inaccuracy of race and ethnicity data found in electronic health records (EHRs) can negatively impact patient care as artificial intelligence (AI) is increasingly integrated into healthcare. Because hospitals and providers inconsistently collect such data and struggle to accurately classify individual patients, AI systems trained on these datasets can inherit and perpetuate racial bias.

In a new publication in PLOS Digital Health, experts in bioethics and law call for immediate standardization of methods for collecting race and ethnicity data, and for developers to warrant race and ethnicity data quality in medical AI systems. The research synthesizes concerns about why patient race data in EHRs may not be accurate, identifies best practices for healthcare systems and medical AI researchers to improve data accuracy, and provides a new template for medical AI developers to transparently warrant the quality of their race and ethnicity data.

Lead author Alexandra Tsalidis, MBE, notes: "If AI developers heed our recommendation to disclose how their race and ethnicity data were collected, they will not only advance transparency in medical AI but also help patients and regulators critically assess the safety of the resulting medical devices. Just as nutrition labels inform consumers about what they're putting into their bodies, these disclaimers can reveal the quality and origins of the data used to train AI-based health care tools."

"Race bias in AI models is a huge concern as the technology is increasingly integrated into healthcare. This article provides a concrete method that can be implemented to help address these concerns," says senior author Francis Shen, JD, PhD.

While more work needs to be done, the article offers a starting point, suggests co-author Lakshmi Bharadwaj, MBE. "An open dialogue regarding best practices is a vital step, and the approaches we suggest could generate significant improvements."
The research was supported by the NIH Bridge to Artificial Intelligence (Bridge2AI) program and by an NIH BRAIN Neuroethics grant (R01MH134144).

Source: University of Minnesota

Journal reference: Tsalidis, A., Bharadwaj, L., & Shen, F. X. (2025). Standardization and accuracy of race and ethnicity data: Equity implications for medical AI. PLOS Digital Health. doi.org/10.1371/journal.pdig.0000807
[2]
Medical AI systems are failing to disclose inaccurate race and ethnicity information, researchers say
A new study highlights the need for standardizing race and ethnicity data collection in electronic health records to prevent bias in medical AI systems, proposing a warranty system for data quality.
A new study published in PLOS Digital Health has shed light on a critical issue in the rapidly evolving field of medical artificial intelligence (AI). Researchers have identified that inaccurate race and ethnicity data in electronic health records (EHRs) could significantly impact patient care as AI becomes more integrated into healthcare systems 1.
The problem stems from inconsistent data collection methods and difficulties in accurately classifying individual patients' race and ethnicity. As a result, AI systems trained on these datasets risk inheriting and perpetuating racial bias, potentially compromising the quality and equity of healthcare delivery.
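A toy simulation (not from the paper, and with made-up rates) can illustrate why this matters: when race labels in the record are noisy, a fairness audit computed on the recorded labels can understate a real performance gap between groups.

```python
import random

# Hypothetical illustration: misrecorded group labels mask a true disparity.
# All rates below are invented for the sketch.
random.seed(0)

records = []
for _ in range(10_000):
    true_group = random.choice(["A", "B"])
    # Suppose the model errs more often for group B (the disparity to detect).
    error = random.random() < (0.30 if true_group == "B" else 0.10)
    # Suppose 25% of group-B patients are misrecorded as group A in the EHR.
    recorded_group = true_group
    if true_group == "B" and random.random() < 0.25:
        recorded_group = "A"
    records.append((true_group, recorded_group, error))

def error_rate(records, key, group):
    """Error rate within one group, indexed by true or recorded label."""
    errs = [e for t, r, e in records if (t if key == "true" else r) == group]
    return sum(errs) / len(errs)

gap_true = error_rate(records, "true", "B") - error_rate(records, "true", "A")
gap_recorded = error_rate(records, "recorded", "B") - error_rate(records, "recorded", "A")
print(f"disparity on true labels:     {gap_true:.3f}")
print(f"disparity on recorded labels: {gap_recorded:.3f}")
```

Because misrecorded group-B patients inflate group A's measured error rate, the audit on recorded labels reports a smaller gap than actually exists, which is one way inaccurate data can hide bias rather than merely cause it.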
To address this pressing concern, the research team, led by Alexandra Tsalidis, MBE, has called for immediate action on two fronts:
Standardization of data collection: The study emphasizes the need for healthcare systems to adopt standardized methods for collecting race and ethnicity data 2.
Data quality warranty: The researchers propose that AI developers should provide warranties for the quality of race and ethnicity data used in their medical AI systems.
These recommendations aim to improve data accuracy and transparency in medical AI development. The research team has also developed a new template for AI developers to transparently warrant the quality of their race and ethnicity data.
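As a rough sketch of what such a "nutrition label" style disclosure might contain, consider the structure below. The field names and values are illustrative assumptions, not the authors' actual template.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class RaceEthnicityDataDisclosure:
    """Hypothetical data-provenance label for a medical AI training set."""
    collection_method: str            # e.g. self-report vs. staff-assigned
    category_standard: str            # e.g. OMB minimum reporting categories
    missing_rate: float               # fraction of records with no value
    known_limitations: List[str] = field(default_factory=list)

    def summary(self) -> str:
        """Render a short, human-readable label for patients and regulators."""
        lines = [
            f"Collection method: {self.collection_method}",
            f"Category standard: {self.category_standard}",
            f"Missing data rate: {self.missing_rate:.0%}",
        ]
        if self.known_limitations:
            lines.append("Known limitations: " + "; ".join(self.known_limitations))
        return "\n".join(lines)

# Example values are invented for illustration.
label = RaceEthnicityDataDisclosure(
    collection_method="patient self-report at registration",
    category_standard="OMB 1997 minimum categories",
    missing_rate=0.12,
    known_limitations=["categories imputed for a subset of records"],
)
print(label.summary())
```

A regulator or hospital purchaser could compare such labels across vendors the way a consumer compares nutrition facts, which is the analogy the authors draw.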
Lead author Alexandra Tsalidis draws an analogy between the proposed data quality disclosures and nutrition labels on food products. She states, "Just as nutrition labels inform consumers about what they're putting into their bodies, these disclaimers can reveal the quality and origins of the data used to train AI-based health care tools" 1.
Source: Medical Xpress
This approach is expected to not only advance transparency in medical AI but also empower patients and regulators to critically assess the safety of AI-based medical devices.
Senior author Francis Shen, JD, PhD, emphasizes the significance of this research, stating, "Race bias in AI models is a huge concern as the technology is increasingly integrated into healthcare. This article provides a concrete method that can be implemented to help address these concerns" 2.
The study offers a starting point for tackling this complex issue. Co-author Lakshmi Bharadwaj, MBE, suggests that an open dialogue regarding best practices is crucial, and the proposed approaches could lead to significant improvements in the field.
This important research was supported by the NIH Bridge to Artificial Intelligence (Bridge2AI) program and an NIH BRAIN Neuroethics grant (R01MH134144) 1. While the study provides a solid foundation for addressing racial bias in medical AI, the authors acknowledge that more work needs to be done in this area.
As AI continues to play an increasingly significant role in healthcare, ensuring the accuracy and fairness of these systems becomes paramount. The standardization of race and ethnicity data collection and the implementation of data quality warranties represent crucial steps towards more equitable and reliable AI-driven healthcare solutions.