2 Sources
[1]
Addressing inaccurate race and ethnicity data in medical AI
University of Minnesota | Jun 9, 2025

The inaccuracy of race and ethnicity data in electronic health records (EHRs) can negatively impact patient care as artificial intelligence (AI) is increasingly integrated into healthcare. Because hospitals and providers collect such data inconsistently and struggle to classify individual patients accurately, AI systems trained on these datasets can inherit and perpetuate racial bias. In a new publication in PLOS Digital Health, experts in bioethics and law call for immediate standardization of methods for collecting race and ethnicity data, and for developers to warrant race and ethnicity data quality in medical AI systems.

The research synthesizes concerns about why patient race data in EHRs may not be accurate, identifies best practices for healthcare systems and medical AI researchers to improve data accuracy, and provides a new template for medical AI developers to transparently warrant the quality of their race and ethnicity data.

Lead author Alexandra Tsalidis, MBE, notes: "If AI developers heed our recommendation to disclose how their race and ethnicity data were collected, they will not only advance transparency in medical AI but also help patients and regulators critically assess the safety of the resulting medical devices. Just as nutrition labels inform consumers about what they're putting into their bodies, these disclaimers can reveal the quality and origins of the data used to train AI-based health care tools."

"Race bias in AI models is a huge concern as the technology is increasingly integrated into healthcare," says senior author Francis Shen, JD, PhD. "This article provides a concrete method that can be implemented to help address these concerns."

While more work needs to be done, the article offers a starting point, suggests co-author Lakshmi Bharadwaj, MBE: "An open dialogue regarding best practices is a vital step, and the approaches we suggest could generate significant improvements."
The research was supported by the NIH Bridge to Artificial Intelligence (Bridge2AI) program and by an NIH BRAIN Neuroethics grant (R01MH134144).

Journal reference: Tsalidis, A., Bharadwaj, L., & Shen, F. X. (2025). Standardization and accuracy of race and ethnicity data: Equity implications for medical AI. PLOS Digital Health. doi.org/10.1371/journal.pdig.0000807
[2]
Medical AI systems are failing to disclose inaccurate race and ethnicity information, researchers say
A new study highlights the need for standardizing race and ethnicity data collection in electronic health records to prevent bias in medical AI systems, proposing a warranty system for data quality.
A new study published in PLOS Digital Health has shed light on a critical issue in the rapidly evolving field of medical artificial intelligence (AI). Researchers have identified that inaccurate race and ethnicity data in electronic health records (EHRs) could significantly impact patient care as AI becomes more integrated into healthcare systems [1].
The problem stems from inconsistent data collection methods and difficulties in accurately classifying individual patients' race and ethnicity. As a result, AI systems trained on these datasets risk inheriting and perpetuating racial bias, potentially compromising the quality and equity of healthcare delivery.
To address this pressing concern, the research team, led by Alexandra Tsalidis, MBE, has called for immediate action on two fronts:
Standardization of data collection: The study emphasizes the need for healthcare systems to adopt standardized methods for collecting race and ethnicity data [2].
Data quality warranty: The researchers propose that AI developers should provide warranties for the quality of race and ethnicity data used in their medical AI systems.
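To make the standardization point concrete, the sketch below shows one way inconsistent EHR race entries could be harmonized before model training. This is purely illustrative: the paper calls for standardized collection but does not prescribe this mapping, and the free-text aliases are invented examples. The target categories follow the U.S. OMB 1997 minimum standard, which the study does not necessarily endorse.

```python
# Standard race categories from the U.S. OMB 1997 minimum standard
# (used here only as an example target vocabulary).
OMB_RACE_CATEGORIES = {
    "american indian or alaska native",
    "asian",
    "black or african american",
    "native hawaiian or other pacific islander",
    "white",
}

# Hypothetical aliases of the kind found in raw EHR exports.
EHR_ALIASES = {
    "aa": "black or african american",
    "african-american": "black or african american",
    "caucasian": "white",
    "pac islander": "native hawaiian or other pacific islander",
}

def standardize_race(raw: str) -> str:
    """Map a raw EHR race string to a standard category, or flag it for review."""
    key = raw.strip().lower()
    if key in OMB_RACE_CATEGORIES:
        return key
    if key in EHR_ALIASES:
        return EHR_ALIASES[key]
    # Never silently guess: unmapped entries (including ethnicity terms
    # recorded in a race field) are surfaced for human review.
    return "unknown/needs review"

print(standardize_race("Caucasian"))  # -> white
print(standardize_race("Hispanic"))   # -> unknown/needs review (ethnicity, not race)
```

Flagging unmapped values rather than imputing them reflects the article's core concern: silent misclassification is exactly how bias enters downstream models.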
These recommendations aim to improve data accuracy and transparency in medical AI development. The research team has also developed a new template for AI developers to transparently warrant the quality of their race and ethnicity data.
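The paper's actual template is not reproduced in this article. As a purely hypothetical sketch of the idea, inspired by the nutrition-label analogy, a machine-readable "data label" disclosing how race and ethnicity data were collected might look like the following; every field name here is invented for illustration.

```python
import json

# Hypothetical machine-readable disclosure a developer might ship with a
# medical AI system. Field names and values are illustrative assumptions,
# not the template from Tsalidis et al. (2025).
data_label = {
    "dataset": "example-ehr-cohort",  # hypothetical dataset name
    "race_ethnicity_collection": {
        "method": "self-reported",        # vs. observer-assigned or imputed
        "categories_standard": "OMB 1997",
        "missingness_rate": 0.12,         # fraction of records with no entry
        "imputed": False,
        "last_audited": "2025-01-15",
    },
}

# Serialize so patients, regulators, and auditors can inspect it.
print(json.dumps(data_label, indent=2))
```

A structured disclosure like this is what would let a regulator compare, say, self-reported versus observer-assigned collection methods across devices.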
Lead author Alexandra Tsalidis draws an analogy between the proposed data quality disclosures and nutrition labels on food products. She states, "Just as nutrition labels inform consumers about what they're putting into their bodies, these disclaimers can reveal the quality and origins of the data used to train AI-based health care tools" [1].
This approach is expected to not only advance transparency in medical AI but also empower patients and regulators to critically assess the safety of AI-based medical devices.
Senior author Francis Shen, JD, PhD, emphasizes the significance of this research, stating, "Race bias in AI models is a huge concern as the technology is increasingly integrated into healthcare. This article provides a concrete method that can be implemented to help address these concerns" [2].
The study offers a starting point for tackling this complex issue. Co-author Lakshmi Bharadwaj, MBE, suggests that an open dialogue regarding best practices is crucial, and the proposed approaches could lead to significant improvements in the field.
This important research was supported by the NIH Bridge to Artificial Intelligence (Bridge2AI) program and an NIH BRAIN Neuroethics grant (R01MH134144) [1]. While the study provides a solid foundation for addressing racial bias in medical AI, the authors acknowledge that more work needs to be done in this area.
As AI continues to play an increasingly significant role in healthcare, ensuring the accuracy and fairness of these systems becomes paramount. The standardization of race and ethnicity data collection and the implementation of data quality warranties represent crucial steps towards more equitable and reliable AI-driven healthcare solutions.