Curated by THEOUTPOST
On Thu, 19 Dec, 8:02 AM UTC
3 Sources
[1]
International experts outline recommendations for reducing bias risk in AI health tech
A new set of internationally agreed recommendations could help patients benefit more from AI-based medical innovations, for example by minimising the risk of bias, according to researchers. Studies have shown that medical innovations based on artificial intelligence (AI) technologies can be biased: they work well for some people and not for others, suggesting that some individuals and communities may be "left behind", or may even be harmed. The recommendations, published in The Lancet Digital Health and NEJM AI, aim to improve how the datasets used to build AI health technologies are documented and used, reducing the risk of potential AI bias.

"Data is like a mirror, providing a reflection of reality. And when distorted, data can magnify societal biases. But trying to fix the data to fix the problem is like wiping the mirror to remove a stain on your shirt," said lead author Xiaoxuan Liu, an associate professor of AI and Digital Health Technologies at the University of Birmingham, UK. "To create lasting change in health equity, we must focus on fixing the source, not just the reflection."

Key recommendations include preparing summaries of datasets and presenting them in plain language, according to the researchers, who formed the international initiative 'STANDING Together (STANdards for data Diversity, INclusivity and Generalisability)', involving more than 350 experts from 58 countries. Known or expected sources of bias, error, or other factors that affect a dataset should also be identified, the authors said. Further, the performance of an AI health technology should be evaluated and compared between contextualised groups of interest, as well as across the overall study population. Uncertainties identified in AI performance should be managed through mitigation plans that clearly state the clinical implications of these findings, along with documented strategies to monitor, manage and reduce these risks while the technology is implemented, the authors said.

"We hope to raise awareness that no dataset is free of limitations, so transparent communication of data limitations should be perceived as valuable, and absence of this information as a limitation," they said. "We hope that adoption of the STANDING Together recommendations by stakeholders across the AI health technology lifecycle will enable everyone in society to benefit from technologies that are safe and effective."
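The dataset-documentation recommendation above lends itself to a concrete illustration. Below is a minimal Python sketch of a machine-readable, plain-language dataset summary that records known or expected sources of bias. The DatasetSummary class, its field names, and the example dataset are hypothetical illustrations for this article, not a schema defined by STANDING Together.

# A minimal sketch of a plain-language dataset summary, illustrating the
# recommendation to document datasets and to state known or expected
# sources of bias. All field names and example values are assumptions
# made for illustration, not a STANDING Together schema.
from dataclasses import dataclass, field

@dataclass
class DatasetSummary:
    name: str
    plain_language_purpose: str          # what the data was collected for
    population_covered: str              # who is (and is not) represented
    known_bias_sources: list[str] = field(default_factory=list)
    known_limitations: list[str] = field(default_factory=list)

# Hypothetical dataset used only to show the structure in action.
summary = DatasetSummary(
    name="example_chest_xray_cohort",
    plain_language_purpose="Chest X-rays collected during routine care at one urban hospital.",
    population_covered="Adults aged 18-90 treated at a single site; children are not included.",
    known_bias_sources=[
        "Single-site collection: findings may not transfer to other regions.",
        "Scanner type changed mid-collection, altering image characteristics.",
    ],
    known_limitations=["Self-reported ethnicity is missing for a share of records."],
)
print(summary.plain_language_purpose)

A record like this travels with the dataset, so downstream developers can see at a glance who is represented and which limitations they must account for.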
[2]
New recommendations to increase transparency and tackle potential bias in medical AI technologies
Patients will be better able to benefit from innovations in medical artificial intelligence (AI) if a new set of internationally agreed recommendations is followed. The recommendations, published in The Lancet Digital Health and NEJM AI, aim to improve the way datasets are used to build AI health technologies and to reduce the risk of potential AI bias.

Innovative medical AI technologies may improve diagnosis and treatment for patients. However, some studies have shown that medical AI can be biased, meaning that it works well for some people and not for others. This means some individuals and communities may be "left behind," or may even be harmed when these technologies are used.

An international initiative called "STANDING Together (STANdards for data Diversity, INclusivity and Generalizability)" has published recommendations as part of a research study involving more than 350 experts from 58 countries. These recommendations aim to ensure that medical AI can be safe and effective for everyone, covering many of the factors that can contribute to AI bias.

Dr. Xiao Liu, Associate Professor of AI and Digital Health Technologies at the University of Birmingham and Chief Investigator of the study, said: "Data is like a mirror, providing a reflection of reality. And when distorted, data can magnify societal biases. But trying to fix the data to fix the problem is like wiping the mirror to remove a stain on your shirt. To create lasting change in health equity, we must focus on fixing the source, not just the reflection."

The STANDING Together recommendations aim to ensure that the datasets used to train and test medical AI systems represent the full diversity of the people the technology will be used for. AI systems often work less well for people who aren't properly represented in datasets. People in minority groups are particularly likely to be under-represented in datasets, so they may be disproportionately affected by AI bias. Guidance is also given on how to identify those who may be harmed when medical AI systems are used, allowing this risk to be reduced.

STANDING Together is led by researchers at University Hospitals Birmingham NHS Foundation Trust and the University of Birmingham, UK. The research was conducted with collaborators from more than 30 institutions worldwide, including universities, regulators (UK, US, Canada and Australia), patient groups and charities, and small and large health technology companies.

In addition to the recommendations themselves, a commentary published in Nature Medicine, written by the STANDING Together patient representatives, highlights the importance of public participation in shaping medical AI research.

Sir Jeremy Farrar, Chief Scientist of the World Health Organization, said: "Ensuring we have diverse, accessible and representative datasets to support the responsible development and testing of AI is a global priority. The STANDING Together recommendations are a major step forward in ensuring equity for AI in health."

Dominic Cushnan, Deputy Director for AI at NHS England, said: "It is crucial that we have transparent and representative datasets to support the responsible and fair development and use of AI. The STANDING Together recommendations are highly timely as we leverage the exciting potential of AI tools, and NHS AI Lab fully supports the adoption of their practice to mitigate AI bias."
These recommendations may be particularly helpful for regulatory agencies, health and care policy organizations, funding bodies, ethical review committees, universities, and government departments.
[3]
New recommendations to increase transparency and tackle potential bias in medical AI technologies
Patients will be better able to benefit from innovations in medical artificial intelligence (AI) if a new set of internationally agreed recommendations is followed. The recommendations, published in The Lancet Digital Health and NEJM AI, aim to improve the way datasets are used to build AI health technologies and to reduce the risk of potential AI bias.

Innovative medical AI technologies may improve diagnosis and treatment for patients; however, some studies have shown that medical AI can be biased, meaning that it works well for some people and not for others. This means some individuals and communities may be 'left behind', or may even be harmed when these technologies are used.

An international initiative called 'STANDING Together (STANdards for data Diversity, INclusivity and Generalisability)' has published recommendations as part of a research study involving more than 350 experts from 58 countries. These recommendations aim to ensure that medical AI can be safe and effective for everyone, covering many of the factors that can contribute to AI bias.

Dr Xiao Liu, Associate Professor of AI and Digital Health Technologies at the University of Birmingham and Chief Investigator of the study, said: "Data is like a mirror, providing a reflection of reality. And when distorted, data can magnify societal biases. But trying to fix the data to fix the problem is like wiping the mirror to remove a stain on your shirt. To create lasting change in health equity, we must focus on fixing the source, not just the reflection."

The STANDING Together recommendations aim to ensure that the datasets used to train and test medical AI systems represent the full diversity of the people the technology will be used for. AI systems often work less well for people who aren't properly represented in datasets. People in minority groups are particularly likely to be under-represented in datasets, so they may be disproportionately affected by AI bias. Guidance is also given on how to identify those who may be harmed when medical AI systems are used, allowing this risk to be reduced.

STANDING Together is led by researchers at University Hospitals Birmingham NHS Foundation Trust and the University of Birmingham, UK. The research was conducted with collaborators from more than 30 institutions worldwide, including universities, regulators (UK, US, Canada and Australia), patient groups and charities, and small and large health technology companies. The work was funded by The Health Foundation and the NHS AI Lab, and supported by the National Institute for Health and Care Research (NIHR), the research partner of the NHS, public health and social care.

In addition to the recommendations themselves, a commentary published in Nature Medicine, written by the STANDING Together patient representatives, highlights the importance of public participation in shaping medical AI research.

Sir Jeremy Farrar, Chief Scientist of the World Health Organization, said: "Ensuring we have diverse, accessible and representative datasets to support the responsible development and testing of AI is a global priority. The STANDING Together recommendations are a major step forward in ensuring equity for AI in health."

Dominic Cushnan, Deputy Director for AI at NHS England, said: "It is crucial that we have transparent and representative datasets to support the responsible and fair development and use of AI. The STANDING Together recommendations are highly timely as we leverage the exciting potential of AI tools, and NHS AI Lab fully supports the adoption of their practice to mitigate AI bias."

The recommendations were published on 18 December 2024 and are available open access via The Lancet Digital Health. These recommendations may be particularly helpful for regulatory agencies, health and care policy organisations, funding bodies, ethical review committees, universities, and government departments.
A global initiative has produced a set of recommendations to address potential bias in AI-based medical technologies, aiming to ensure equitable and effective healthcare for all.
In a significant move to enhance the safety and effectiveness of artificial intelligence (AI) in healthcare, an international group of experts has published a set of recommendations aimed at reducing bias in AI health technologies. The initiative, known as 'STANDING Together (STANdards for data Diversity, INclusivity and Generalisability)', involved over 350 experts from 58 countries and has produced guidelines published in The Lancet Digital Health and NEJM AI [1][2][3].
Studies have shown that AI-based medical innovations can exhibit bias, working effectively for some populations while failing others. This discrepancy raises concerns about certain individuals and communities being "left behind" or potentially harmed by these technologies [1]. Dr. Xiaoxuan Liu, lead author and Associate Professor at the University of Birmingham, UK, emphasizes the importance of addressing the root causes of health inequity rather than merely attempting to fix biased data [2].
The STANDING Together recommendations focus on several critical areas:
Dataset Transparency: Preparing summaries of datasets in plain language and identifying known or expected sources of bias or error [1].
Performance Evaluation: Assessing AI health technology performance across different contextual groups and the overall study population, as shown in the sketch after this list [1].
Bias Mitigation: Developing plans to manage uncertainties in AI performance and documenting strategies to monitor, manage, and reduce risks during implementation [1].
Diverse Representation: Ensuring that datasets used for training and testing AI systems represent the full diversity of potential users [2].
Risk Identification: Providing guidance on identifying individuals who may be at risk of harm from medical AI systems [2].
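To make the performance-evaluation item concrete, here is a minimal Python sketch of subgroup reporting on a held-out validation set. It assumes NumPy and scikit-learn are available; the synthetic data, the group_a/group_b/group_c labels, the 0.5 decision threshold, and the choice of AUROC and sensitivity as metrics are all illustrative assumptions, not prescriptions from the recommendations.

# A minimal sketch of subgroup performance reporting, in the spirit of
# evaluating AI performance for contextualised groups of interest
# alongside the overall population. Data, groups, threshold, and
# metrics below are illustrative assumptions.
import numpy as np
from sklearn.metrics import roc_auc_score, recall_score

rng = np.random.default_rng(seed=0)

# Synthetic stand-in for a held-out validation set: model risk scores,
# true outcome labels, and an attribute that defines subgroups.
n = 1000
scores = rng.random(n)  # model-predicted risk in [0, 1]
labels = (scores + rng.normal(0.0, 0.3, n) > 0.5).astype(int)
groups = rng.choice(["group_a", "group_b", "group_c"], size=n)

def report(name, y_true, y_score, threshold=0.5):
    # Report discrimination (AUROC) and sensitivity at a fixed
    # threshold for one cohort, alongside its size.
    y_pred = (y_score >= threshold).astype(int)
    print(f"{name:>10}  n={len(y_true):4d}  "
          f"AUROC={roc_auc_score(y_true, y_score):.3f}  "
          f"sensitivity={recall_score(y_true, y_pred):.3f}")

# Overall performance first, then the same metrics per subgroup, so any
# gap between groups is visible rather than averaged away.
report("overall", labels, scores)
for g in np.unique(groups):
    mask = groups == g
    report(g, labels[mask], scores[mask])

Reporting subgroup sample sizes alongside the metrics matters: a small cohort can make a reassuring score highly uncertain, which is precisely the kind of limitation the recommendations ask developers to state plainly.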
The recommendations have garnered support from key figures in global health and technology:
Sir Jeremy Farrar, Chief Scientist of the World Health Organization, hailed the recommendations as "a major step forward in ensuring equity for AI in health" [2][3].
Dominic Cushnan, Deputy Director for AI at NHS England, emphasized the importance of transparent and representative datasets in AI development [2][3].
These recommendations are expected to be particularly valuable for regulatory agencies, health policy organizations, funding bodies, ethical review committees, universities, and government departments [2][3]. By adopting these guidelines, stakeholders across the AI health technology lifecycle can work towards ensuring that everyone in society benefits from technologies that are both safe and effective [1].
A commentary in Nature Medicine, authored by STANDING Together patient representatives, underscores the importance of public participation in shaping medical AI research, further emphasizing the initiative's commitment to inclusive and equitable healthcare innovation [2][3].
As AI continues to play an increasingly significant role in healthcare, these recommendations represent a crucial step towards harnessing its potential while safeguarding against unintended biases and ensuring equitable access to its benefits for all patients.
Reference
[1] International experts outline recommendations for reducing bias risk in AI health tech
[2] Medical Xpress - Medical and Health News | New recommendations to increase transparency and tackle potential bias in medical AI technologies
[3] New recommendations to increase transparency and tackle potential bias in medical AI technologies