AI Summaries Show Gender Bias in Medical Care, UK Study Reveals


A study by the London School of Economics finds that AI tools used in social care can downplay health issues for female patients, potentially leading to inadequate medical care.

AI Tools Reveal Gender Bias in Medical Summaries

A recent study conducted by the London School of Economics and Political Science (LSE) has uncovered a concerning trend in artificial intelligence (AI) tools used for summarizing medical case notes. The research found that these AI systems, particularly Google's Gemma model, tend to downplay health issues for female patients, potentially leading to inadequate medical care [1].

Source: engadget

Study Methodology and Findings

The LSE research team, led by Dr. Sam Rickman, analyzed real case notes from 617 adult social care users in the UK. These notes were processed through different large language models (LLMs), including Meta's Llama 3 and Google's Gemma, with only the patient's gender swapped. The study examined 29,616 pairs of summaries to identify how male and female cases were treated differently by the AI models [2].

Significant Gender Disparities

The research revealed that when using Google's Gemma model, language such as "disabled," "unable," and "complex" appeared significantly more often in descriptions of men than women. For instance, the same case notes summarized for a male patient as "Mr Smith is an 84-year-old man who lives alone and has a complex medical history, no care package and poor mobility" were described for a female patient as "Mrs Smith is an 84-year-old living alone. Despite her limitations, she is independent and able to maintain her personal care" [1][2].
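The study's analysis pipeline is not reproduced here, but the pairwise comparison it describes can be sketched in a few lines of Python. The snippet below is a minimal, illustrative assumption of that idea: it takes the two quoted Mr/Mrs Smith summaries as stand-ins for real model output and counts how often the severity-related terms highlighted by the researchers appear in each. The term list and function names are hypothetical, not the LSE team's code.

    # Illustrative sketch only: compare severity-related wording across a
    # gender-swapped pair of summaries, as described in the LSE study.
    import re
    from collections import Counter

    SEVERITY_TERMS = {"disabled", "unable", "complex"}  # terms flagged in the study

    def term_counts(summary: str) -> Counter:
        """Count how often each severity-related term appears in one summary."""
        words = re.findall(r"[a-z]+", summary.lower())
        return Counter(w for w in words if w in SEVERITY_TERMS)

    # Paired summaries of the same case note, differing only in the patient's gender.
    male_summary = ("Mr Smith is an 84-year-old man who lives alone and has a complex "
                    "medical history, no care package and poor mobility")
    female_summary = ("Mrs Smith is an 84-year-old living alone. Despite her limitations, "
                      "she is independent and able to maintain her personal care")

    male_counts = term_counts(male_summary)
    female_counts = term_counts(female_summary)

    for term in sorted(SEVERITY_TERMS):
        print(f"{term}: male={male_counts[term]}, female={female_counts[term]}")
    # Across the study's 29,616 such pairs, these terms were reported to appear
    # significantly more often in summaries of male patients.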

Implications for Healthcare

Dr. Rickman expressed concern about these findings, stating, "Because the amount of care you get is determined on the basis of perceived need, this could result in women receiving less care if biased models are used in practice" [1]. This bias in AI summaries could lead to unequal care provision for women, as their health needs may be underestimated or overlooked [2].

Widespread Use and Lack of Transparency

The study highlights that AI tools are being used by more than half of England's councils to ease the workload of overstretched social workers. However, there is little information about which specific AI models are being used, how frequently, and what impact this has on decision-making [2].

Call for Regulation and Transparency

Researchers emphasize the need for transparency and rigorous testing of AI systems used in healthcare. Dr. Rickman stated, "While my research highlights issues with one model, more are being deployed all the time, making it essential that all AI systems are transparent, rigorously tested for bias and subject to robust legal oversight" [2].

Broader Context of AI Bias

This study adds to the growing body of evidence showing biases in AI systems across various industries. A US study analyzing 133 AI systems found that about 44% showed gender bias and 25% exhibited both gender and racial bias [2]. These findings underscore the importance of addressing biases in AI, particularly in critical sectors like healthcare.

Future Directions

The paper concludes by recommending that regulators "should mandate the measurement of bias in LLMs used in long-term care" to prioritize "algorithmic fairness" [2]. As AI continues to play an increasingly significant role in healthcare and social services, ensuring unbiased and equitable treatment for all patients remains a crucial challenge for developers, policymakers, and healthcare providers alike.
