2 Sources
[1]
Clinicians more likely to express doubt in medical records of Black patients
PLOS, August 13, 2025. Clinicians are more likely to indicate doubt or disbelief in the medical records of Black patients than in those of White patients, a pattern that could contribute to ongoing racial disparities in healthcare. That is the conclusion of a new study, analyzing more than 13 million clinical notes, published August 13, 2025 in the open-access journal PLOS One by Mary Catherine Beach of Johns Hopkins University, U.S.
There is mounting evidence that electronic health records (EHR) contain language reflecting the unconscious biases of clinicians, and that this language may undermine the quality of care that patients receive. In the new study, researchers analyzed 13,065,081 EHR notes written between 2016 and 2023 about 1,537,587 patients by 12,027 clinicians at a large health system in the mid-Atlantic United States. They used artificial intelligence (AI) tools to find which notes had language suggesting the clinician doubted the sincerity or narrative competence of the patient, for example stating that the patient "claims," "insists," or is "adamant about" their symptoms, or is a "poor historian."
Overall, fewer than 1% (n=106,523; 0.82%) of the medical notes contained language undermining patient credibility, about half of which undermined sincerity (n=62,480; 0.48%) and half undermined competence (n=52,243; 0.40%). However, notes written about non-Hispanic Black patients, compared to those written about White patients, had higher odds of containing terms undermining the patients' credibility (aOR 1.29; 95% CI 1.27-1.32), sincerity (aOR 1.16; 95% CI 1.14-1.19), or competence (aOR 1.50; 95% CI 1.47-1.54). Moreover, notes written about Black patients were less likely to have language supporting credibility (aOR 0.82; 95% CI 0.79-0.85) than those written about White or Asian patients.
The study was limited by the fact that it used only one health system and did not examine the influence of clinician characteristics such as race, age, or gender. Additionally, as the NLP models used had high, but not perfect, accuracy in detecting credibility-related language, they may have misclassified some notes and thereby under- or overestimated the prevalence of credibility-related language.
Still, the authors conclude that clinician documentation undermining patient credibility may disproportionately stigmatize Black individuals, and that the findings likely represent "the tip of an iceberg." They say that medical training should help future clinicians become more aware of unconscious biases, and that AI tools used to help write medical notes should be programmed to avoid biased language.
The authors add: "For years, many patients - particularly Black patients - have felt their concerns were dismissed by health professionals. By isolating words and phrases suggesting that a patient may not be believed or taken seriously, we hope to raise awareness of this type of credibility bias with the ultimate goal of eliminating it."
Journal reference: Beach, M. C., et al. (2025) Racial bias in clinician assessment of patient credibility: Evidence from electronic health records. PLOS One. doi.org/10.1371/journal.pone.0328134
[2]
Analysis reveals potential racial bias in how doctors document patient trustworthiness
A study using AI to analyze over 13 million medical records finds that clinicians are more likely to express doubt about the credibility of Black patients than of White patients, potentially contributing to racial disparities in healthcare.
A groundbreaking study published in the open-access journal PLOS One has uncovered a concerning pattern of racial bias in how clinicians document patient credibility in electronic health records (EHRs). Led by Mary Catherine Beach of Johns Hopkins University, the research analyzed over 13 million clinical notes from a large health system in the mid-Atlantic United States [1][2].
The study employed artificial intelligence (AI) tools to examine 13,065,081 EHR notes written between 2016 and 2023. These notes pertained to 1,537,587 patients and were authored by 12,027 clinicians. The AI was programmed to identify language that suggested clinicians doubted the sincerity or narrative competence of patients [1][2].
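The press release quotes examples of the phrases the models were built to detect. As a minimal illustration of the flagging task, here is a simple pattern-matching sketch: the phrase lists contain only the examples quoted above, and the study's actual classifiers were trained NLP models, not keyword matchers.

```python
import re

# Illustrative phrase lists: only the examples quoted in the press release.
# The study's actual detectors were NLP models, not regular expressions.
SINCERITY_DOUBT = [r"\bclaims?\b", r"\binsists?\b", r"\badamant (?:about|that)\b"]
COMPETENCE_DOUBT = [r"\bpoor historian\b"]

def flag_credibility_language(note: str) -> dict:
    """Flag whether a note contains sincerity- or competence-undermining phrases."""
    text = note.lower()
    return {
        "undermines_sincerity": any(re.search(p, text) for p in SINCERITY_DOUBT),
        "undermines_competence": any(re.search(p, text) for p in COMPETENCE_DOUBT),
    }

note = "Patient claims chest pain is 10/10; poor historian, adamant about discharge."
print(flag_credibility_language(note))
# -> {'undermines_sincerity': True, 'undermines_competence': True}
```

A keyword baseline like this would miss context (e.g., "claims adjuster" in a billing note), which is why the authors note that even their higher-accuracy NLP models may have misclassified some notes.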
While less than 1% (0.82%) of the medical notes contained language undermining patient credibility, a clear racial disparity emerged: compared with notes about White patients, notes about non-Hispanic Black patients had higher odds of containing language undermining credibility (aOR 1.29), sincerity (aOR 1.16), and competence (aOR 1.50), and lower odds of containing language supporting credibility (aOR 0.82) [1][2].
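An adjusted odds ratio (aOR) comes from a regression model that controls for other patient characteristics, so the reported ratios reflect race-associated differences net of those covariates. As a rough, hypothetical sketch of how such estimates are produced (the paper's exact model specification and covariates are not given here; the data and the age/sex adjusters below are invented for illustration), one could fit a logistic regression on note-level data and exponentiate the coefficient of interest:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic note-level data for illustration only; not the study's dataset,
# and the covariates (age, sex) are assumptions, not taken from the paper.
rng = np.random.default_rng(0)
n = 100_000
df = pd.DataFrame({
    "black": rng.integers(0, 2, n),   # 1 = note about a non-Hispanic Black patient
    "age": rng.normal(50, 15, n),
    "female": rng.integers(0, 2, n),
})
# Simulate a rare outcome (~1% of notes flagged as undermining credibility).
lin = -5.0 + 0.25 * df["black"] + 0.005 * (df["age"] - 50)
df["undermining"] = (rng.random(n) < 1 / (1 + np.exp(-lin))).astype(int)

# Logistic regression adjusting for age and sex; exp(coefficient) is the aOR.
fit = smf.logit("undermining ~ black + age + female", data=df).fit(disp=0)
print(np.exp(fit.params["black"]))          # adjusted odds ratio, Black vs. White
print(np.exp(fit.conf_int().loc["black"]))  # 95% confidence interval
```

Read this way, the study's aOR of 1.29 means the adjusted odds of a note containing credibility-undermining language were about 29% higher for Black patients than for White patients.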
The researchers suggest that this pattern of documentation could contribute to ongoing racial disparities in healthcare. Dr. Beach and her colleagues emphasize that these findings likely represent "the tip of an iceberg" in terms of unconscious bias in medical practice [1][2].
The study acknowledges several limitations, including its focus on a single health system and the inability to examine clinician characteristics such as race, age, or gender. Additionally, the AI models used, while highly accurate, may have misclassified some notes [1][2].
To combat this issue, the researchers propose two key strategies: training future clinicians to recognize their unconscious biases, and programming the AI tools used to help draft medical notes to avoid biased language.
Dr. Beach stated, "For years, many patients - particularly Black patients - have felt their concerns were dismissed by health professionals. By isolating words and phrases suggesting that a patient may not be believed or taken seriously, we hope to raise awareness of this type of credibility bias with the ultimate goal of eliminating it" [1][2].
This study highlights the potential of AI in uncovering and addressing systemic biases in healthcare, paving the way for more equitable and unbiased medical practices in the future.