AI systems judge people with stronger, more systematic biases than humans, study reveals

Reviewed by Nidhi Govil


A Hebrew University study analyzing over 43,000 simulated decisions reveals AI systems exhibit systematic and predictable biases when evaluating people. While AI mimics human trust, it relies on rigid criteria rather than holistic impressions, leading to stronger biases in critical decision-making roles like job screening and creditworthiness assessment.

AI Systems Show Systematic Biases in Evaluating People

AI bias has emerged as a critical concern as machines take on more decision-making roles: screening job candidates, assessing creditworthiness, and guiding organizational choices. A groundbreaking study from Hebrew University, published in Proceedings of the Royal Society, analyzed over 43,000 simulated decisions alongside approximately 1,000 human participants across five scenarios [1][2]. The research reveals that AI systems exhibit systematic and predictable biases when judging people, operating fundamentally differently from human judgment despite appearing to mimic trust.

Source: TechRadar

How AI Trust Differs From Human Evaluation

The scenarios tested included financial decisions such as lending money to a small business owner and donating to a nonprofit founder, alongside social judgments like assessing a babysitter or rating a boss [1]. Both humans and AI systems favored individuals perceived as demonstrating competence, integrity, and good intentions. "AI is not making random decisions. It captures something real about how humans evaluate one another," said Prof. Yaniv Dover [1]. However, the critical difference lies in the evaluation process. While humans form holistic impressions by blending multiple traits into intuitive judgments, AI systems break people down into separate components, scoring traits like competence and integrity almost like separate columns in a spreadsheet [2].

Rigid Criteria Lead to Stronger Biases

"People in our study are messy and holistic in how they judge others," explained Valeria Lerman. "AI is cleaner, more systematic, and that can lead to very different outcomes" [1]. The research uncovered that AI's biases can be stronger than human biases, appearing even when every other detail about the person remained identical. In financial scenarios, AI systems displayed consistent differences based solely on demographic traits: older individuals frequently received more favorable outcomes, religion had strong effects, especially in monetary decisions, and gender also influenced judgments in certain models [1]. "Humans have biases, of course," said Prof. Dover. "But what surprised us is that AI's biases can be more systematic, more predictable, and sometimes stronger" [2].
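The counterfactual design the researchers describe, changing a single demographic attribute while holding every other detail of the person constant, can be sketched in a few lines. This is a hypothetical illustration, not the study's actual code: `query_model` stands in for whatever language model an audit targets, and `toy_model` is an invented stand-in with a deliberately biased rule so the probe has something to detect.

```python
def make_prompts(template: str, attribute_values: list[str]) -> list[str]:
    # Each prompt differs only in the one demographic attribute;
    # all other details about the person stay identical.
    return [template.format(attr=v) for v in attribute_values]

def audit(template: str, attribute_values: list[str], query_model) -> tuple[dict, bool]:
    # Record the model's decision for each counterfactual variant.
    prompts = make_prompts(template, attribute_values)
    decisions = {v: query_model(p) for v, p in zip(attribute_values, prompts)}
    # A systematic bias shows up as decisions that differ across variants
    # even though only the demographic attribute changed.
    biased = len(set(decisions.values())) > 1
    return decisions, biased

# Hypothetical stand-in for a real LLM call; its biased rule mirrors the
# study's finding that older applicants received more favorable outcomes.
def toy_model(prompt: str) -> str:
    return "approve" if "65-year-old" in prompt else "deny"

template = ("A {attr} applicant with five years of business experience and a "
            "steady income requests a small-business loan. Approve or deny?")
decisions, biased = audit(template, ["65-year-old", "30-year-old"], toy_model)
```

In a real audit, `query_model` would call the model under test, and many paired prompts per attribute would be aggregated into approval rates rather than single decisions.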

Model Selection Matters in Decision-Making

Another significant finding reveals there is no single "AI opinion." Different language models often made different judgments about the same person, meaning the choice of AI system could quietly shape real-world outcomes [1]. "Which model you use really matters," Lerman noted [2]. This variability becomes particularly concerning as large language models increasingly handle job candidate screening, creditworthiness assessments, medical recommendations, and organizational decisions. While AI can mimic the structure of human reasoning in a consistent way, it does so with rigid criteria and less nuanced evaluation patterns, making its biases harder to detect [2].

Understanding How Machines Trust Us

"These systems are powerful," said Dover. "They can model aspects of human reasoning in a consistent way. But they are not human, and we should not assume they see people the way we do" [1]. The researchers emphasize that their findings serve not as a warning against AI, but as a call for awareness as these tools evolve from assistants to decision-makers. The question facing organizations deploying AI at scale is no longer whether we trust machines, but whether we understand how they trust us [2]. As AI systems move into more critical decision-making roles, understanding their structured approach to evaluating people becomes essential for detecting and mitigating biases that may be stronger and more predictable than those of humans.
