AI Mirrors Human Biases: ChatGPT Exhibits Similar Decision-Making Flaws, Study Reveals

A new study finds that ChatGPT, while excelling at logic and math, displays many of the same cognitive biases as humans when making subjective decisions, raising concerns about AI's reliability in high-stakes decision-making processes.

AI Exhibits Human-Like Biases in Decision-Making

A groundbreaking study published in the INFORMS journal Manufacturing & Service Operations Management has revealed that ChatGPT, one of the most advanced AI models, exhibits many of the same cognitive biases as humans when making subjective decisions [1]. This finding challenges the notion that AI can consistently make more objective and unbiased decisions than humans.

Study Methodology and Key Findings

Researchers conducted 18 different bias tests on ChatGPT, examining its decision-making processes across various scenarios. The results showed that in nearly half of the tests, ChatGPT made mistakes similar to those made by humans [2].

Key biases observed in ChatGPT include:

  1. Overconfidence
  2. Risk aversion
  3. Gambler's fallacy
  4. Ambiguity aversion
  5. Confirmation bias
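As a hypothetical illustration of how one such bias might be probed (this is an assumed design, not the study's actual protocol), a gambler's-fallacy check can score whether a model's stated probability shifts after a streak of identical outcomes, when a fair coin makes the correct answer 0.5 regardless of history:

```python
# Sketch of a gambler's-fallacy probe (assumed design, not the authors'
# actual test). An unbiased responder should give the same probability of
# heads whether or not a streak of heads just occurred.

def gamblers_fallacy_probe(predict_next_heads):
    """predict_next_heads(streak_len) -> probability the responder assigns
    to heads after `streak_len` consecutive heads. Returns the shift the
    streak caused; 0 means no bias, negative means 'tails is due'."""
    baseline = predict_next_heads(0)
    after_streak = predict_next_heads(5)
    return after_streak - baseline

# Stub responders standing in for real model calls (hypothetical):
unbiased = lambda streak: 0.5                 # ignores the streak
biased = lambda streak: 0.5 - 0.04 * streak   # drifts toward "tails is due"

print(gamblers_fallacy_probe(unbiased))  # 0.0 -> no bias
print(gamblers_fallacy_probe(biased))    # negative -> gambler's fallacy
```

In a real test the stubs would be replaced by calls to the model under evaluation, with the prompt varying only the described streak.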

Interestingly, ChatGPT avoided some typical human errors, such as base-rate neglect and the sunk-cost fallacy [1].

Evolution of AI Models and Bias

The study also compared different versions of ChatGPT, revealing an intriguing trend. Newer versions, such as GPT-4, showed improved accuracy in mathematical and logical problems but displayed stronger biases in scenarios requiring subjective judgment [3].

Implications for AI in Decision-Making Processes

These findings raise significant concerns about relying on AI for high-stakes decisions in various fields, including:

  1. Job hiring
  2. Loan approvals
  3. Insurance rates
  4. Government policy-making

Yang Chen, lead author and assistant professor at Western University, warns, "As AI learns from human data, it may also think like a human - biases and all" [2].

The Need for Oversight and Ethical Guidelines

The researchers emphasize the importance of treating AI like an employee who makes important decisions, requiring oversight and ethical guidelines. Samuel Kirshner of UNSW Business School cautions, "If left unchecked, it might not fix decision-making problems - it could actually make them worse" [3].

Future Directions and Recommendations

To address these concerns, the study's authors recommend:

  1. Regular audits of AI-driven decisions
  2. Refining AI systems to reduce biases
  3. Evaluating different AI models for specific decision-making use cases
  4. Continuous re-evaluation to avoid unexpected outcomes
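The first and fourth recommendations can be made concrete with a simple harness: re-run a fixed set of benchmark cases against the model and track outcome rates over time, so drift between versions becomes visible. The sketch below uses stubbed decision functions as stand-ins (hypothetical; a real audit would call the deployed model):

```python
# Minimal audit sketch (assumed design): re-run fixed benchmark cases
# against a decision function and report the approval rate per case.
# Comparing reports across model versions surfaces behavioral drift.

def audit(decide, cases, runs=25):
    """Return {case: fraction of runs in which `decide` approved it}."""
    return {
        case: sum(decide(case) == "approve" for _ in range(runs)) / runs
        for case in cases
    }

# Stubbed decision functions standing in for two model versions
# (hypothetical thresholds on an applicant score):
v1 = lambda score: "approve" if score >= 600 else "deny"
v2 = lambda score: "approve" if score >= 640 else "deny"  # stricter

benchmark = [580, 620, 660]
print(audit(v1, benchmark))  # {580: 0.0, 620: 1.0, 660: 1.0}
print(audit(v2, benchmark))  # {580: 0.0, 620: 0.0, 660: 1.0}
```

Here the borderline case (620) flips between versions, which is exactly the kind of surprise the re-evaluation recommendation is meant to catch.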

Tracy Jenkin of Queen's University notes, "Managers must evaluate how different models perform on their decision-making use cases and regularly re-evaluate to avoid surprises. Some use cases will need significant model refinement" [1].

As AI continues to play an increasingly significant role in decision-making processes across various sectors, understanding and mitigating these biases will be crucial for ensuring that AI truly improves decision-making rather than simply replicating human flaws at scale.

TheOutpost.ai


© 2025 Triveous Technologies Private Limited