Study Reveals GPT Models Struggle with Flexible Reasoning, Highlighting Limitations in AI Cognition

A new study from the University of Amsterdam and the Santa Fe Institute shows that while GPT models perform well on standard analogy tasks, they struggle with variations of those tasks, indicating limitations in AI's reasoning capabilities compared to humans.

GPT Models Struggle with Flexible Reasoning

A new study conducted by researchers from the University of Amsterdam and the Santa Fe Institute has shed light on the limitations of artificial intelligence (AI) in replicating human-like reasoning. The research, published in Transactions on Machine Learning Research, compared the performance of GPT models with that of humans on analogical reasoning tasks.

Understanding Analogical Reasoning

Analogical reasoning, a fundamental aspect of human cognition, involves drawing comparisons between different concepts based on shared similarities. This ability is crucial for understanding the world and making decisions. For instance, recognizing that "cup is to coffee as bowl is to soup" demonstrates this type of reasoning.

Study Methodology and Findings

The study, led by Martha Lewis from the Institute for Logic, Language and Computation at the University of Amsterdam and Melanie Mitchell from the Santa Fe Institute, examined the performance of GPT models and humans on three types of analogy problems. Importantly, the researchers also tested how well both groups handled subtle modifications to these problems.

GPT Models' Performance on Standard vs. Modified Tasks

While GPT models showed impressive capabilities in solving standard analogy problems, they struggled significantly when faced with variations of these tasks. This contrast was particularly evident in several areas:

  1. Digit Matrices: GPT models' performance dropped noticeably when the position of the missing number was altered, whereas humans had no such difficulty (see the sketch after this list).

  2. Story Analogies: GPT-4 showed a bias towards selecting the first given answer as correct, a tendency not observed in human participants. The AI also had more trouble than humans when key story elements were reworded.

  3. Simple Analogy Tasks: GPT models' performance declined when these tasks were modified, while humans maintained consistent results.
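
To make the "standard vs. modified" comparison concrete, here is a minimal illustrative sketch. It builds a toy digit-matrix prompt in two forms: a standard version with the blank in the final cell, and a variant with the blank moved to a different position, the kind of change the study reports tripping up GPT models but not humans. The matrices, prompt wording, and the commented-out query_model() call are assumptions for illustration, not the authors' actual materials.

```python
# Illustrative sketch only: toy digit-matrix prompts in the spirit of the
# study's standard-vs-modified comparison. The matrices, wording, and the
# query_model() placeholder are assumptions, not the paper's materials.

def matrix_prompt(rows):
    """Render a 3x3 digit matrix as text, using '?' for the missing cell."""
    grid = "\n".join(
        " ".join("?" if v is None else str(v) for v in row) for row in rows
    )
    return f"Fill in the missing number in this matrix:\n{grid}\nAnswer:"

# Standard form: the blank sits in the conventional bottom-right position.
standard = [[1, 2, 3],
            [2, 3, 4],
            [3, 4, None]]

# Modified form: the same progression, but the blank is moved to another cell.
modified = [[1, 2, 3],
            [2, None, 4],
            [3, 4, 5]]

for name, rows in [("standard", standard), ("modified", modified)]:
    prompt = matrix_prompt(rows)
    print(f"--- {name} ---\n{prompt}\n")
    # answer = query_model(prompt)  # hypothetical LLM call; one would compare
    #                               # accuracy across the two prompt forms
```

Comparing accuracy on the two prompt forms is the essence of the test: a solver that has abstracted the underlying pattern should be unaffected by where the blank appears.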

Implications for AI Understanding and Generalization

The research challenges the assumption that AI models like GPT-4 can reason in ways comparable to human cognition. Lewis explains, "This suggests that AI models often reason less flexibly than humans and their reasoning is less about true abstract understanding and more about pattern matching."

Critical Considerations for AI Application

These findings raise important considerations for the deployment of AI in critical decision-making domains such as education, law, and healthcare. While AI remains a powerful tool, the study emphasizes that it is not yet a suitable replacement for human reasoning and thinking.

Future of AI and Human Cognition

The research underscores the need for continued development in AI to achieve more robust and flexible reasoning capabilities. As AI increasingly integrates into various aspects of society, understanding its limitations and strengths becomes crucial for responsible implementation and development.
