Study Reveals AI Language Models Learn Like Humans, But Lack Abstract Thought

A new study finds that large language models (LLMs) such as GPT-J generate language through analogy rather than fixed grammatical rules, much as humans do. Unlike humans, however, LLMs do not form mental dictionaries and instead rely heavily on memorized examples.

AI Language Models Mirror Human Learning, But With Key Differences

A groundbreaking study led by researchers from the University of Oxford and the Allen Institute for AI (AI2) has revealed that large language models (LLMs), the AI systems powering chatbots like ChatGPT, learn and generalize language patterns in a surprisingly human-like manner. The research, published in the Proceedings of the National Academy of Sciences, challenges prevailing assumptions about how these AI models process language [1].

Analogical Reasoning Over Grammatical Rules

The study focused on GPT-J, an open-source LLM developed by EleutherAI in 2021. Researchers compared its performance to human judgments on a common English word-formation pattern: turning adjectives into nouns by adding the "-ness" or "-ity" suffix [2].
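
For readers curious about what such a probe could look like in practice, here is a minimal sketch that scores the two candidate nominalizations of a made-up adjective under the publicly released GPT-J checkpoint. The carrier sentence, the naive nominalization rule, and the scoring procedure are assumptions made for illustration; this is not the study's actual code.

```python
# Minimal sketch (not the study's code): compare how strongly GPT-J prefers the
# "-ness" vs. "-ity" nominalization of an adjective by summing the token
# log-probabilities of each candidate inside an assumed carrier sentence.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "EleutherAI/gpt-j-6B"  # open-source GPT-J checkpoint on the Hugging Face hub
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def total_logprob(text: str) -> float:
    """Sum of log-probabilities GPT-J assigns to the tokens of `text`."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)
    # out.loss is the mean negative log-likelihood per predicted token,
    # so multiply by the number of predicted tokens and negate.
    return -out.loss.item() * (ids.shape[1] - 1)

def nominalize(adjective: str, suffix: str) -> str:
    """Naive concatenation; drops a final 'e' before '-ity' (e.g. 'cormasive' -> 'cormasivity')."""
    stem = adjective[:-1] if suffix == "ity" and adjective.endswith("e") else adjective
    return stem + suffix

def preferred_suffix(adjective: str) -> str:
    """Return whichever suffix GPT-J scores higher in an assumed carrier sentence."""
    scores = {
        suffix: total_logprob(f"Everyone noticed the {nominalize(adjective, suffix)} of the plan.")
        for suffix in ("ness", "ity")
    }
    return max(scores, key=scores.get)

print(preferred_suffix("friquish"))   # made-up adjective cited in the study
print(preferred_suffix("cormasive"))  # made-up adjective cited in the study
```

Relative scores of this kind could then be set against human suffix preferences for the same nonce adjectives, which is the spirit of the comparison the researchers describe.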

Key findings include:

  1. LLMs rely on analogy rather than strict grammatical rules when generating language.
  2. When faced with made-up adjectives like "friquish" or "cormasive," the AI model based its choices on similarities to words it had encountered during training.
  3. The AI's behavior closely resembled human analogical reasoning, challenging the assumption that LLMs primarily infer rules from training data.

Frequency and 'Memory' in AI Language Processing

The research uncovered subtle influences of word frequency in the AI's training data:

  1. The LLM's responses to nearly 50,000 real English adjectives matched statistical patterns in its training data with high precision.
  2. The AI behaved as if it had formed a memory trace for every word encountered during training.
  3. When dealing with unfamiliar words, the AI appeared to ask itself, "What does this remind me of?" [1]

Key Differences Between Human and AI Language Processing

While the study revealed similarities between human and AI language processing, it also highlighted crucial differences:

  1. Humans develop a mental dictionary of meaningful words in their language, regardless of frequency.
  2. People easily recognize non-existent words and make analogical generalizations based on known words in their mental dictionaries.
  3. LLMs, in contrast, generalize directly over all specific instances in the training set without unifying them into a single dictionary entry [2], as the toy sketch below illustrates.
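
To make the distinction concrete, here is a toy contrast of our own construction, not the paper's model: a "human-like" learner votes once per known dictionary entry, while an "LLM-like" learner weights every training instance, which also makes it sensitive to word frequency in the way described above. The word list, frequencies, and similarity measure are all invented for illustration.

```python
# Toy illustration (invented example, not the paper's model): predict a suffix for a
# new adjective by analogy to known adjectives, under two weighting schemes.
from collections import Counter

# Hypothetical lexicon: (adjective, attested suffix, made-up corpus frequency).
KNOWN = [
    ("happy",   "ness", 900),
    ("dark",    "ness", 700),
    ("active",  "ity",  400),
    ("curious", "ity",   50),
]

def ending_overlap(a: str, b: str) -> int:
    """Crude similarity: number of matching characters counted from the end of each word."""
    n = 0
    while n < min(len(a), len(b)) and a[-1 - n] == b[-1 - n]:
        n += 1
    return n

def predict(new_adj: str, instance_weighted: bool) -> str:
    """Vote for a suffix by analogy to known adjectives.

    instance_weighted=True  -> LLM-like: every training occurrence contributes,
                               so high-frequency words dominate the analogy.
    instance_weighted=False -> human-like: one vote per dictionary entry,
                               regardless of how often the word was seen.
    """
    votes = Counter()
    for adj, suffix, freq in KNOWN:
        weight = ending_overlap(new_adj, adj)
        if instance_weighted:
            weight *= freq
        votes[suffix] += weight
    return votes.most_common(1)[0][0]

print(predict("cormasive", instance_weighted=True))   # instance-based analogy
print(predict("cormasive", instance_weighted=False))  # dictionary-style analogy
```

Both learners reason by analogy; the difference the study points to is whether the analogy runs over a compact mental dictionary of word types or over every memorized instance in the training data.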

Implications for AI Development and Understanding

Janet Pierrehumbert, Professor of Language Modelling at Oxford University and senior author of the study, noted, "Although LLMs can generate language in a very impressive manner, it turns out that they do not think as abstractly as humans do. This probably contributes to the fact that their training requires so much more language data than humans need to learn a language" [1].

Dr. Valentin Hofmann, co-lead author from AI2 and the University of Washington, emphasized the study's significance in bridging linguistics and AI research. He stated, "The findings give us a clearer picture of what's going on inside LLMs when they generate language, and will support future advances in robust, efficient, and explainable AI" [2].

This research provides valuable insights into the inner workings of AI language models and highlights areas for potential improvement in making AI systems more efficient and human-like in their language processing capabilities.
