Study Reveals AI Language Models Learn Like Humans, But Lack Abstract Thought


A new study finds that large language models (LLMs) like GPT-J generate language through analogy rather than fixed grammatical rules, similar to humans. However, unlike humans, LLMs don't form mental dictionaries and rely heavily on memorized examples.


AI Language Models Mirror Human Learning, But With Key Differences

A groundbreaking study led by researchers from the University of Oxford and the Allen Institute for AI (AI2) has revealed that large language models (LLMs), the AI systems powering chatbots like ChatGPT, learn and generalize language patterns in a surprisingly human-like manner. The research, published in the Proceedings of the National Academy of Sciences, challenges prevailing assumptions about how these AI models process language 1.

Analogical Reasoning Over Grammatical Rules

The study focused on GPT-J, an open-source LLM developed by EleutherAI in 2021. Researchers compared its performance to human judgments on a common English word formation pattern: turning adjectives into nouns by adding "-ness" or "-ity" suffixes 2.

Key findings include:

  1. LLMs rely on analogy rather than strict grammatical rules when generating language.
  2. When faced with made-up adjectives like "friquish" or "cormasive," the AI model based its choices on similarities to words it had encountered during training.
  3. The AI's behavior closely resembled human analogical reasoning, challenging the assumption that LLMs primarily infer rules from training data.
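
The analogy mechanism the study describes can be illustrated with a minimal sketch: pick a suffix for a novel adjective by finding the most similar known adjectives and letting their attested suffixes vote. The toy lexicon, the surface-similarity measure (`difflib.SequenceMatcher`), and the `choose_suffix` helper are all illustrative assumptions, not the study's actual model or data.

```python
from difflib import SequenceMatcher

# Toy lexicon of adjectives with their attested nominalizing suffix.
# (Illustrative entries only -- not the study's training data.)
LEXICON = {
    "happy": "-ness", "dark": "-ness", "selfish": "-ness", "squeamish": "-ness",
    "active": "-ity", "passive": "-ity", "curious": "-ity", "sensitive": "-ity",
}

def similarity(a, b):
    """Surface similarity between two word forms, in [0, 1]."""
    return SequenceMatcher(None, a, b).ratio()

def choose_suffix(novel_adjective, k=3):
    """Choose a suffix by analogy: find the k known adjectives most
    similar to the novel one, then vote with their suffixes,
    weighting each vote by similarity."""
    neighbors = sorted(LEXICON, key=lambda w: similarity(novel_adjective, w),
                       reverse=True)[:k]
    votes = {}
    for w in neighbors:
        votes[LEXICON[w]] = votes.get(LEXICON[w], 0.0) + similarity(novel_adjective, w)
    return max(votes, key=votes.get)

# Made-up adjectives from the study: "friquish" resembles "-ish" words
# that take "-ness", while "cormasive" resembles "-ive" words that take "-ity".
print(choose_suffix("friquish"))
print(choose_suffix("cormasive"))
```

Under this toy lexicon, "friquish" lands near "squeamish" and "selfish" (so "-ness" wins), while "cormasive" lands near "passive" and "sensitive" (so "-ity" wins) — the "What does this remind me of?" behavior the researchers describe.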

Frequency and 'Memory' in AI Language Processing

The research uncovered subtle influences of word frequency in the AI's training data:

  1. The LLM's responses to nearly 50,000 real English adjectives matched statistical patterns in its training data with high precision.
  2. The AI behaved as if it had formed a memory trace for every word encountered during training.
  3. When dealing with unfamiliar words, the AI appeared to ask itself, "What does this remind me of?" 1

Key Differences Between Human and AI Language Processing

While the study revealed similarities between human and AI language processing, it also highlighted crucial differences:

  1. Humans develop a mental dictionary of meaningful words in their language, regardless of frequency.
  2. People easily recognize non-existent words and make analogical generalizations based on known words in their mental dictionaries.
  3. LLMs, in contrast, generalize directly over all specific instances in the training set without unifying them into a single dictionary entry 2.
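
This type-versus-token contrast can be sketched in a few lines: a human-like learner counts each dictionary entry once, while an LLM-like learner accumulates a trace for every occurrence, so raw frequency skews its generalizations. The toy corpus and counts below are illustrative assumptions, not the study's data.

```python
from collections import Counter

# Toy "training corpus": each list element is one token occurrence.
# (Illustrative data only.)
corpus = (
    ["happy"] * 50 + ["dark"] * 30 + ["selfish"] * 5 +   # "-ness" takers
    ["active"] * 2 + ["curious"] * 2 + ["passive"] * 1   # "-ity" takers
)
suffix_of = {"happy": "-ness", "dark": "-ness", "selfish": "-ness",
             "active": "-ity", "curious": "-ity", "passive": "-ity"}

token_counts = Counter(corpus)

# Token-based generalization (LLM-like): every occurrence leaves a
# memory trace, so frequent words dominate the analogy.
token_votes = Counter()
for word, n in token_counts.items():
    token_votes[suffix_of[word]] += n

# Type-based generalization (human-like): each dictionary entry counts
# once, regardless of how often it appeared.
type_votes = Counter()
for word in token_counts:
    type_votes[suffix_of[word]] += 1

print(token_votes)  # heavily skewed toward "-ness" by raw frequency
print(type_votes)   # balanced: three word types on each side
```

The token-based tally favors "-ness" 85 to 5 purely because of frequency, while the type-based tally is an even 3 to 3 — a compact picture of why the LLM's behavior tracked training-data statistics so closely while humans abstract away from frequency.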

Implications for AI Development and Understanding

Janet Pierrehumbert, Professor of Language Modelling at Oxford University and senior author of the study, noted, "Although LLMs can generate language in a very impressive manner, it turns out that they do not think as abstractly as humans do. This probably contributes to the fact that their training requires so much more language data than humans need to learn a language" 1.

Dr. Valentin Hofmann, co-lead author from AI2 and the University of Washington, emphasized the study's significance in bridging linguistics and AI research. He stated, "The findings give us a clearer picture of what's going on inside LLMs when they generate language, and will support future advances in robust, efficient, and explainable AI" 2.

This research provides valuable insights into the inner workings of AI language models and highlights areas for potential improvement in making AI systems more efficient and human-like in their language processing capabilities.
