New AGI Benchmark Stumps Leading AI Models, Highlighting Gap in General Intelligence

Curated by THEOUTPOST

On Tue, 25 Mar, 4:03 PM UTC

5 Sources


The Arc Prize Foundation introduces ARC-AGI-2, a challenging new test for artificial general intelligence that current AI models, including those from OpenAI and Google, are struggling to solve. The benchmark emphasizes efficiency and adaptability, revealing limitations in current AI capabilities.

Arc Prize Foundation Introduces Challenging New AGI Benchmark

The Arc Prize Foundation, a nonprofit co-founded by prominent AI researcher François Chollet, has unveiled a new benchmark test called ARC-AGI-2, designed to measure the general intelligence of leading AI models [1]. The test has proven significantly more challenging than its predecessor, with most current AI models struggling to achieve even single-digit scores.

Performance of Leading AI Models

The results of the ARC-AGI-2 test have been eye-opening:

  • OpenAI's o3-low model, which previously scored 75.7% on ARC-AGI-1, managed only 4% on the new test [2].
  • "Reasoning" AI models like OpenAI's o1-pro and DeepSeek's R1 scored between 1% and 1.3% [1].
  • Powerful non-reasoning models including GPT-4.5, Claude 3.7 Sonnet, and Gemini 2.0 Flash scored around 1% [1][3].
  • Pure language models (LLMs) scored 0% on the benchmark [5].

In stark contrast, a human panel achieved an average score of 60% on the test, with some individuals solving all tasks perfectly [1][5].

Key Features of ARC-AGI-2

The new benchmark introduces several important changes:

  1. Efficiency Metric: Unlike its predecessor, ARC-AGI-2 considers the cost and computational resources required to complete tasks [1][2].
  2. Adaptability: The test focuses on AI models' ability to acquire new skills efficiently and apply them to unfamiliar problems [3].
  3. Visual Pattern Recognition: Tasks involve identifying patterns in colored squares and generating correct "answer" grids [1].
  4. Contextual Rule Application: Models must interpret symbols beyond visual patterns and apply different rules based on context [5].
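To make the grid-puzzle format concrete, here is a toy sketch in the spirit of an ARC-style task. This is a hypothetical example, not an actual ARC-AGI-2 puzzle: grids are small matrices of color indices, the solver sees demonstration input/output pairs, and must infer the transformation rule (here, simply transposing the grid).

```python
def transpose(grid):
    """One candidate rule: reflect the grid along its main diagonal."""
    return [list(row) for row in zip(*grid)]

# Demonstration pair: input grid -> expected "answer" grid.
# Integers stand in for colors, as in ARC-style puzzles.
demo_input = [
    [0, 1, 0],
    [2, 2, 0],
]
demo_output = [
    [0, 2],
    [1, 2],
    [0, 0],
]

# A solver that has correctly inferred the rule reproduces the answer grid.
assert transpose(demo_input) == demo_output
```

Real ARC-AGI-2 tasks are far harder: the rule is unknown, differs per task, and may depend on context, which is exactly the kind of on-the-fly skill acquisition the benchmark is probing.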

Implications for AGI Development

The poor performance of leading AI models on ARC-AGI-2 highlights the significant gap between current AI capabilities and human-level general intelligence. Greg Kamradt, co-founder of the Arc Prize Foundation, emphasized that intelligence is not solely about problem-solving ability but also about the efficiency of acquiring and deploying new skills [1].

This benchmark challenges the notion that brute-force computing power alone can lead to AGI. It suggests that fundamental advancements in AI architecture and learning approaches may be necessary to achieve human-like adaptability and efficiency [2][4].

Debate and Criticism

While many in the tech industry welcome new benchmarks to measure AI progress, some experts question the framing of these tests. Catherine Flick from the University of Staffordshire argues that performing well on such benchmarks should not be seen as a major step towards AGI, as they only assess an AI's ability to complete specific tasks rather than demonstrate true general intelligence [2].

Future of AGI Testing

The introduction of ARC-AGI-2 raises questions about the future of AGI evaluation. Joseph Imperial from the University of Bath suggests that future iterations might incorporate additional metrics, such as the minimum number of humans required to solve tasks, alongside performance and efficiency measures [2].

As the debate over AGI continues, the Arc Prize Foundation has announced a new contest challenging developers to reach 85% accuracy on the ARC-AGI-2 test while spending only $0.42 per task [1]. This competition aims to drive innovation in both AI performance and efficiency, potentially bringing us closer to the elusive goal of artificial general intelligence.
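The contest's bar is two-dimensional: a submission must clear both an accuracy threshold and a cost ceiling. A minimal sketch of that check, using the 85% and $0.42 figures from the article (the function name and per-task record format are assumptions, and the article does not specify whether the $0.42 is a per-task cap or an average; the sketch treats it as an average):

```python
def meets_contest_bar(results, target_accuracy=0.85, max_avg_cost=0.42):
    """Check a run against the contest's two criteria.

    results: list of (solved: bool, cost_usd: float), one entry per task.
    Returns True only if accuracy >= target AND average cost <= ceiling.
    """
    accuracy = sum(solved for solved, _ in results) / len(results)
    avg_cost = sum(cost for _, cost in results) / len(results)
    return accuracy >= target_accuracy and avg_cost <= max_avg_cost

# Example: 9 of 10 tasks solved at $0.30 each clears both bars.
print(meets_contest_bar([(True, 0.30)] * 9 + [(False, 0.30)]))  # True
```

The point of coupling the two criteria is that neither raw accuracy (buyable with brute-force compute) nor cheapness alone demonstrates the efficient skill acquisition the foundation is targeting.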

