OpenAI's o3 Model Scores Lower on FrontierMath Benchmark Than Initially Claimed

OpenAI's o3 AI model scores 10% on the FrontierMath benchmark, significantly lower than the 25% initially claimed. The discrepancy raises questions about AI benchmark transparency and testing practices in the industry.

OpenAI's o3 Model Underperforms on FrontierMath Benchmark

OpenAI's recently released o3 AI model has scored significantly lower on the FrontierMath benchmark than initially claimed, sparking discussions about transparency and testing practices in the AI industry. Epoch AI, the research institute behind FrontierMath, reported that the publicly available o3 model achieved a score of around 10% on its challenging mathematical test, far below the 25% OpenAI had suggested during the model's unveiling in December [1].

The Discrepancy Explained

The discrepancy between OpenAI's claim and Epoch AI's findings can be attributed to several factors:

  1. Different model versions: The ARC Prize Foundation confirmed that the released o3 model differs from the version tested in December 2024, having been "tuned for chat/product use" [2].

  2. Compute settings: OpenAI's chief research officer, Mark Chen, had mentioned achieving over 25% in "aggressive test-time compute settings," suggesting that the publicly released model may use less computational power [1] (see the sketch after this list).

  3. Testing methodology: Epoch AI noted that their testing setup likely differs from OpenAI's, and they used an updated version of FrontierMath for their evaluations [1].
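
To see why test-time compute settings can move a benchmark score this much, consider a simplified model of repeated sampling: if a model solves a given problem with probability p per attempt, and the evaluation harness counts a problem as solved when any of k independent attempts succeeds, the expected score grows quickly with k. This is a minimal illustrative sketch under an independence assumption; the function is ours and it does not describe OpenAI's or Epoch AI's actual evaluation protocol.

```python
def expected_score(p: float, k: int) -> float:
    """Toy model: chance that at least one of k independent attempts
    succeeds, given per-attempt success probability p."""
    return 1.0 - (1.0 - p) ** k

# With a 10% per-attempt solve rate, more attempts raise the measured score:
for k in (1, 2, 4, 8):
    print(f"k={k}: expected score = {expected_score(0.10, k):.1%}")
# k=1: 10.0%, k=2: 19.0%, k=4: 34.4%, k=8: 57.0%
```

Under this toy model, a 10% per-attempt rate already exceeds 25% with three attempts per problem (1 - 0.9^3 ≈ 27%), which is one way differing compute budgets alone could account for a gap of this size.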

Industry-wide Implications

This incident highlights broader issues within the AI industry:

  1. Benchmark reliability: The discrepancy underscores the need for caution when interpreting AI benchmarks, especially those used to promote commercial products [2].

  2. Transparency concerns: The AI community has raised questions about OpenAI's transparency and model testing practices [1].

  3. Recurring controversies: Similar benchmarking controversies have recently affected other AI companies, including xAI and Meta, suggesting an industry-wide issue [2].

OpenAI's Response and Future Plans

While the lower score on FrontierMath may seem disappointing, OpenAI has addressed the situation:

  1. Wenda Zhou from OpenAI explained that the production o3 model is "more optimized for real-world use cases" and speed, which may result in benchmark disparities [2].

  2. OpenAI plans to release a more powerful variant, o3-pro, in the coming weeks, which may achieve higher benchmark scores [1].

  3. Interestingly, OpenAI's o3-mini-high and o4-mini models already outperform o3 on FrontierMath, suggesting continued progress in their AI development [1].

The Importance of FrontierMath

FrontierMath has gained significance in the AI community due to its challenging nature:

  1. The test is considered tamper-proof: it was developed by over 70 mathematicians and consists of new, unpublished problems [3].

  2. Prior to o3, no AI model had solved more than 9% of FrontierMath questions in a single attempt [3] (see the pass@k sketch after this list).

  3. Despite the lower-than-claimed score, o3's 10% achievement still represents a notable advancement in AI reasoning capabilities [3].
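
As context for point 2 above, benchmark reports often quantify "single attempt" performance as pass@1, using the unbiased pass@k estimator introduced with OpenAI's HumanEval benchmark (Chen et al., 2021): from n sampled attempts per problem of which c are correct, it estimates the probability that at least one of k randomly drawn attempts is correct. Whether Epoch AI scores FrontierMath with exactly this estimator is not stated in the article; the snippet below is a general illustration.

```python
import math

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator (Chen et al., 2021): probability that
    at least one of k attempts drawn from n samples, c of them correct,
    is correct: 1 - C(n-c, k) / C(n, k)."""
    if n - c < k:
        return 1.0  # fewer than k failures exist, so success is guaranteed
    return 1.0 - math.comb(n - c, k) / math.comb(n, k)

# Example: 10 sampled attempts on a problem, 1 of them correct.
print(pass_at_k(10, 1, 1))  # 0.1 (pass@1)
print(pass_at_k(10, 1, 5))  # 0.5 (pass@5)
```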

As the AI industry continues to evolve rapidly, this incident serves as a reminder of the complexities involved in benchmarking and the importance of transparent communication about AI model capabilities and limitations.
