Nvidia's Blackwell GPUs Dominate Latest MLPerf AI Training Benchmarks

Reviewed by Nidhi Govil


Nvidia's new Blackwell GPUs show significant performance gains in AI model training, particularly for large language models, according to the latest MLPerf benchmarks. The results highlight Nvidia's continued dominance in AI hardware.

Nvidia's Blackwell GPUs Lead in MLPerf Training Benchmarks

Nvidia has once again demonstrated its dominance in AI hardware with its latest Blackwell GPUs, posting significant performance gains in the most recent MLPerf training benchmarks. The results, released by MLCommons, a nonprofit consortium of more than 125 members, highlight Nvidia's continued leadership in AI model training, particularly for large language models (LLMs) [1].

Benchmark Performance and Improvements

The MLPerf Training v5.0 benchmarks included six tests covering various AI tasks, with the most resource-intensive being the LLM pre-training task. This round featured Meta's Llama 3.1 405B model, which is more than twice the size of the previously used GPT-3 and has a context window four times larger [1].
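The size comparison checks out with a quick calculation. The parameter counts below are public figures; the benchmark sequence lengths are an assumption about the test configuration, not stated in the article:

```python
# Sanity check of the reported model-size and context-window comparisons.
gpt3_params_b = 175    # GPT-3 parameter count, in billions (public figure)
llama_params_b = 405   # Llama 3.1 405B parameter count, in billions

size_ratio = llama_params_b / gpt3_params_b
print(f"Size ratio: {size_ratio:.2f}x")  # → Size ratio: 2.31x ("more than twice")

# Assumed benchmark sequence lengths; the 4x figure follows from these.
gpt3_ctx = 2048
llama_ctx = 8192
ctx_ratio = llama_ctx // gpt3_ctx
print(f"Context ratio: {ctx_ratio}x")    # → Context ratio: 4x
```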

Key performance highlights include:

  1. Nvidia's Blackwell GPUs achieved the fastest training times across all six benchmarks [3].
  2. On the new Llama 3.1 405B pre-training benchmark, Blackwell delivered 2.2x greater performance than the previous-generation architecture at the same scale [4].
  3. For the Llama 2 70B LoRA fine-tuning benchmark, Nvidia DGX B200 systems with eight Blackwell GPUs delivered 2.5x the performance of the previous round's submission with the same number of GPUs [3].
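MLPerf-style speedup figures like the 2.2x and 2.5x above compare time-to-train at the same system scale: speedup = previous time / new time. The times below are made-up placeholders to illustrate the arithmetic, not actual submission results:

```python
def speedup(prev_minutes: float, new_minutes: float) -> float:
    """Relative performance of a new submission vs. a previous one
    at the same GPU count: how many times faster the new run is."""
    return prev_minutes / new_minutes

# Hypothetical times chosen to reproduce a 2.2x-style headline figure.
print(speedup(55.0, 25.0))  # → 2.2
```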

Scaling and Efficiency

The benchmarks also demonstrated impressive scaling capabilities:

  1. In the fastest result, 2,496 Blackwell chips completed the training test in 27 minutes [2].
  2. Achieving a faster time required more than three times as many of Nvidia's previous-generation chips [5].
  3. Performance scaling with additional GPUs was notably close to linear, reaching 90% of ideal scaling [1].
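The 90%-of-ideal figure can be understood as measured speedup divided by linear speedup when the GPU count grows. The GPU counts and times below are illustrative placeholders, not the actual MLPerf submissions:

```python
def scaling_efficiency(base_gpus: int, base_minutes: float,
                       big_gpus: int, big_minutes: float) -> float:
    """Measured speedup relative to ideal (linear) scaling."""
    ideal_speedup = big_gpus / base_gpus        # linear-scaling target
    actual_speedup = base_minutes / big_minutes  # measured improvement
    return actual_speedup / ideal_speedup

# Hypothetical: 4x the GPUs cuts training time by only 3.6x → 90% efficiency.
print(scaling_efficiency(512, 108.0, 2048, 30.0))  # → 0.9
```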

Technological Advancements

Nvidia's performance improvements are attributed to several factors:

  1. The NVL72 package, which efficiently connects 36 Grace CPUs and 72 Blackwell GPUs [1].
  2. Advancements in the Blackwell architecture, including high-density liquid-cooled racks and 13.4 TB of coherent memory per rack [3].
  3. Fifth-generation Nvidia NVLink and NVLink Switch interconnect technologies for scale-up [3].
  4. Nvidia Quantum-2 InfiniBand networking for scale-out [3].

Industry Implications and Future Outlook

The benchmark results underscore Nvidia's vision for "AI factories" – large-scale computing infrastructures designed to train and deploy next-generation AI applications [3]. This concept aligns with the industry trend of creating smaller, more efficient GPU clusters for specific AI training tasks, as noted by Chetan Kapoor, chief product officer at CoreWeave [2].

While Nvidia maintains its lead, competitors are not far behind. AMD's latest Instinct MI325X GPU demonstrated performance on par with Nvidia's H200 in the LLM fine-tuning benchmark, suggesting AMD remains roughly one generation behind Nvidia [1].

As the AI hardware landscape continues to evolve, these benchmarks provide crucial insights into the capabilities of different chip architectures and their potential impact on the development of increasingly sophisticated AI models and applications.
