Nvidia CEO Claims AI Chips Outpacing Moore's Law, Promising Faster AI Progress

Nvidia's CEO Jensen Huang asserts that the company's AI chips are advancing faster than Moore's Law, potentially revolutionizing AI capabilities and costs.

Nvidia's AI Chips Surpass Moore's Law

In a bold claim that could reshape the landscape of artificial intelligence and computing, Nvidia CEO Jensen Huang has stated that his company's AI chips are advancing at a pace that surpasses Moore's Law. This assertion, made during a keynote address at CES in Las Vegas and reiterated in an interview with TechCrunch, signals a potential paradigm shift in the progression of AI technology [1][2].

Understanding Moore's Law and Nvidia's Claim

Moore's Law, the principle that has guided computing progress for decades, predicted that the number of transistors on computer chips would roughly double every year (a pace later revised to about every two years), driving exponential growth in performance. In recent years, however, that cadence has slowed [1].
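For reference, Moore's Law can be read as a simple doubling schedule. This illustrative sketch (my arithmetic, not from the article) computes the cumulative multiple each formulation implies over a decade:

```python
# Illustrative arithmetic only: Moore's Law as a doubling schedule.

def moores_law_factor(years: float, doubling_period_years: float) -> float:
    """Cumulative transistor-count/performance multiple after `years`,
    assuming one doubling every `doubling_period_years`."""
    return 2.0 ** (years / doubling_period_years)

# Original 1965 formulation: doubling every year.
print(moores_law_factor(10, 1))  # 1024.0
# Revised 1975 formulation: doubling every two years.
print(moores_law_factor(10, 2))  # 32.0
```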

Huang contends that Nvidia's AI chips are breaking this trend:

"Our systems are progressing way faster than Moore's Law," Huang stated, pointing to Nvidia's latest datacenter superchip, which is purportedly more than 30 times faster for AI inference workloads than its predecessor

1

.

The Secret Behind Nvidia's Rapid Progress

Huang attributes this accelerated progress to Nvidia's comprehensive approach to chip development:

"We can build the architecture, the chip, the system, the libraries, and the algorithms all at the same time," he explained. "If you do that, then you can move faster than Moore's Law, because you can innovate across the entire stack."

1

This strategy has apparently yielded impressive results, with Huang claiming that Nvidia's AI chips today are 1,000 times more advanced than what the company produced a decade ago

2

.
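As a back-of-the-envelope check (my arithmetic, not the article's), a 1,000x gain over ten years works out to an average annual improvement factor of about 2x, i.e. roughly the original one-year Moore's Law doubling rate:

```python
# Average yearly improvement factor implied by a 1,000x gain over 10 years.
total_gain = 1_000
years = 10
annual_factor = total_gain ** (1 / years)  # tenth root of 1,000
print(f"{annual_factor:.3f}x per year")  # 1.995x, i.e. close to doubling yearly
```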

Implications for AI Progress

Rejecting notions of AI progress stalling, Huang outlined three active AI scaling laws: pre-training, post-training, and test-time compute. He emphasized the importance of test-time compute, which allows AI models more time to "think" after each question [1][2].

Nvidia's Latest Innovation: The GB200 NVL72

During his CES keynote, Huang showcased Nvidia's latest datacenter superchip, the GB200 NVL72. This chip is claimed to be 30 to 40 times faster at running AI inference workloads than Nvidia's previous best-selling chip, the H100 [1][2].

The Future of AI Costs and Performance

Huang argues that this performance leap will make expensive AI reasoning models, like OpenAI's o3, more affordable over time. "The direct and immediate solution for test-time compute, both in performance and cost affordability, is to increase our computing capability," Huang stated [1].

While the first versions of AI reasoning models have been expensive to run, Huang expects the trend of plummeting AI model costs to continue, driven by computing breakthroughs from hardware companies like Nvidia [1][2].

Industry Impact and Future Prospects

Nvidia's claims come at a crucial time for the AI industry, with major AI companies such as Google, OpenAI, and Anthropic relying on its chips. As the focus in the tech industry shifts from training to inference, questions have arisen about whether Nvidia's expensive products will maintain their dominance [2].

Huang's assertions suggest that Nvidia is not only keeping pace but setting new standards in inference performance and cost-effectiveness, potentially cementing its position as a leader in the AI hardware space for years to come.

TheOutpost.ai

© 2025 Triveous Technologies Private Limited