Nvidia unveils Rubin chip with 5x AI performance as demand for computing power skyrockets


Jensen Huang announced that Nvidia's next-generation Vera Rubin platform is in full production, delivering five times the AI computing performance of the previous generation of chips. The chipmaker faces mounting competition from rivals and its own customers while navigating strong demand from China for H200 chips amid ongoing export license considerations.

Nvidia Rubin Chip Enters Full Production With Major Performance Gains

Nvidia CEO Jensen Huang revealed at CES in Las Vegas that the company's next-generation data center processors, the Vera Rubin platform, are in full production and on track for deployment in the second half of 2026 [1][3]. The advancement of artificial intelligence has pushed Nvidia to deliver AI chips that are 3.5 times better at training and five times better at running AI software than Blackwell, its predecessor [1]. The flagship server will contain 72 graphics units and 36 new central processors, with systems capable of being strung together into pods containing more than 1,000 Rubin chips [3][5]. These configurations could improve the efficiency of generating tokens, the fundamental unit of AI systems, by 10 times [5].

Source: Market Screener


Skyrocketing Demand for GPUs Driven by AI Computing Power Needs

Jensen Huang emphasized that AI computing power requirements are experiencing explosive growth, with demand for Nvidia GPUs increasing dramatically as models scale up by a factor of 10 annually [4]. "The amount of computation necessary for AI is skyrocketing. The demand for Nvidia GPUs is skyrocketing," Huang stated, describing an "intense race" to reach the next frontier of technology [4]. The growing complexity and uptake of artificial intelligence software are placing strain on existing computing resources, creating the need for substantially more capacity [1]. Nvidia emphasized that Rubin-based systems will be cheaper to operate than Blackwell versions because they'll return the same results using fewer components [1]. Microsoft, Oracle, Amazon, and Google are expected to be among the first data center operators to deploy the new hardware in the second half of 2026 [1][5].

Source: Bloomberg


China Demand for H200 Amid Export License Uncertainty

Nvidia faces strong China demand for H200 chips, with the Trump administration considering whether to approve license applications for shipments to the country [1][2]. Chief Financial Officer Colette Kress confirmed that license applications have been submitted and that the government is deciding what to do with them [1]. The H200, predecessor to the current Blackwell chip, is in high demand in China, which has alarmed China hawks across the US political spectrum [2][5]. Regardless of the level of license approval, Kress said Nvidia has enough supply to serve customers in China without impacting its ability to ship to customers elsewhere in the world [1]. The situation remains complex: Nvidia would also need China's government to allow companies in the country to purchase and use the American products, as Beijing has previously discouraged government agencies and companies from using an earlier design called the H20 [1].

Increasing Competition in AI Accelerator Market

Nvidia confronts mounting pressure from both traditional rivals and its own customers in the AI accelerator market. Advanced Micro Devices and customers such as Google are developing their own chips to challenge Nvidia's market leadership [2][3]. Google works closely with Meta Platforms and others to chip away at Nvidia's AI stronghold [5]. Less than two weeks before the CES announcement, Nvidia acquired talent and chip technology from the startup Groq, including executives who were instrumental in helping Google design its own AI chips [2][5]. Huang told financial analysts the Groq deal "won't affect our core business" but could result in new products that expand Nvidia's lineup [5]. While Nvidia still dominates the market for AI training, it faces far more competition in delivering the fruits of those models to the hundreds of millions of users of chatbots and other technologies [3]. Nvidia also touted a new generation of networking switches with co-packaged optics technology, competing with offerings from Broadcom and Cisco Systems [3][5].

Expanding AI Applications Beyond Data Centers

Nvidia is pushing software and hardware aimed at broadening the adoption of generative AI across the economy, including robotics, health care, and heavy industry [1]. Huang highlighted new software called Alpamayo that can help self-driving cars decide which path to take and leave a paper trail for engineers to review afterward [3][5]. The company will open-source both the models and the data used to train them so automakers can evaluate them and verify how the models were built [3]. Much of Huang's speech focused on how well the new chips would serve chatbots and other AI applications to end users, including a new layer of storage technology called context memory storage aimed at helping chatbots respond more quickly to long questions and conversations [3][5]. For now, the majority of spending on Nvidia-based computers comes from the capital expenditure budgets of a handful of customers, including Microsoft, Google Cloud, and Amazon AWS [1].

Source: New York Post


Today's Top Stories

TheOutpost.ai

Your Daily Dose of Curated AI News

Don’t drown in AI news. We cut through the noise - filtering, ranking and summarizing the most important AI news, breakthroughs and research daily. Spend less time searching for the latest in AI and get straight to action.

© 2026 Triveous Technologies Private Limited
Instagram logo
LinkedIn logo