2 Sources
[1]
Nvidia challenges Intel, AMD in CPU arena
Nvidia may have made its immense fortune on the back of specialised graphics processing units (GPUs) used to power artificial intelligence servers, but CEO Jensen Huang is increasingly professing his love for the more generalist CPU.
The CPU, or central processing unit, was for decades viewed as the main brain of a computer - a product most associated with Intel INTC.O or sometimes Advanced Micro Devices AMD.O. Huang is fond of saying that where 90 per cent of computing once happened on CPUs and 10 per cent on chips like his, the ratio has flipped in recent years.
But the CPU is now making a comeback - increasingly seen as an equivalent if not better option as AI companies shift from training their models to deploying them - a shift that Nvidia plans to be a big part of.
"We love CPUs as well as GPUs," Huang said on a call with analysts on Wednesday for the company's fourth-quarter results. He assured them that Nvidia was not only ready for the CPU's return to the spotlight, but also that Nvidia's own CPU offerings for data centers, first released in 2023, would outcompete rivals.
Last month, at the Consumer Electronics Show in Las Vegas, Huang also said the number of high-performance Nvidia CPUs being used in data centers would explode and that he wouldn't be surprised "if Nvidia becomes one of the largest CPU makers in the world."
The CPU versus the GPU
CPUs and GPUs have served different computing tasks for decades. CPUs are generalist chips designed to handle any mathematical task a software programmer might throw at them at reasonable speed, given the variety of work. GPUs, by contrast, specialise in carrying out a simpler set of mathematical tasks, but perform those simple calculations in parallel thousands of times at once.
In video games that meant calculating the value of thousands of pixels on a screen many times a second, and in AI work it means multiplying and adding the large matrices of numbers that developers use to represent real-world data such as words and images.
AI companies are increasingly deploying "agents" that can independently carry out tasks such as writing code, sifting through documents and writing research reports - and that sort of computing "is happening more and more, and sometimes primarily, on the CPU," said Ben Bajarin, an analyst at Creative Strategies.
Nvidia's current flagship AI server - called the NVL72 - contains 36 of its CPUs and 72 of its GPUs. Bajarin thinks that could change to a 1-to-1 ratio for so-called agentic work, or that the GPU could even be skipped altogether.
Nvidia out to prove a point
Underscoring its CPU ambitions, Nvidia recently announced a deal with Meta Platforms that will see the Facebook owner use large volumes of its Grace and Vera CPU chips on a standalone basis. That is a relatively new development compared with Nvidia's current AI servers, where each CPU is accompanied by multiple GPUs. It is not that Meta has switched CPU vendors - it is simply securing more suppliers. Days later, AMD also announced a large deal with Meta that included its CPUs, which Meta has been buying for years.
On the call with analysts, Huang argued that Nvidia had taken a fundamentally different approach to CPUs. He explained that Nvidia had minimised the approach, used by Intel and AMD, of breaking chips up into smaller parts, saying the Nvidia CPU was able to carry out many simple tasks in a row with good access to a lot of computer memory. "It is designed to be focused on very high data processing capabilities," Huang said on the call. "And the reason for that is because most of the computing problems that we're interested in are data driven - artificial intelligence being one."
Dave Altavilla, principal analyst at HotTech Vision and Analysis, said Nvidia is aiming to prove that the CPU type once supplied primarily by Intel "is no longer the assumed default foundation of modern compute infrastructure. Instead, it becomes just one architectural option among several." Huang said that Nvidia would have more to disclose about its CPUs at the company's annual developer conference in Silicon Valley next month.
[2]
Nvidia's CEO prepares investors for a renewed battle with Intel, AMD
SAN FRANCISCO, Feb 25 (Reuters) - Nvidia may have made its immense fortune on the back of specialized graphics processing units (GPUs) used to power artificial intelligence servers, but CEO Jensen Huang is increasingly professing his love for the more generalist CPU.
The CPU, or central processing unit, was for decades viewed as the main brain of a computer - a product most associated with Intel or sometimes Advanced Micro Devices. Huang is fond of saying that where 90% of computing once happened on CPUs and 10% on chips like his, the ratio has flipped in recent years.
But the CPU is now making a comeback - increasingly seen as an equivalent if not better option as AI companies shift from training their models to deploying them - a shift that Nvidia plans to be a big part of.
"We love CPUs as well as GPUs," Huang said on a call with analysts on Wednesday for the company's fourth-quarter results. He assured them that Nvidia was not only ready for the CPU's return to the spotlight, but also that Nvidia's own CPU offerings for data centers, first released in 2023, would outcompete rivals.
Last month, at the Consumer Electronics Show in Las Vegas, Huang also said the number of high-performance Nvidia CPUs being used in data centers would explode and that he wouldn't be surprised "if Nvidia becomes one of the largest CPU makers in the world."
THE CPU VERSUS THE GPU
CPUs and GPUs have served different computing tasks for decades. CPUs are generalist chips designed to handle any mathematical task a software programmer might throw at them at reasonable speed, given the variety of work. GPUs, by contrast, specialize in carrying out a simpler set of mathematical tasks, but perform those simple calculations in parallel thousands of times at once.
In video games that meant calculating the value of thousands of pixels on a screen many times a second, and in AI work it means multiplying and adding the large matrices of numbers that developers use to represent real-world data such as words and images.
AI companies are increasingly deploying "agents" that can independently carry out tasks such as writing code, sifting through documents and writing research reports - and that sort of computing "is happening more and more, and sometimes primarily, on the CPU," said Ben Bajarin, an analyst at Creative Strategies.
Nvidia's current flagship AI server - called the NVL72 - contains 36 of its CPUs and 72 of its GPUs. Bajarin thinks that could change to a 1-to-1 ratio for so-called agentic work, or that the GPU could even be skipped altogether.
NVIDIA OUT TO PROVE A POINT
Underscoring its CPU ambitions, Nvidia recently announced a deal with Meta Platforms that will see the Facebook owner use large volumes of its Grace and Vera CPU chips on a standalone basis. That is a relatively new development compared with Nvidia's current AI servers, where each CPU is accompanied by multiple GPUs. It is not that Meta has switched CPU vendors - it is simply securing more suppliers. Days later, AMD also announced a large deal with Meta that included its CPUs, which Meta has been buying for years.
On the call with analysts, Huang argued that Nvidia had taken a fundamentally different approach to CPUs. He explained that Nvidia had minimized the approach, used by Intel and AMD, of breaking chips up into smaller parts, saying the Nvidia CPU was able to carry out many simple tasks in a row with good access to a lot of computer memory. "It is designed to be focused on very high data processing capabilities," Huang said on the call. "And the reason for that is because most of the computing problems that we're interested in are data driven - artificial intelligence being one."
Dave Altavilla, principal analyst at HotTech Vision and Analysis, said Nvidia is aiming to prove that the CPU type once supplied primarily by Intel "is no longer the assumed default foundation of modern compute infrastructure. Instead, it becomes just one architectural option among several." Huang said that Nvidia would have more to disclose about its CPUs at the company's annual developer conference in Silicon Valley next month. (Reporting by Stephen Nellis in San Francisco; Editing by Peter Henderson and Edwina Gibbs)
Nvidia is making a bold push into the CPU market, challenging Intel and AMD's long-held dominance. CEO Jensen Huang announced that the company's Grace and Vera CPU chips will play a central role as AI workloads shift from model training to deployment, particularly for AI agents that handle tasks like coding and document analysis.
Nvidia, the company that built its fortune on graphics processing units, is now positioning itself to become a major force in the CPU market traditionally dominated by Intel and AMD. CEO Jensen Huang revealed during the company's fourth-quarter earnings call that Nvidia is ready to compete aggressively in the CPU arena, declaring "we love CPUs as well as GPUs" [1]. The shift comes as evolving AI workloads increasingly favor central processing units, particularly as companies move from training AI models to deploying them in real-world applications.
Huang has long noted that the computing ratio flipped in recent years: where roughly 90% of work once ran on CPUs, the bulk now runs on GPUs [2]. But the pendulum is swinging back. At the Consumer Electronics Show in January, Huang predicted that Nvidia could become "one of the largest CPU makers in the world" as high-performance Nvidia CPUs proliferate in data centers [1].
The resurgence of CPU importance stems from the rise of AI agents that independently execute tasks such as writing code, analyzing documents, and generating research reports. According to Ben Bajarin, an analyst at Creative Strategies, this type of agentic computing "is happening more and more, and sometimes primarily, on the CPU" [2]. This represents a fundamental shift in AI model deployment strategies.
Nvidia's current flagship AI server, the NVL72, contains 36 CPUs paired with 72 GPUs. However, Bajarin suggests this ratio could shift to 1-to-1 for agentic work, or GPUs might be bypassed entirely for certain tasks [1]. This architectural evolution underscores how AI workloads are diversifying beyond the training phase that made Nvidia's GPUs indispensable.
Nvidia's approach to data center CPUs differs fundamentally from Intel's and AMD's. Huang explained that Nvidia minimized the chiplet approach (breaking chips into smaller components) that competitors favor. Instead, Nvidia's processors are designed for "very high data processing capabilities" with superior memory access [2]. "Most of the computing problems that we're interested in are data driven - artificial intelligence being one," Huang stated during the analyst call.
The company recently secured a significant deal with Meta Platforms, which will deploy large volumes of Nvidia Grace and Vera CPUs on a standalone basis, a departure from Nvidia's typical configuration where CPUs accompany multiple GPUs [1]. While Meta hasn't abandoned AMD, which also announced a major CPU deal with the social media giant, the partnership signals growing confidence in Nvidia's processor offerings.
Dave Altavilla, principal analyst at HotTech Vision and Analysis, suggests Nvidia aims to prove that Intel's traditional CPU dominance "is no longer the assumed default foundation of modern compute infrastructure. Instead, it becomes just one architectural option among several" [2]. This repositioning could reshape data center economics as customers evaluate whether specialized, data-optimized processors deliver better performance for AI workloads than general-purpose alternatives.
Nvidia first released its data center CPU offerings in 2023, making this a relatively young product line compared to Intel and AMD's decades of processor development. Yet the company's dominance in AI infrastructure, and its deep relationships with hyperscalers and AI companies, provides a formidable distribution advantage. Huang promised more CPU disclosures at Nvidia's annual developer conference in Silicon Valley next month [1], suggesting the company has additional announcements prepared to solidify its position in this expanding market segment.
Summarized by Navi