4 Sources
[1]
At Nvidia's GTC event, Pat Gelsinger reiterated that Jensen got lucky with AI, Intel missed the boat with Larrabee
At GTC 2025, former Intel CEO Pat Gelsinger reiterated his oft-repeated claim that Nvidia CEO Jensen Huang 'got lucky' with the AI revolution, but explained his rationale in more depth. As GPUs have taken center stage in AI innovation, Nvidia is now one of the world's most valuable companies, while Intel is struggling. But it was not always this way. Fifteen to twenty years ago, Intel CPUs were the dominant force in computing, handling all major workloads. During this time, Intel missed its AI and HPC opportunities with Larrabee, a project that attempted to build a GPU using the x86 CPU ISA. In contrast, Nvidia bet on purebred GPUs, said Gelsinger, former CTO and CEO of Intel, while speaking at Nvidia's GTC 2025.

"The CPU was the king of the hill [in the mid-2000s], and I applaud Jensen for his tenacity of just saying, 'No, I am not trying to build one of those; I am trying to deliver against the workload starting in graphics,'" said Gelsinger. "You know, it became this broader view. And then he got lucky with AI, and one time I was debating with him, he said, 'No, I got really lucky with AI workload because it just demanded that type of architecture.' That is where the center of application development is [right now]."

One of the reasons why Larrabee was canceled as a GPU in 2009 was that it was not competitive as a graphics processor against AMD's and Nvidia's graphics solutions at the time. To some extent, this was due to Intel's desire for Larrabee to feature ultimate programmability, which led to the omission of crucial fixed-function GPU hardware such as raster operations units. This hurt performance and increased the complexity of software development.

"I had a project that was well known in the industry called Larrabee, which was trying to bridge the programmability of the CPU with a throughput-oriented architecture [of a GPU], and I think had Intel stayed on that path, you know, the future could have been different," said Gelsinger during a webcast. "I give Jensen a lot of credit [as] he just stayed true to that throughput computing or accelerated [vision]."

Unlike GPUs from AMD and Nvidia, which use proprietary instruction set architectures (ISAs), Intel's Larrabee used the x86 ISA with Larrabee-specific extensions. This was an advantage for parallelized general-purpose computing workloads but a disadvantage for graphics applications. As a result, Larrabee was reintroduced as the Xeon Phi processor, first aimed at supercomputing workloads in 2010. However, it gained little traction as traditional GPU architectures gained general-purpose computing capabilities via the CUDA framework, as well as the OpenCL/Vulkan and DirectCompute APIs, which were easier to scale in terms of performance. After the Xeon Phi 'Knights Mill' failed to meet expectations, Intel dropped the Xeon Phi project in favor of data center GPUs for HPC and specialized ASICs for AI between 2018 and 2019.

To a large extent, Larrabee and its successors in the Xeon Phi line failed because they were based on a CPU ISA that did not scale well for graphics, AI, or HPC. Larrabee's failure was set in motion in the mid-2000s, when CPUs were still dominant and Intel's technical leads thought that x86 was the way to go. Fast forward to today, and Intel's attempts at adopting a more conventional GPU design for AI have largely failed, with the company recently canceling its Falcon Shores GPUs for data centers. Instead, the company is pinning its hopes on its next-gen Jaguar Shores, which isn't slated for release until next year.
[2]
What Was Former Intel CEO Doing at NVIDIA's Flagship Event?
Pat Gelsinger disagrees with Jensen Huang on quantum computing.

At NVIDIA's GTC 2025 event on Tuesday, the company delivered a variety of new advancements across AI hardware, personal supercomputers, self-driving cars, and humanoid robots. The event also took an unexpected turn when an unlikely guest made an appearance. Surely, if Pat Gelsinger were still the CEO of Intel, there's no way he'd be seen mingling with CEO Jensen Huang at an NVIDIA event.

That said, Gelsinger certainly didn't hold back and offered a few strong takes on the industry. He participated in a panel discussion alongside the hosts of the Acquired podcast and several other industry experts. While Gelsinger applauded NVIDIA's accomplishments in the present era of AI, he disagreed with Huang on certain key issues -- specifically, the timeline for the arrival of quantum computing and the use of GPUs for inference.

Gelsinger, who is notably bullish on quantum computing, stated that it could be realised within the next few years. This stands in contrast to Huang's comments earlier this year, where he said that bringing "very useful quantum computers" to market could take anywhere from 15 to 30 years. Those comments triggered a massive selloff in the quantum computing sector, wiping out approximately $8 billion in market value.

"I disagree with Jensen," said Gelsinger, adding that the data centres of the future will have quantum processing units (QPUs) handling workloads alongside GPUs and CPUs. Just as GPUs are deployed to train AI models on language and human-like behaviour, Gelsinger believes quantum computing is the appropriate model for the most complex problems humanity faces. "Most interesting things in humanity are quantum effects," he said. He added that many of today's unsolved problems hinge on quantum effects, and quantum computers would help realise breakthroughs in areas like superconductivity, composite materials, cryogenics and medicine. "That's why this is a thrilling time to be a technologist. I just wish I was 20 years younger to be doing more," he said.

While Gelsinger differs from Huang, he shares an optimistic view with Microsoft co-founder Bill Gates and Google. "There is a possibility that he (Huang) could be wrong. There is the possibility in the next three to five years that one of these techniques would get enough true logical qubits to solve some very tough problems," Gates told Yahoo Finance. Microsoft and Amazon, too, have taken major strides in quantum computing within the first three months of the year. On the flip side, Meta CEO Mark Zuckerberg echoed Huang: "My understanding is that [quantum computing] is still ways off from being a very useful paradigm," Zuckerberg said in a podcast episode a few months ago. Ironically, NVIDIA itself has big plans for quantum computing: the company announced at the GTC event that it is building a Boston-based research centre to advance the field.

Gelsinger also clarified that he isn't a fan of GPUs for AI model inference -- the process in which a pre-trained AI model applies its learnings to generate outputs. He reflected on the early days when a CPU, or a cluster of them, was the undisputed "king of the hill" for running workloads on computer systems. When Huang decided to use a graphics device (GPU) for the same purpose, Gelsinger said, he ultimately "got lucky" with AI.
While he acknowledged that AI and machine learning algorithms demand the GPU architecture, which is where most of the development is happening today, he also pointed out, "There's a lot more to be done, and I'm not sure all of those are going to land on GPUs in the future."

While GPUs work well for training, Gelsinger added that a more optimised solution is needed for inference. "A GPU is way too expensive. I argue it's 10,000 times too expensive to fully realise what we want to do with the deployment of inference of AI." His sentiments are reflected in the growing ecosystem of inference-specific hardware built to overcome the inefficiencies of GPUs. Companies like Groq, Cerebras, and SambaNova have achieved tangible, real-world results in high-speed inference. For instance, French AI startup Mistral recently dubbed its app 'Le Chat' the fastest AI assistant after deploying inference on Cerebras' hardware. Even Huang has acknowledged this in the past: in a podcast episode last year, he said that one of the company's challenges is to provide efficient, high-speed inference.

That said, companies working on AI inference hardware may not compete with NVIDIA after all. Jonathan Ross, CEO of Groq, said, "Training should be done on GPUs," and suggested that NVIDIA will sell every single GPU it makes for training.

All things considered, Gelsinger's first outing post-resignation involved several strong statements. However, it remains clear that he's still a massive fan of Huang and the work NVIDIA has accomplished. When DeepSeek made a significant impact on NVIDIA's stock price, Gelsinger argued that the market reaction was wrong. He also revealed that he is an NVIDIA stock buyer, expressing that he was "happy" to benefit from the lower prices.
[3]
Former Intel CEO Pat Gelsinger says NVIDIA CEO Jensen Huang 'got lucky' with AI
TL;DR: Former Intel CEO Pat Gelsinger attended NVIDIA's GTC 2025, praising NVIDIA's AI success under Jensen Huang. He reflected on Intel's past missteps, like the failed Larrabee project, and noted the high costs of AI hardware. Intel plans to launch its Jaguar Shores AI GPU in 2026 to compete with NVIDIA and AMD.

The former boss of Intel attended NVIDIA's AI-focused GTC 2025 conference in San Jose, California. In a 'Live at NVIDIA GTC' video appearance, ex-Intel CEO Pat Gelsinger shared some words about NVIDIA's meteoric rise in recent years, thanks to the current AI boom. Aimed at NVIDIA CEO Jensen Huang, Gelsinger says, "he got lucky with AI" - and it's not a dig.

The fortunes of Intel under Pat Gelsinger's stewardship have been widely reported. In 2025, the once-dominant force in consumer and enterprise processor hardware is now playing catch-up to companies like NVIDIA and AMD. Regarding NVIDIA being the leader in AI, Pat Gelsinger also talked about Intel's failed Larrabee project, which attempted to bring GPU-like acceleration for things like AI to a traditional x86 CPU.

"The CPU was the king of the hill, and I applaud Jensen for his tenacity of just saying, 'No, I am not trying to build one of those; I am trying to deliver against the workload starting in graphics,'" said Gelsinger. "You know, it became this broader view," Gelsinger added. "And then he got lucky with AI, and one time I was debating with him, he said, 'No, I got really lucky with AI workload because it just demanded that type of architecture.'"

"I had a project that was well known in the industry called Larrabee and which was trying to bridge the programmability of the CPU with a throughput-oriented architecture, and I think had Intel stayed on that path, you know, the future could have been different," Gelsinger opined during his appearance on the webcast. "I give Jensen a lot of credit; he just stayed true to that vision."

In the broadcast, Pat Gelsinger also discussed the rising costs of AI hardware, stating that it is "way too expensive" - 10,000 times more than it needs to be for AI and large-scale inferencing to reach its full potential. As for Intel, the next-generation Jaguar Shores project in its AI GPU division will arrive sometime in 2026, competing against next-gen offerings from both NVIDIA and AMD.
[4]
Intel's Former CEO Pat Gelsinger Claims NVIDIA's AI GPUs Are 10,000 Times More Expensive Than What Is Needed For AI Inferencing
Team Blue's former CEO Pat Gelsinger takes a dig at NVIDIA's current approach to AI GPU pricing, claiming that the market will eventually realize this.

It seems Gelsinger isn't too fond of NVIDIA's dominance of the AI market, and given Intel's performance in this particular segment, that sentiment is understandable. On the Acquired podcast at NVIDIA's GTC 2025 venue, Gelsinger was asked how NVIDIA's CEO, Jensen Huang, came to realize how big AI actually is. In response, he not only said that Jensen got lucky with AI, but also argued that NVIDIA's current hardware stack is too expensive for AI inferencing workloads:

"The CPU was the king of the hill, and I applaud Jensen for his tenacity in just saying, 'No, I am not trying to build one of those; I am trying to deliver against the workload starting in graphics.' And then he got lucky, right, with AI. Today, if we think about the training workload, okay, but you have to give away something much more optimized for inferencing. You know, a GPU is way too expensive; I argue it is 10,000 times too expensive to fully realize what we want to do with the deployment of inferencing for AI and then, of course, what's beyond that."

It is safe to say that NVIDIA was the one that stayed committed to the "long-ball" game with AI, and that is what Gelsinger has applauded Jensen for: sticking to his instincts. Intel's former CEO had previously expressed concerns about NVIDIA's AI progress, labeling CUDA a "moat" and claiming that the real future lies in inference. Despite such comments, Team Blue never really managed to capitalize on the AI hype, earning far less revenue than its competitors.

As for where Intel stands right now, the firm has abandoned its much-anticipated "Falcon Shores" AI lineup and has bet its future on the next-gen "Jaguar Shores." Moreover, Intel's Gaudi lineup has shown significantly underwhelming performance compared to counterparts like Instinct or Hopper, which makes it clear that Intel hasn't found its footing in the AI market. With Intel's new CEO, Lip-Bu Tan, there's optimism about the company's future, but nothing is concrete as of now.

Gelsinger also said that quantum processing is likely to be the next form of computing the industry will focus on, and the technology might see market adoption by the end of this decade.
At NVIDIA's GTC 2025 event, former Intel CEO Pat Gelsinger discussed the AI industry's evolution, praising NVIDIA's success while highlighting concerns about GPU costs for AI inference and expressing optimism about quantum computing's future.
At NVIDIA's GTC 2025 event, former Intel CEO Pat Gelsinger made a surprising appearance, offering insights into the evolving landscape of artificial intelligence and computing. Gelsinger, who once led Intel during a period of CPU dominance, acknowledged NVIDIA's success in the AI revolution while also highlighting missed opportunities and future challenges in the industry [1][2].
Gelsinger praised NVIDIA CEO Jensen Huang for his tenacity in focusing on graphics processing units (GPUs) when CPUs were still dominant. He noted that NVIDIA "got lucky with AI" as the workload demanded the type of architecture that GPUs could provide [1]. This statement, while seemingly critical, was presented more as an acknowledgment of NVIDIA's foresight and commitment to its vision.
Reflecting on Intel's past, Gelsinger discussed the failed Larrabee project, which attempted to bridge CPU programmability with GPU-like throughput [1][3]. He suggested that had Intel persisted with this approach, "the future could have been different." This admission highlights the strategic decisions that led to Intel's current position of playing catch-up in the AI hardware market.
While acknowledging NVIDIA's success in AI training, Gelsinger expressed concerns about the cost-effectiveness of GPUs for AI inference:
"A GPU is way too expensive. I argue it's 10,000 times too expensive to fully realize what we want to do with the deployment of inference of AI," Gelsinger stated 24.
This critique points to potential opportunities for more specialized and cost-effective hardware solutions for AI inference tasks.
Gelsinger also shared his optimistic view on quantum computing, disagreeing with Jensen Huang's timeline for its practical implementation:
"I disagree with Jensen," Gelsinger said, predicting that quantum processing units (QPUs) could be part of data centers within the next few years, alongside GPUs and CPUs 2.
This perspective aligns with other industry leaders like Bill Gates and contrasts with more conservative estimates from figures like Mark Zuckerberg.
Despite his critiques, Gelsinger remains a "massive fan" of Huang and NVIDIA's accomplishments [2]. He even revealed that he is an NVIDIA stock buyer, benefiting from recent market fluctuations.
As for Intel, the company is pinning its hopes on the next-generation Jaguar Shores AI GPU, slated for release in 2026 [3][4]. This project represents Intel's attempt to regain ground in the competitive AI hardware market.
Gelsinger's comments at GTC 2025 highlight the rapid evolution of the AI hardware landscape and the ongoing competition between major tech companies. As the industry continues to grow, questions about hardware efficiency, cost-effectiveness, and the potential of emerging technologies like quantum computing will likely shape its future direction.