4 Sources
[1]
Exclusive: OpenAI is unsatisfied with some Nvidia chips and looking for alternatives, sources say
SAN FRANCISCO, Feb 2 (Reuters) - OpenAI is unsatisfied with some of Nvidia's latest artificial intelligence chips, and it has sought alternatives since last year, eight sources familiar with the matter said, potentially complicating the relationship between the two highest-profile players in the AI boom.

The ChatGPT-maker's shift in strategy, the details of which are first reported here, is over an increasing emphasis on chips used to perform specific elements of AI inference, the process when an AI model such as the one that powers the ChatGPT app responds to customer queries and requests. Nvidia remains dominant in chips for training large AI models, while inference has become a new front in the competition.

This decision by OpenAI and others to seek out alternatives in the inference chip market marks a significant test of Nvidia's AI dominance and comes as the two companies are in investment talks. In September, Nvidia said it intended to pour as much as $100 billion into OpenAI as part of a deal that gave the chipmaker a stake in the startup and gave OpenAI the cash it needed to buy the advanced chips. The deal had been expected to close within weeks, Reuters reported. Instead, negotiations have dragged on for months.

During that time, OpenAI has struck deals with AMD and others for GPUs built to rival Nvidia's. But its shifting product road map also has changed the kind of computational resources it requires and bogged down talks with Nvidia, a person familiar with the matter said.

On Saturday, Nvidia CEO Jensen Huang brushed off a report of tension with OpenAI, saying the idea was "nonsense" and that Nvidia planned a huge investment in OpenAI. "Customers continue to choose NVIDIA for inference because we deliver the best performance and total cost of ownership at scale," Nvidia said in a statement. A spokesperson for OpenAI in a separate statement said the company relies on Nvidia to power the vast majority of its inference fleet and that Nvidia delivers the best performance per dollar for inference.

Seven sources said that OpenAI is not satisfied with the speed at which Nvidia's hardware can spit out answers to ChatGPT users for specific types of problems such as software development and AI communicating with other software. It needs new hardware that would eventually provide about 10% of OpenAI's inference computing needs in the future, one of the sources told Reuters.

The ChatGPT maker has discussed working with startups including Cerebras and Groq to provide chips for faster inference, two sources said. But Nvidia struck a $20-billion licensing deal with Groq that shut down OpenAI's talks, one of the sources told Reuters. Nvidia's decision to snap up key talent at Groq looked like an effort to shore up a portfolio of technology to better compete in a rapidly changing AI industry, chip industry executives said. Nvidia, in a statement, said that Groq's intellectual property was highly complementary to Nvidia's product roadmap.

NVIDIA ALTERNATIVES

Nvidia's graphics processing chips are well-suited for the massive data crunching necessary to train large AI models like ChatGPT that have underpinned the explosive growth of AI globally to date. But AI advancements increasingly focus on using trained models for inference and reasoning, which could be a new, bigger stage of AI, inspiring OpenAI's efforts.

The ChatGPT maker's search for GPU alternatives since last year focused on companies building chips with large amounts of memory embedded in the same piece of silicon as the rest of the chip, called SRAM. Squishing as much costly SRAM as possible onto each chip can offer speed advantages for chatbots and other AI systems as they crunch requests from millions of users. Inference requires more memory than training because the chip needs to spend relatively more time fetching data from memory than performing mathematical operations. Nvidia and AMD GPU technology relies on external memory, which adds processing time and slows how quickly users can interact with a chatbot.

Inside OpenAI, the issue became particularly visible in Codex, its product for creating computer code, which the company has been aggressively marketing, one of the sources added. OpenAI staff attributed some of Codex's weakness to Nvidia's GPU-based hardware, one source said. In a January 30 call with reporters, CEO Sam Altman said that customers using OpenAI's coding models will "put a big premium on speed for coding work." One way OpenAI will meet that demand is through its recent deal with Cerebras, Altman said, adding that speed is less of an imperative for casual ChatGPT users.

Competing products such as Anthropic's Claude and Google's Gemini benefit from deployments that rely more heavily on the chips Google made in-house, called tensor processing units, or TPUs, which are designed for the sort of calculations required for inference and can offer performance advantages over general-purpose AI chips like the Nvidia-designed GPUs.

NVIDIA ON THE MOVE

As OpenAI made clear its reservations about Nvidia technology, Nvidia approached companies working on SRAM-heavy chips, including Cerebras and Groq, about a potential acquisition, the people said. Cerebras declined and struck a commercial deal with OpenAI announced last month. Cerebras declined to comment. Groq held talks with OpenAI for a deal to provide computing power and received investor interest to fund the company at a valuation of roughly $14 billion, according to people familiar with the discussions. Groq declined to comment.

But by December, Nvidia moved to license Groq's tech in a non-exclusive all-cash deal, the sources said. Although the deal would allow other companies to license Groq's technology, the company is now focusing on selling cloud-based software, as Nvidia hired away Groq's chip designers.

Reporting by Max A. Cherney, Krystal Hu and Deepa Seetharaman in San Francisco; editing by Kenneth Li, Peter Henderson and Nick Zieminski
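The speed complaint in the article above is, at bottom, a memory-bandwidth argument, and it can be made concrete with a back-of-the-envelope sketch. The short Python example below is illustrative only: the model size, weight precision, and bandwidth figures are rough ballpark assumptions chosen for the sketch, not numbers from the reporting.

```python
# Back-of-the-envelope: why LLM decode speed tracks memory bandwidth.
# Generating each token requires streaming roughly all model weights
# from memory once, while the arithmetic per token is comparatively
# cheap. All numbers below are illustrative assumptions, not figures
# from the article.

def decode_tokens_per_sec(params_billion: float,
                          bytes_per_param: float,
                          mem_bandwidth_tb_s: float) -> float:
    """Upper bound on single-stream decode speed when limited purely
    by how fast weights can be read from memory."""
    bytes_per_token = params_billion * 1e9 * bytes_per_param
    bandwidth_bytes_per_s = mem_bandwidth_tb_s * 1e12
    return bandwidth_bytes_per_s / bytes_per_token

# Hypothetical 70B-parameter model stored in 8-bit weights (1 byte each).
MODEL_B, BYTES_PER_PARAM = 70.0, 1.0

# Rough bandwidth classes: external HBM attached to a GPU versus the
# on-chip SRAM approach the article associates with Cerebras and Groq.
for label, bw_tb_s in [("external HBM (~3 TB/s)", 3.0),
                       ("on-chip SRAM (~80 TB/s)", 80.0)]:
    tps = decode_tokens_per_sec(MODEL_B, BYTES_PER_PARAM, bw_tb_s)
    print(f"{label}: <= {tps:,.0f} tokens/s per model replica")
```

Under these assumptions the ceiling on single-user response speed is set by how fast weights stream out of memory rather than by raw compute, which is the gap SRAM-heavy designs aim to close; real deployments complicate the picture with batching, KV-cache traffic, and multi-chip partitioning.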
[2]
OpenAI is unsatisfied with some Nvidia chips and looking for alternatives, sources say
OpenAI is unsatisfied with some of Nvidia's latest artificial intelligence chips, and it has sought alternatives since last year, eight sources familiar with the matter said, potentially complicating the relationship between the two highest-profile players in the AI boom.

The ChatGPT-maker's shift in strategy, the details of which are first reported here, is over an increasing emphasis on chips used to perform specific elements of AI inference, the process when an AI model such as the one that powers the ChatGPT app responds to customer queries and requests. Nvidia remains dominant in chips for training large AI models, while inference has become a new front in the competition.

This decision by OpenAI and others to seek out alternatives in the inference chip market marks a significant test of Nvidia's AI dominance and comes as the two companies are in investment talks. In September, Nvidia said it intended to pour as much as $100 billion into OpenAI as part of a deal that gave the chipmaker a stake in the startup and gave OpenAI the cash it needed to buy the advanced chips. The deal had been expected to close within weeks, Reuters reported. Instead, negotiations have dragged on for months.

During that time, OpenAI has struck deals with AMD and others for GPUs built to rival Nvidia's. But its shifting product road map also has changed the kind of computational resources it requires and bogged down talks with Nvidia, a person familiar with the matter said.

On Saturday, Nvidia CEO Jensen Huang brushed off a report of tension with OpenAI, saying the idea was "nonsense" and that Nvidia planned a huge investment in OpenAI. "Customers continue to choose NVIDIA for inference because we deliver the best performance and total cost of ownership at scale," Nvidia said in a statement. A spokesperson for OpenAI in a separate statement said the company relies on Nvidia to power the vast majority of its inference fleet and that Nvidia delivers the best performance per dollar for inference. After the Reuters story was published, OpenAI Chief Executive Sam Altman wrote in a post on X that Nvidia makes "the best AI chips in the world" and that OpenAI hoped to remain a "gigantic customer for a very long time".

Seven sources said that OpenAI is not satisfied with the speed at which Nvidia's hardware can spit out answers to ChatGPT users for specific types of problems such as software development and AI communicating with other software. It needs new hardware that would eventually provide about 10% of OpenAI's inference computing needs in the future, one of the sources told Reuters. The ChatGPT maker has discussed working with startups including Cerebras and Groq to provide chips for faster inference, two sources said. But Nvidia struck a $20-billion licensing deal with Groq that shut down OpenAI's talks, one of the sources told Reuters.

Nvidia's decision to snap up key talent at Groq looked like an effort to shore up a portfolio of technology to better compete in a rapidly changing AI industry, chip industry executives said. Nvidia, in a statement, said that Groq's intellectual property was highly complementary to Nvidia's product roadmap.

Nvidia alternatives

Nvidia's graphics processing chips are well-suited for the massive data crunching necessary to train large AI models like ChatGPT that have underpinned the explosive growth of AI globally to date. But AI advancements increasingly focus on using trained models for inference and reasoning, which could be a new, bigger stage of AI, inspiring OpenAI's efforts. The ChatGPT-maker's search for GPU alternatives since last year focused on companies building chips with large amounts of memory embedded in the same piece of silicon as the rest of the chip, called SRAM. Squishing as much costly SRAM as possible onto each chip can offer speed advantages for chatbots and other AI systems as they crunch requests from millions of users. Inference requires more memory than training because the chip needs to spend relatively more time fetching data from memory than performing mathematical operations. Nvidia and AMD GPU technology relies on external memory, which adds processing time and slows how quickly users can interact with a chatbot.

Inside OpenAI, the issue became particularly visible in Codex, its product for creating computer code, which the company has been aggressively marketing, one of the sources added. OpenAI staff attributed some of Codex's weakness to Nvidia's GPU-based hardware, one source said. In a January 30 call with reporters, Altman said that customers using OpenAI's coding models will "put a big premium on speed for coding work." One way OpenAI will meet that demand is through its recent deal with Cerebras, Altman said, adding that speed is less of an imperative for casual ChatGPT users. Competing products such as Anthropic's Claude and Google's Gemini benefit from deployments that rely more heavily on the chips Google made in-house, called tensor processing units, or TPUs, which are designed for the sort of calculations required for inference and can offer performance advantages over general-purpose AI chips like the Nvidia-designed GPUs.

Nvidia on the move

As OpenAI made clear its reservations about Nvidia technology, Nvidia approached companies working on SRAM-heavy chips, including Cerebras and Groq, about a potential acquisition, the people said. Cerebras declined and struck a commercial deal with OpenAI announced last month. Cerebras declined to comment. Groq held talks with OpenAI for a deal to provide computing power and received investor interest to fund the company at a valuation of roughly $14 billion, according to people familiar with the discussions. Groq declined to comment. But by December, Nvidia moved to license Groq's tech in a non-exclusive all-cash deal, the sources said. Although the deal would allow other companies to license Groq's technology, the company is now focusing on selling cloud-based software, as Nvidia hired away Groq's chip designers.
[3]
OpenAI seeks alternatives to Nvidia for AI inference, testing chipmaker's dominance
OpenAI is unsatisfied with some of Nvidia's latest artificial intelligence chips, and it has sought alternatives since last year, eight sources familiar with the matter said, potentially complicating the relationship between the two highest-profile players in the AI boom.

The ChatGPT-maker's shift in strategy, the details of which are first reported here, is over an increasing emphasis on chips used to perform specific elements of AI inference, the process by which an AI model, such as the one that powers the ChatGPT app, responds to customer queries and requests. Nvidia remains dominant in chips for training large AI models, while inference has become a new front in the competition.

This decision by OpenAI and others to seek out alternatives in the inference chip market marks a significant test of Nvidia's AI dominance and comes as the two companies are in investment talks. In September, Nvidia said it intended to pour as much as $100 billion into OpenAI as part of a deal that gave the chipmaker a stake in the startup and gave OpenAI the cash it needed to buy the advanced chips. The deal had been expected to close within weeks, Reuters reported. Instead, negotiations have dragged on for months.

During that time, OpenAI has struck deals with AMD and others for GPUs built to rival Nvidia's. But its shifting product road map also has changed the kind of computational resources it requires and bogged down talks with Nvidia, a person familiar with the matter said.

On Saturday, Nvidia CEO Jensen Huang brushed off a report of tension with OpenAI, saying the idea was "nonsense" and that Nvidia planned a huge investment in OpenAI. "Customers continue to choose NVIDIA for inference because we deliver the best performance and total cost of ownership at scale," Nvidia said in a statement. A spokesperson for OpenAI in a separate statement said the company relies on Nvidia to power the vast majority of its inference fleet and that Nvidia delivers the best performance per dollar for inference. After the Reuters story was published, OpenAI Chief Executive Sam Altman wrote in a post on X that Nvidia makes "the best AI chips in the world" and that OpenAI hoped to remain a "gigantic customer for a very long time."

Seven sources said that OpenAI is not satisfied with the speed at which Nvidia's hardware can spit out answers to ChatGPT users for specific types of problems such as software development and AI communicating with other software. It needs new hardware that would eventually provide about 10% of OpenAI's inference computing needs in the future, one of the sources told Reuters. The ChatGPT maker has discussed working with startups including Cerebras and Groq to provide chips for faster inference, two sources said. But Nvidia struck a $20-billion licensing deal with Groq that shut down OpenAI's talks, one of the sources told Reuters.

Nvidia's decision to snap up key talent at Groq looked like an effort to shore up a portfolio of technology to better compete in a rapidly changing AI industry, chip industry executives said. Nvidia, in a statement, said that Groq's intellectual property was highly complementary to Nvidia's product roadmap.

Nvidia alternatives

Nvidia's graphics processing chips are well-suited for the massive data crunching necessary to train large AI models like ChatGPT that have underpinned the explosive growth of AI globally to date. But AI advancements increasingly focus on using trained models for inference and reasoning, which could be a new, bigger stage of AI, inspiring OpenAI's efforts. The ChatGPT-maker's search for GPU alternatives since last year focused on companies building chips with large amounts of memory embedded in the same piece of silicon as the rest of the chip, called SRAM. Squishing as much costly SRAM as possible onto each chip can offer speed advantages for chatbots and other AI systems as they crunch requests from millions of users. Inference requires more memory than training because the chip needs to spend relatively more time fetching data from memory than performing mathematical operations. Nvidia and AMD GPU technology relies on external memory, which adds processing time and slows how quickly users can interact with a chatbot.

Inside OpenAI, the issue became particularly visible in Codex, its product for creating computer code, which the company has been aggressively marketing, one of the sources added. OpenAI staff attributed some of Codex's weakness to Nvidia's GPU-based hardware, one source said. In a January 30 call with reporters, Altman said that customers using OpenAI's coding models will "put a big premium on speed for coding work." One way OpenAI will meet that demand is through its recent deal with Cerebras, Altman said, adding that speed is less of an imperative for casual ChatGPT users. Competing products such as Anthropic's Claude and Google's Gemini benefit from deployments that rely more heavily on the chips Google made in-house, called tensor processing units, or TPUs, which are designed for the sort of calculations required for inference and can offer performance advantages over general-purpose AI chips like the Nvidia-designed GPUs.

Nvidia on the move

As OpenAI made clear its reservations about Nvidia technology, Nvidia approached companies working on SRAM-heavy chips, including Cerebras and Groq, about a potential acquisition, the people said. Cerebras declined and struck a commercial deal with OpenAI, announced last month. Cerebras declined to comment. Groq held talks with OpenAI for a deal to provide computing power and received investor interest to fund the company at a valuation of roughly $14 billion, according to people familiar with the discussions. Groq declined to comment. But by December, Nvidia moved to license Groq's tech in a non-exclusive all-cash deal, the sources said. Although the deal would allow other companies to license Groq's technology, the company is now focusing on selling cloud-based software, as Nvidia hired away Groq's chip designers.
[4]
OpenAI is unsatisfied with some Nvidia chips and looking for alternatives, sources say
SAN FRANCISCO, Feb 2 (Reuters) - OpenAI is unsatisfied with some of Nvidia's latest artificial intelligence chips, and it has sought alternatives since last year, eight sources familiar with the matter said, potentially complicating the relationship between the two highest-profile players in the AI boom.

The ChatGPT-maker's shift in strategy, the details of which are first reported here, is over an increasing emphasis on chips used to perform specific elements of AI inference, the process when an AI model such as the one that powers the ChatGPT app responds to customer queries and requests. Nvidia remains dominant in chips for training large AI models, while inference has become a new front in the competition.

This decision by OpenAI and others to seek out alternatives in the inference chip market marks a significant test of Nvidia's AI dominance and comes as the two companies are in investment talks. In September, Nvidia said it intended to pour as much as $100 billion into OpenAI as part of a deal that gave the chipmaker a stake in the startup and gave OpenAI the cash it needed to buy the advanced chips. The deal had been expected to close within weeks, Reuters reported. Instead, negotiations have dragged on for months.

During that time, OpenAI has struck deals with AMD and others for GPUs built to rival Nvidia's. But its shifting product road map also has changed the kind of computational resources it requires and bogged down talks with Nvidia, a person familiar with the matter said.

On Saturday, Nvidia CEO Jensen Huang brushed off a report of tension with OpenAI, saying the idea was "nonsense" and that Nvidia planned a huge investment in OpenAI. "Customers continue to choose NVIDIA for inference because we deliver the best performance and total cost of ownership at scale," Nvidia said in a statement. A spokesperson for OpenAI in a separate statement said the company relies on Nvidia to power the vast majority of its inference fleet and that Nvidia delivers the best performance per dollar for inference.

Seven sources said that OpenAI is not satisfied with the speed at which Nvidia's hardware can spit out answers to ChatGPT users for specific types of problems such as software development and AI communicating with other software. It needs new hardware that would eventually provide about 10% of OpenAI's inference computing needs in the future, one of the sources told Reuters. The ChatGPT maker has discussed working with startups including Cerebras and Groq to provide chips for faster inference, two sources said. But Nvidia struck a $20-billion licensing deal with Groq that shut down OpenAI's talks, one of the sources told Reuters.

Nvidia's decision to snap up key talent at Groq looked like an effort to shore up a portfolio of technology to better compete in a rapidly changing AI industry, chip industry executives said. Nvidia, in a statement, said that Groq's intellectual property was highly complementary to Nvidia's product roadmap.

NVIDIA ALTERNATIVES

Nvidia's graphics processing chips are well-suited for the massive data crunching necessary to train large AI models like ChatGPT that have underpinned the explosive growth of AI globally to date. But AI advancements increasingly focus on using trained models for inference and reasoning, which could be a new, bigger stage of AI, inspiring OpenAI's efforts. The ChatGPT maker's search for GPU alternatives since last year focused on companies building chips with large amounts of memory embedded in the same piece of silicon as the rest of the chip, called SRAM. Squishing as much costly SRAM as possible onto each chip can offer speed advantages for chatbots and other AI systems as they crunch requests from millions of users. Inference requires more memory than training because the chip needs to spend relatively more time fetching data from memory than performing mathematical operations. Nvidia and AMD GPU technology relies on external memory, which adds processing time and slows how quickly users can interact with a chatbot.

Inside OpenAI, the issue became particularly visible in Codex, its product for creating computer code, which the company has been aggressively marketing, one of the sources added. OpenAI staff attributed some of Codex's weakness to Nvidia's GPU-based hardware, one source said. In a January 30 call with reporters, CEO Sam Altman said that customers using OpenAI's coding models will "put a big premium on speed for coding work." One way OpenAI will meet that demand is through its recent deal with Cerebras, Altman said, adding that speed is less of an imperative for casual ChatGPT users. Competing products such as Anthropic's Claude and Google's Gemini benefit from deployments that rely more heavily on the chips Google made in-house, called tensor processing units, or TPUs, which are designed for the sort of calculations required for inference and can offer performance advantages over general-purpose AI chips like the Nvidia-designed GPUs.

NVIDIA ON THE MOVE

As OpenAI made clear its reservations about Nvidia technology, Nvidia approached companies working on SRAM-heavy chips, including Cerebras and Groq, about a potential acquisition, the people said. Cerebras declined and struck a commercial deal with OpenAI announced last month. Cerebras declined to comment. Groq held talks with OpenAI for a deal to provide computing power and received investor interest to fund the company at a valuation of roughly $14 billion, according to people familiar with the discussions. Groq declined to comment. But by December, Nvidia moved to license Groq's tech in a non-exclusive all-cash deal, the sources said. Although the deal would allow other companies to license Groq's technology, the company is now focusing on selling cloud-based software, as Nvidia hired away Groq's chip designers.
OpenAI is exploring alternative chip providers for AI inference, citing dissatisfaction with Nvidia's hardware speed for specific tasks like software development. The shift comes as a $100 billion investment deal between the two AI powerhouses remains stalled after months of negotiations, potentially reshaping the AI hardware landscape.
OpenAI is seeking alternatives to Nvidia chips for specific AI inference tasks, marking a significant test of Nvidia's dominance in AI hardware. According to eight sources familiar with the matter, the ChatGPT maker has been dissatisfied with Nvidia's latest artificial intelligence chips since last year, focusing its concerns on AI inference, the process by which an AI model responds to customer queries and requests [1]. While Nvidia remains dominant in chips for AI model training, inference has emerged as a new competitive front that could reshape the AI hardware landscape.
Source: Reuters
Seven sources revealed that OpenAI is not satisfied with the speed at which Nvidia's hardware delivers answers to ChatGPT users for specific types of problems such as software development and AI communicating with other software [2]. The company needs new hardware that would eventually provide about 10% of OpenAI's inference computing needs in the future, one source told Reuters.

The tension surfaces as OpenAI and Nvidia investment talks have dragged on for months. In September, Nvidia said it intended to pour as much as $100 billion into OpenAI as part of a deal that would give the chipmaker a stake in the startup and provide OpenAI with cash to buy advanced chips [3]. The deal had been expected to close within weeks but instead has been bogged down by OpenAI's shifting product road map, which has changed the kind of computational resources it requires.

During this period, OpenAI has struck deals with AMD and other alternative chip providers for GPUs built to rival Nvidia's offerings [4]. Despite the reported tensions, Nvidia CEO Jensen Huang brushed off concerns on Saturday, calling the idea "nonsense" and affirming that Nvidia planned a huge investment in OpenAI. Sam Altman later posted on X that Nvidia makes "the best AI chips in the world" and that OpenAI hoped to remain a "gigantic customer for a very long time."

OpenAI's search for alternative chip providers since last year has focused on companies building chips with large amounts of memory embedded in the same piece of silicon as the rest of the chip, called SRAM [1].
Squishing as much costly SRAM as possible onto each chip offers speed advantages for chatbots and other AI systems as they process requests from millions of users. AI inference tasks require more memory than training because chips spend relatively more time fetching data from memory than performing mathematical operations. Nvidia and AMD GPU technology relies on external memory, which adds processing time and slows how quickly users can interact with a chatbot [2].
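That compute-versus-memory tradeoff can be stated as a roofline-style check: an operation is memory-bound whenever its arithmetic intensity (useful math per byte fetched) falls below the hardware's ratio of peak compute to memory bandwidth. The Python sketch below is illustrative only; the accelerator specs and batch sizes are assumed round numbers, not figures from the reporting.

```python
# Roofline-style check (illustrative assumptions only): an operation is
# memory-bound when its arithmetic intensity (FLOPs per byte moved)
# falls below the hardware's compute-to-bandwidth ratio.

def arithmetic_intensity(flops: float, bytes_moved: float) -> float:
    return flops / bytes_moved

# Hypothetical accelerator: 1,000 TFLOPs peak compute, 3 TB/s bandwidth.
ridge = 1000e12 / 3e12  # ~333 FLOPs needed per byte to stay compute-bound

# Decoding one token at batch size 1: ~2 FLOPs per weight, and each
# 1-byte (8-bit) weight is fetched once -- very little math per byte.
decode_ai = arithmetic_intensity(flops=2.0, bytes_moved=1.0)

# Training amortizes each weight fetch over a large batch (say 512
# sequences), so far more math is done per byte moved.
train_ai = arithmetic_intensity(flops=2.0 * 512, bytes_moved=1.0)

print(f"hardware ridge point:  {ridge:7.0f} FLOPs/byte")
print(f"decode, batch size 1:  {decode_ai:7.0f} FLOPs/byte (memory-bound)")
print(f"training, batch 512:   {train_ai:7.0f} FLOPs/byte (compute-bound)")
```

At batch size 1 the chip does almost no math per byte it fetches, so response speed is governed by memory bandwidth; that is why moving weights into on-chip SRAM rather than external memory can speed up a chatbot's answers.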
Inside OpenAI, the issue became particularly visible in Codex, its product for creating computer code, which the company has been aggressively marketing. OpenAI staff attributed some of Codex's weakness to Nvidia's GPU-based hardware, one source said.
The ChatGPT maker has discussed working with startups including Cerebras and Groq to provide chips for faster inference, two sources said [3]. However, Nvidia struck a $20-billion licensing deal with Groq that shut down OpenAI's talks, one source told Reuters. Nvidia's decision to acquire Groq's intellectual property appeared to be an effort to shore up a portfolio of technology to better compete in a rapidly changing AI industry, chip industry executives said.

On a January 30 call with reporters, Sam Altman said that customers using OpenAI's coding models will "put a big premium on speed for coding work." One way OpenAI will meet that demand is through its recent deal with Cerebras, Altman said, adding that speed is less of an imperative for casual ChatGPT users [4].

Competing products such as Anthropic's Claude and Google's Gemini benefit from deployments that rely more heavily on chips Google made in-house, called TPUs, which are designed for the sort of calculations required for inference and can offer performance advantages over general-purpose AI chips [1]. This shift toward specialized inference hardware signals that AI advancements increasingly focus on using trained models for inference and reasoning, which could represent a new, bigger stage of AI development.

Both companies issued statements defending their relationship. Nvidia said, "Customers continue to choose NVIDIA for inference because we deliver the best performance and total cost of ownership at scale," while an OpenAI spokesperson said the company relies on Nvidia to power the vast majority of its inference fleet and that Nvidia delivers the best performance per dollar for inference [2]. The decision by OpenAI and others to seek alternatives in the inference chip market marks a significant test of Nvidia's dominance in AI as computing power requirements evolve beyond traditional training needs.

Summarized by Navi