10 Sources
[1]
Nvidia Sees Strong Chinese Demand for H200, Has Enough Supply
Nvidia Corp. said it has seen strong demand from customers in China for the H200 chip that the Trump administration has said it will consider letting the chipmaker ship to that country. License applications have been submitted and the government is deciding what it wants to do with them, Chief Financial Officer Colette Kress told analysts Monday during a meeting at the CES trade show in Las Vegas. Nvidia Chief Executive Officer Jensen Huang described the demand as strong. Regardless of the level of license approval, Kress said, Nvidia has enough supply to serve customers in the Asian nation without impacting the company's ability to ship to customers elsewhere in the world. Nvidia would also need China's government to allow companies in the country to purchase and use the American products. Beijing previously discouraged government agencies and companies there from using an earlier, less powerful design, called H20. Nvidia Corp. Chief Executive Officer Jensen Huang said that the company's highly anticipated Rubin data center processors are in production and customers will soon be able to try out the technology. All six of the chips for a new generation of computing equipment -- named after astronomer Vera Rubin -- are back from manufacturing partners and on track for deployment by customers in the second half of the year, Huang said at the CES trade show in Las Vegas Monday. "Demand is really high," he said. The growing complexity and uptake of artificial intelligence software is placing a strain on existing computer resources, creating the need for much more, Huang said. Nvidia, based in Santa Clara, California, is seeking to maintain its edge as the leading maker of artificial intelligence accelerators, the chips used by data center operators to develop and run AI models. Some on Wall Street have expressed concern that competition is mounting for Nvidia -- and that AI spending can't continue at its current pace. 
Data center operators also are developing their own AI accelerators. But Nvidia has maintained bullish long-term forecasts that point to a total market in the trillions of dollars. Rubin is Nvidia's latest accelerator and is 3.5 times better at training and five times better at running AI software than its predecessor, Blackwell, the company said. A new central processing unit has 88 cores -- the key data-crunching elements -- and provides twice the performance of the component that it's replacing. The company is giving details of its new products earlier in the year than it typically does -- part of a push to keep the industry hooked on its hardware, which has underpinned an explosion in AI use. Nvidia usually dives into product details at its spring GTC event in San Jose, California. Even while talking up new offerings, Nvidia said previous generations of products are still performing well. The company also has seen strong demand from customers in China for the H200 chip that the Trump administration has said it will consider letting the chipmaker ship to that country. License applications have been submitted, and the US government is deciding what it wants to do with them, Chief Financial Officer Colette Kress told analysts. Regardless of the level of license approval, Kress said, Nvidia has enough supply to serve customers in the Asian nation without affecting the company's ability to ship to customers elsewhere in the world. For Huang, CES is yet another stop on his marathon run of appearances at events, where he's announced products, tie-ups and investments all aimed at adding momentum to the deployment of AI systems. His counterpart at Nvidia's closest rival, Advanced Micro Devices Inc.'s Lisa Su, was slated to give a keynote presentation at the show later Monday. 
The new hardware, which also includes networking and connectivity components, will be part of its DGX SuperPod supercomputer while also being available as individual products for customers to use in a more modular way. The step-up in performance is needed because AI has shifted to more specialized networks of models that not only sift through massive amounts of inputs but need to solve particular problems through multistage processes. The company emphasized that Rubin-based systems will be cheaper to run than Blackwell versions because they'll return the same results using smaller numbers of components. Microsoft Corp. and other large providers of remote computing will be among the first to deploy the new hardware in the second half of the year, Nvidia said. For now, the majority of spending on Nvidia-based computers is coming from the capital expenditure budgets of a handful of customers, including Microsoft, Alphabet Inc.'s Google Cloud and Amazon.com Inc.'s AWS. Nvidia is pushing software and hardware aimed at broadening the adoption of AI across the economy, including robotics, health care and heavy industry. As part of that effort, Nvidia announced a group of tools designed to accelerate development of autonomous vehicles and robots.
[2]
Nvidia CEO Huang to take stage at CES in Las Vegas as competition mounts
LAS VEGAS, Jan 5 (Reuters) - Nvidia (NVDA.O) CEO Jensen Huang is set to give a speech on Monday at the Consumer Electronics Show in Las Vegas, potentially revealing new details about product plans for the world's most valuable listed company as it faces increasing competition from both rivals and its own customers. Less than two weeks ago, the company scooped up talent and chip technology from startup Groq, including executives who were instrumental in helping Alphabet's (GOOGL.O) Google design its own AI chips. While Google is a major Nvidia customer, its own chips have emerged as one of Nvidia's biggest threats as Google works closely with Meta Platforms and others to chip away at Nvidia's AI stronghold. At the same time, Nvidia is eager to show that its latest products can outperform older chips like the H200, which U.S. President Donald Trump has allowed to flow to China. Reuters has reported that the chip, which was the predecessor to Nvidia's current flagship "Blackwell" chip, is in high demand in China, which has alarmed China hawks across the U.S. political spectrum. Huang's speech is scheduled to begin at 4 p.m. EST (2100 GMT). Other key speakers at the annual trade show this year include AMD (AMD.O) CEO Lisa Su, the CEO of Finnish health tech company Oura, Tom Hale, and PC maker Lenovo's (0992.HK) CEO, Yuanqing Yang. Reporting by Stephen Nellis in Las Vegas; Editing by Matthew Lewis
[3]
Nvidia announces new, more powerful Vera Rubin chip made for AI
Next generation of chips in 'full production' and will arrive later this year, Jensen Huang says at CES in Las Vegas Nvidia CEO Jensen Huang said on Monday that the company's next generation of chips is in "full production," saying they can deliver five times the artificial-intelligence computing of the company's previous chips when serving up chatbots and other AI apps. In a speech at the Consumer Electronics Show in Las Vegas, the leader of the world's most valuable company revealed new details about its chips, which will arrive later this year and which Nvidia executives say are already in the company's labs being tested by AI firms, as Nvidia faces increasing competition from rivals as well as its own customers. The Vera Rubin platform, made up of six separate Nvidia chips, is expected to debut later this year, with the flagship server containing 72 of the company's graphics units and 36 of its new central processors. Huang showed how they can be strung together into "pods" with more than 1,000 Rubin chips and said they could improve the efficiency of generating what are known as "tokens" - the fundamental unit of AI systems - by 10 times. To get the new performance results, however, Huang said the Rubin chips use a proprietary kind of data that the company hopes the wider industry will adopt. "This is how we were able to deliver such a gigantic step up in performance, even though we only have 1.6 times the number of transistors," Huang said. While Nvidia still dominates the market for training AI models, it faces far more competition - from traditional rivals such as Advanced Micro Devices as well as customers like Alphabet's Google - in delivering the fruits of those models to hundreds of millions of users of chatbots and other technologies. 
Much of Huang's speech focused on how well the new chips would work for that task, including adding a new layer of storage technology called "context memory storage" aimed at helping chatbots provide snappier responses to long questions and conversations. Nvidia also touted a new generation of networking switches with a new kind of connection called co-packaged optics. The technology, which is key to linking together thousands of machines into one, competes with offerings from Broadcom and Cisco Systems. In other announcements, Huang highlighted new software that can help self-driving cars make decisions about which path to take - and leave a paper trail for engineers to use afterward. Nvidia showed research about software, called Alpamayo, late last year, with Huang saying on Monday it would be released more widely, along with the data used to train it so that automakers can make evaluations. "Not only do we open-source the models, we also open-source the data that we use to train those models, because only in that way can you truly trust how the models came to be," Huang said from a stage in Las Vegas. Last month, Nvidia scooped up talent and chip technology from startup Groq, including executives who were instrumental in helping Alphabet's Google design its own AI chips. While Google is a major Nvidia customer, its own chips have emerged as one of Nvidia's biggest threats as Google works closely with Meta Platforms and others to chip away at the company's lead. During a question-and-answer session with financial analysts after his speech, Huang said the Groq deal "won't affect our core business" but could result in new products that expand its lineup. At the same time, Nvidia is eager to show that its latest products can outperform older chips like the H200, which Donald Trump has allowed to flow to China. 
The chip, which was the predecessor to Nvidia's current "Blackwell" chip, is in high demand in China, which has alarmed China hawks across the US political spectrum.
[4]
Nvidia CEO Jensen Huang Says AI Is Skyrocketing Demand For GPUs
Nvidia CEO Jensen Huang said demand for computing resources is "skyrocketing" due to the rapid advancement of artificial intelligence models, calling it an "intense race" to the next frontier of the tech. In a Monday Nvidia live event in Las Vegas, Huang discussed a host of developments for the company ahead of 2026, as he pointed to the strong competition in the artificial intelligence sector. Commenting on the growth of AI since it first hit the market, Huang said that everyone has been fighting to be the first to hit the next level of the tech. "The amount of computation necessary for AI is skyrocketing. The demand for Nvidia GPUs is skyrocketing. It's skyrocketing because models are increasing by a factor of 10, an order of magnitude every single year," he said, adding: "Everybody's trying to get to the next level and somebody is getting to the next level. And so therefore, all of it is a computing problem. The faster you compute, the sooner you can get to the next level of the next frontier." The surging growth and adoption of AI have already seen a host of Bitcoin (BTC) mining companies either fully or partially pivot to the sector over the past couple of years. This has partly been due to the Bitcoin mining difficulty increasing over time, while AI also presents an opportunity for miners to maximize their resources and potentially earn greater revenues outside of BTC. More demand for AI computing power could make a pivot to AI computing even more enticing for Bitcoin miners. During his speech, the Nvidia CEO also discussed the firm's next-generation Vera Rubin chips, stating they are currently in "full production" and on schedule. Huang said the combination of Rubin and Vera, which were designed to work together, will be able to deliver five times greater artificial-intelligence computing performance compared to previous models.
[5]
Nvidia CEO Huang says next generation of chips is in full production - The Economic Times
Nvidia CEO Jensen Huang said on Monday that the company's next generation of chips is in "full production," saying they can deliver five times the artificial-intelligence computing of the company's previous chips when serving up chatbots and other AI apps. In a speech at the Consumer Electronics Show in Las Vegas, the leader of the world's most valuable company revealed new details about its chips, which will arrive later this year and which Nvidia executives told Reuters are already in the company's labs being tested by AI firms, as Nvidia faces increasing competition from rivals as well as its own customers. The Vera Rubin platform, made up of six separate Nvidia chips, is expected to debut later this year, with the flagship server containing 72 of the company's graphics units and 36 of its new central processors. Huang showed how they can be strung together into "pods" with more than 1,000 Rubin chips and said they could improve the efficiency of generating what are known as "tokens" - the fundamental unit of AI systems - by 10 times. To get the new performance results, however, Huang said the Rubin chips use a proprietary kind of data that the company hopes the wider industry will adopt. "This is how we were able to deliver such a gigantic step up in performance, even though we only have 1.6 times the number of transistors," Huang said. While Nvidia still dominates the market for training AI models, it faces far more competition - from traditional rivals such as Advanced Micro Devices as well as customers like Alphabet's Google - in delivering the fruits of those models to hundreds of millions of users of chatbots and other technologies. Much of Huang's speech focused on how well the new chips would work for that task, including adding a new layer of storage technology called "context memory storage" aimed at helping chatbots provide snappier responses to long questions and conversations. 
Nvidia also touted a new generation of networking switches with a new kind of connection called co-packaged optics. The technology, which is key to linking together thousands of machines into one, competes with offerings from Broadcom and Cisco Systems. Nvidia said that CoreWeave will be among the first to have the new Vera Rubin systems and that it expects Microsoft , Oracle, Amazon and Alphabet to adopt them as well. In other announcements, Huang highlighted new software that can help self-driving cars make decisions about which path to take - and leave a paper trail for engineers to use afterward. Nvidia showed research about software, called Alpamayo, late last year, with Huang saying on Monday it would be released more widely, along with the data used to train it so that automakers can make evaluations. "Not only do we open-source the models, we also open-source the data that we use to train those models, because only in that way can you truly trust how the models came to be," Huang said from a stage in Las Vegas. Last month, Nvidia scooped up talent and chip technology from startup Groq, including executives who were instrumental in helping Alphabet's Google design its own AI chips. While Google is a major Nvidia customer, its own chips have emerged as one of Nvidia's biggest threats as Google works closely with Meta Platforms and others to chip away at Nvidia's AI stronghold. During a question-and-answer session with financial analysts after his speech, Huang said the Groq deal "won't affect our core business" but could result in new products that expand its lineup. At the same time, Nvidia is eager to show that its latest products can outperform older chips like the H200, which US President Donald Trump has allowed to flow to China. Reuters has reported that the chip, which was the predecessor to Nvidia's current "Blackwell" chip, is in high demand in China, which has alarmed China hawks across the US political spectrum. 
Huang told financial analysts after his keynote that demand is strong for the H200 chips in China, and Chief Financial Officer Colette Kress said Nvidia has applied for licenses to ship the chips to China but was waiting for approvals from the US and other governments to ship them.
[6]
Nvidia CEO Jensen Huang calls AI titan's latest chips 'gigantic step...
Nvidia CEO Jensen Huang said Monday that the company's next generation of chips is in "full production," saying they can deliver five times the artificial-intelligence computing of the company's previous chips when serving up chatbots and other AI apps. In a speech at the Consumer Electronics Show in Las Vegas, the leader of the world's most valuable company revealed new details about its chips, which will arrive later this year and which Nvidia executives told Reuters are already in the company's labs being tested by AI firms, as Nvidia faces increasing competition from rivals as well as its own customers. The Vera Rubin platform, made up of six separate Nvidia chips, is expected to debut later this year, with the flagship device containing 72 of the company's flagship graphics units and 36 of its new central processors. Huang showed how they can be strung together into "pods" with more than 1,000 Rubin chips. To get the new performance results, however, Huang said the Rubin chips use a proprietary kind of data that the company hopes the wider industry will adopt. "This is how we were able to deliver such a gigantic step up in performance, even though we only have 1.6 times the number of transistors," Huang said. While Nvidia still dominates the market for training AI models, it faces far more competition - from traditional rivals such as Advanced Micro Devices as well as customers like Alphabet's Google - in delivering the fruits of those models to hundreds of millions of users of chatbots and other technologies. Much of Huang's speech focused on how well the new chips would work for that task, including adding a new layer of storage technology called "context memory storage" aimed at helping chatbots provide snappier responses to long questions and conversations when being used by millions of users at once. Nvidia also touted a new generation of networking switches with a new kind of connection called co-packaged optics. 
The technology, which is key to linking together thousands of machines into one, competes with offerings from Broadcom and Cisco Systems. In other announcements, Huang highlighted new software that can help self-driving cars make decisions about which path to take - and leave a paper trail for engineers to use afterward. Nvidia showed research about software, called Alpamayo, late last year, with Huang saying on Monday it would be released more widely, along with the data used to train it so that automakers can make evaluations. "Not only do we open-source the models, we also open-source the data that we use to train those models, because only in that way can you truly trust how the models came to be," Huang said from a stage in Las Vegas. Last month, Nvidia scooped up talent and chip technology from startup Groq, including executives who were instrumental in helping Alphabet's Google design its own AI chips. While Google is a major Nvidia customer, its own chips have emerged as one of Nvidia's biggest threats as Google works closely with Meta Platforms and others to chip away at Nvidia's AI stronghold. At the same time, Nvidia is eager to show that its latest products can outperform older chips like the H200, which President Trump has allowed to flow to China. Reuters has reported that the chip, which was the predecessor to Nvidia's current flagship "Blackwell" chip, is in high demand in China, which has alarmed China hawks across the US political spectrum.
[7]
New generation of Nvidia AI chips promises performance five times faster
Nvidia's acquisition of Groq's technology signals the company's determination to stay at the forefront of AI innovation. Nvidia's next generation of AI chips is now in full production and promises a fivefold increase in AI processing power compared to its predecessors, allowing chatbots and AI applications to run faster and more efficiently. CEO Jensen Huang unveiled these advances at the Consumer Electronics Show in Las Vegas, highlighting the upcoming Vera Rubin platform. This platform consists of six chips and offers impressive capabilities, including a flagship system with 72 graphics units and 36 processor cores.
Performance improvements
Huang demonstrated how the chips can be interconnected into "pods" containing more than 1,000 Rubin chips, dramatically improving the efficiency of token generation -- the building blocks of AI systems -- by a factor of ten. This remarkable performance gain is attributed to a proprietary data type that Nvidia hopes will become an industry standard. While Nvidia currently dominates the AI training market, competition from rivals such as Advanced Micro Devices and even customers like Alphabet's Google is intensifying. Huang stressed how effective the new chips are at delivering trained AI models to millions of users via chatbots and other technologies. He also pointed to "context memory," a new storage layer designed to speed up chatbot response times to complex questions and conversations.
Networking breakthroughs
Nvidia also unveiled a new generation of network switches with co-packaged optics, a crucial technology for connecting thousands of machines into a unified system, going head-to-head with offerings from Broadcom and Cisco Systems. CoreWeave is expected to be among the first users of the Vera Rubin systems, followed by Microsoft, Oracle, Amazon and Alphabet. Among other things, Huang presented new software designed to help autonomous cars make decisions, leaving a transparent trail that engineers can analyze. 
He also unveiled an extended release of the Alpamayo software, along with the training data used, which will allow carmakers to carry out independent evaluations and strengthen confidence in the model's development.
Expansion
Nvidia's recent acquisition of talent and chip technology from startup Groq underscores its ambition to stay ahead in the AI landscape. The deal brings in executives who were involved in developing Google's AI chips, which have emerged as a direct challenge to Nvidia's dominant position. Huang assured analysts that the Groq deal would not affect Nvidia's core operations but could lead to new product offerings that would complement its portfolio.
[8]
Nvidia says new chips to deliver 5x AI performance boost
Nvidia CEO Jensen Huang said Monday that the company's next generation of chips was in full production, promising to deliver five times the AI computing of earlier models. He was speaking at the annual Consumer Electronics Show in Las Vegas. It comes as the world's most valuable company faces mounting competition from rivals and its own customers. Set to launch later this year, the flagship Vera Rubin platform packs six separate Nvidia chips, with 72 graphic units and 36 new central processors. These can be linked into pods, with over a thousand Rubin chips working together. To achieve these performance gains, Huang said the hardware uses proprietary data formats that he hopes the industry will embrace. "This is completely revolutionary. This is how we were able to deliver such a gigantic step up in performance, even though we only have 1.6 times the number of transistors." The AI chip leader also introduced new features aimed at helping chatbots handle complex conversations with millions of users simultaneously. Huang also showcased new self-driving car software called Alpamayo that he said will be open-sourced. "Not only do we open source the models, we also open source the data that we use to train those models because that in that way, only in that way can you truly trust how the models came to be." The same day saw rival AMD show off its latest AI chips in Las Vegas too. It's selling them to customers including ChatGPT-maker OpenAI.
[9]
Nvidia CEO Huang says next generation of chips is in full production
LAS VEGAS, Jan 5 (Reuters) - Nvidia CEO Jensen Huang said on Monday that the company's next generation of chips is in "full production," saying they can deliver five times the artificial-intelligence computing of the company's previous chips when serving up chatbots and other AI apps. In a speech at the Consumer Electronics Show in Las Vegas, the leader of the world's most valuable company revealed new details about its chips, which will arrive later this year and which Nvidia executives told Reuters are already in the company's labs being tested by AI firms, as Nvidia faces increasing competition from rivals as well as its own customers. The Vera Rubin platform, made up of six separate Nvidia chips, is expected to debut later this year, with the flagship device containing 72 of the company's flagship graphics units and 36 of its new central processors. Huang showed how they can be strung together into "pods" with more than 1,000 Rubin chips. To get the new performance results, however, Huang said the Rubin chips use a proprietary kind of data that the company hopes the wider industry will adopt. "This is how we were able to deliver such a gigantic step up in performance, even though we only have 1.6 times the number of transistors," Huang said. While Nvidia still dominates the market for training AI models, it faces far more competition - from traditional rivals such as Advanced Micro Devices as well as customers like Alphabet's Google - in delivering the fruits of those models to hundreds of millions of users of chatbots and other technologies. Much of Huang's speech focused on how well the new chips would work for that task, including adding a new layer of storage technology called "context memory storage" aimed at helping chatbots provide snappier responses to long questions and conversations when being used by millions of users at once. Nvidia also touted a new generation of networking switches with a new kind of connection called co-packaged optics. 
The technology, which is key to linking together thousands of machines into one, competes with offerings from Broadcom and Cisco Systems. In other announcements, Huang highlighted new software that can help self-driving cars make decisions about which path to take - and leave a paper trail for engineers to use afterward. Nvidia showed research about software, called Alpamayo, late last year, with Huang saying on Monday it would be released more widely, along with the data used to train it so that automakers can make evaluations. "Not only do we open-source the models, we also open-source the data that we use to train those models, because only in that way can you truly trust how the models came to be," Huang said from a stage in Las Vegas. Last month, Nvidia scooped up talent and chip technology from startup Groq, including executives who were instrumental in helping Alphabet's Google design its own AI chips. While Google is a major Nvidia customer, its own chips have emerged as one of Nvidia's biggest threats as Google works closely with Meta Platforms and others to chip away at Nvidia's AI stronghold. At the same time, Nvidia is eager to show that its latest products can outperform older chips like the H200, which U.S. President Donald Trump has allowed to flow to China. Reuters has reported that the chip, which was the predecessor to Nvidia's current flagship "Blackwell" chip, is in high demand in China, which has alarmed China hawks across the U.S. political spectrum. (Reporting by Stephen Nellis in Las Vegas; Editing by Matthew Lewis)
[10]
Nvidia CEO Huang to take stage at CES in Las Vegas as competition mounts
LAS VEGAS, Jan 5 (Reuters) - Nvidia CEO Jensen Huang is set to give a speech on Monday at the Consumer Electronics Show in Las Vegas, potentially revealing new details about product plans for the world's most valuable listed company as it faces increasing competition from both rivals and its own customers. Less than two weeks ago, the company scooped up talent and chip technology from startup Groq, including executives who were instrumental in helping Alphabet's Google design its own AI chips. While Google is a major Nvidia customer, its own chips have emerged as one of Nvidia's biggest threats as Google works closely with Meta Platforms and others to chip away at Nvidia's AI stronghold. At the same time, Nvidia is eager to show that its latest products can outperform older chips like the H200, which U.S. President Donald Trump has allowed to flow to China. Reuters has reported that the chip, which was the predecessor to Nvidia's current flagship "Blackwell" chip, is in high demand in China, which has alarmed China hawks across the U.S. political spectrum. Huang's speech is scheduled to begin at 4 p.m. EST (2100 GMT). Other key speakers at the annual trade show this year include AMD CEO Lisa Su, the CEO of Finnish health tech company Oura, Tom Hale, and PC maker Lenovo's CEO, Yuanqing Yang. (Reporting by Stephen Nellis in Las Vegas; Editing by Matthew Lewis)
Jensen Huang announced Nvidia's next generation Vera Rubin platform is in full production, delivering five times the AI computing performance of previous chips. The chipmaker faces mounting competition from rivals and its own customers while navigating strong demand from China for H200 chips amid ongoing export license considerations.
Nvidia CEO Jensen Huang revealed at CES in Las Vegas that the company's next generation of data center processors, the Vera Rubin platform, is in full production and on track for deployment in the second half of 2026 [1][3]. The advancement of artificial intelligence has pushed Nvidia to deliver AI chips that are 3.5 times better at training and five times better at running AI software than Blackwell, its predecessor [1]. The flagship server will contain 72 graphics units and 36 new central processors, with systems capable of being strung together into pods containing more than 1,000 Rubin chips [3][5]. These configurations could improve the efficiency of generating tokens, the fundamental unit of AI systems, by 10 times [5].
Source: Market Screener
Jensen Huang emphasized that AI computing power requirements are experiencing explosive growth, with demand for Nvidia GPUs increasing dramatically as models scale up by a factor of 10 annually [4]. "The amount of computation necessary for AI is skyrocketing. The demand for Nvidia GPUs is skyrocketing," Huang stated, describing an "intense race" to reach the next frontier of the technology [4]. The growing complexity and uptake of artificial intelligence software is placing strain on existing computer resources, creating the need for substantially more capacity [1]. Nvidia emphasized that Rubin-based systems will be cheaper to operate than Blackwell versions because they'll return the same results using smaller numbers of components [1]. Microsoft, Oracle, Amazon, and Google are expected to be among the first data center operators to deploy the new hardware in the second half of 2026 [1][5].
Source: Bloomberg
Nvidia faces strong China demand for H200 chips, with the Trump administration considering whether to approve license applications for shipments to the Asian nation [1][2]. Chief Financial Officer Colette Kress confirmed that license applications have been submitted and the government is deciding what it wants to do with them [1]. The H200, predecessor to the current Blackwell chip, is in high demand in China, which has alarmed China hawks across the US political spectrum [2][5]. Regardless of the level of license approval, Kress said Nvidia has enough supply to serve customers in the Asian nation without impacting the company's ability to ship to customers elsewhere in the world [1]. The situation remains complex, as Nvidia would also need China's government to allow companies in the country to purchase and use the American products, with Beijing previously discouraging government agencies and companies from using an earlier design called H20 [1].
Nvidia confronts mounting pressure from both traditional rivals and its own customers in the AI accelerator market. Advanced Micro Devices and customers like Google are developing their own chips to challenge Nvidia's market leadership [2][3]. Google works closely with Meta Platforms and others to chip away at Nvidia's AI stronghold [5]. Less than two weeks before the CES announcement, Nvidia acquired talent and chip technology from startup Groq, including executives who were instrumental in helping Google design its own AI chips [2][5]. Huang told financial analysts the Groq deal "won't affect our core business" but could result in new products that expand its lineup [5]. While Nvidia still dominates the market for AI training, it faces far more competition in delivering the fruits of those models to hundreds of millions of users of chatbots and other technologies [3]. Nvidia also touted a new generation of networking switches with co-packaged optics technology, competing with offerings from Broadcom and Cisco Systems [3][5].
Nvidia is pushing software and hardware aimed at broadening the adoption of AI across the economy, including robotics, health care, and heavy industry [1]. Huang highlighted new software called Alpamayo that can help self-driving cars make decisions about which path to take and leave a paper trail for engineers to use afterward [3][5]. The company will open-source both the models and the data used to train them so that automakers can make evaluations and truly trust how the models came to be [3]. Much of Huang's speech focused on how well the new chips would work for serving chatbots and other AI applications to end users, including a new layer of storage technology called context memory storage aimed at helping chatbots provide snappier responses to long questions and conversations [3][5]. For now, the majority of spending on Nvidia-based computers comes from the capital expenditure budgets of a handful of customers, including Microsoft, Google Cloud, and Amazon AWS [1].
Source: New York Post
Summarized by Navi