50 Sources
[1]
Google TPUs garner attention as AI chip alternative, but are only a minor threat to Nvidia's dominance -- Alphabet's biggest challenge is widespread adoption
Meta's reported deal with Google shows a growing interest in alternative AI hardware, but Nvidia says its platform remains unmatched.

Nvidia has broken its silence following reports that Meta is in advanced discussions to spend billions of dollars on Google's custom Tensor Processing Units (TPUs), a move that would mark a rare shift in the company's AI infrastructure strategy. Nvidia, which saw its stock dip last week as Alphabet's rose, issued a pointed statement in response on Tuesday. "We're delighted by Google's success -- they've made great advances in AI and we continue to supply to Google," Nvidia wrote. "NVIDIA is a generation ahead of the industry -- it's the only platform that runs every AI model and does it everywhere computing is done. NVIDIA offers greater performance, versatility, and fungibility than ASICs, which are designed for specific AI frameworks or functions."

The response highlights Nvidia's awareness of what's at stake. While Meta's reported plan involves an initial rental phase and phased purchases starting in 2027, any serious pivot away from Nvidia hardware would reverberate throughout the AI ecosystem. Google's TPU architecture, once used solely in-house, is now part of an aggressive bid to capture hyperscaler business from Nvidia's dominant platform.

Google's TPUs are application-specific chips, tuned for high-throughput matrix operations central to large language model training and inference. The current-generation TPU v5p features 95 gigabytes of HBM3 memory and a bfloat16 peak throughput of more than 450 TFLOPS per chip. TPU v5p pods can contain nearly 9,000 chips and are designed to scale efficiently inside Google Cloud's infrastructure.

Crucially, Google owns the TPU architecture, instruction set, and software stack. Broadcom acts as Google's silicon implementation partner, converting Google's architecture into a manufacturable ASIC layout.
Broadcom also supplies high-speed SerDes, power management, and packaging, and handles post-fabrication testing. Chip fabrication itself is handled by TSMC.

By contrast, Nvidia's Hopper-based H100 GPU includes 80 billion transistors, 80 gigabytes of HBM3 memory, and delivers up to 4 PFLOPS of AI performance using FP8 precision. Its successor, the Blackwell-based GB200, increases HBM capacity to 192 gigabytes and peak throughput to around 20 PFLOPS. It's also designed to work seamlessly in tandem with Grace CPUs in hybrid configurations, expanding Nvidia's presence in both the cloud and emerging local compute nodes.

TPUs are programmed via Google's XLA compiler stack, which serves as the backend for frameworks like JAX and TensorFlow. While the XLA-based approach offers performance portability across CPU, GPU, and TPU targets, it typically requires model developers to adopt specific libraries and compilation patterns tailored to Google's runtime environment. By contrast, Nvidia's stack is broader and more deeply embedded in industry workflows. CUDA, cuDNN, TensorRT, and related developer tools form the default substrate for large-scale AI development and deployment. This tooling spans model optimization, distributed training, mixed-precision scheduling, and low-latency inference, all backed by a mature ecosystem of frameworks, pretrained models, and commercial support.

As a result, moving from CUDA to XLA is no trivial task. Developers must rewrite or re-tune code, manage different performance bottlenecks, and in some cases adopt entirely new frameworks. Meta has internal JAX development and is better positioned than most to experiment, but friction remains a gating factor for wider TPU adoption.

According to Reuters, some Google Cloud executives believe the Meta deal could generate revenue equal to as much as 10% of Nvidia's current annual data center business.
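To make the XLA workflow concrete, here is a minimal sketch of what targeting XLA through JAX looks like. The function and shapes are hypothetical, but the pattern -- decorate a pure function with `jax.jit` and let XLA compile it for whichever backend is available (CPU, GPU, or TPU) -- is the core of the programming model described above.

```python
import jax
import jax.numpy as jnp

# jax.jit traces the function once, then hands the trace to XLA, which
# compiles it for the available backend (CPU, GPU, or TPU) unchanged.
@jax.jit
def scaled_scores(q, k):
    # Hypothetical attention-style kernel: a scaled dot product.
    # q.shape[-1] is static at trace time, so the scale folds into the program.
    return jnp.dot(q, k.T) / (q.shape[-1] ** 0.5)

q = jnp.ones((4, 8), dtype=jnp.float32)
k = jnp.ones((4, 8), dtype=jnp.float32)
scores = scaled_scores(q, k)  # same source runs on a laptop CPU or a TPU pod
```

The same source compiles for any XLA target; what does not carry over from a CUDA codebase is everything around this -- hand-written kernels, memory management, and performance tuning -- which is precisely the migration friction the article describes.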
That is, of course, a speculative figure, but Google has already committed to delivering as many as one million TPUs to Anthropic and is pushing its XLA and JAX stack hard among AI startups looking for alternatives to CUDA.

Still, Google's chips are single-purpose. TPUs do one thing and do it well, but with limits. They're not suited for HPC simulations, general-purpose scientific computing, or any workload that requires flexible execution models or broad kernel support. TPU workloads run only in Google Cloud, while Nvidia chips run across clouds, on-prem systems, local workstations, and edge devices. That flexibility is central to Nvidia's case.

Hyperscalers like Meta are not new to custom silicon; AWS developed Trainium, Microsoft has Maia, and Google's own TPU efforts date back nearly a decade. What's new is the suggestion that another hyperscaler might shift some training off of Nvidia's platform. Even if only partial, such a shift highlights a desire for second-source resilience and bargaining power.

Nvidia's Grace Blackwell architecture is designed to make that kind of migration harder. By coupling Blackwell GPUs with Grace CPUs over a high-speed interconnect, Nvidia enables unified memory access and simplified training and inference workflows. Developers can train on GPU clusters in the cloud and serve models at the edge or in enterprise environments without changing code or retraining.

At the same time, Nvidia is moving deeper into vertical markets where TPUs don't compete. It has partnerships across automotive, robotics, manufacturing, and retail. From Jetson edge modules to DGX supercomputers, Nvidia is positioning its stack as the default execution environment for AI inference everywhere, not just for training large models. By exploring alternatives now, Meta potentially gains leverage in future hardware negotiations and insurance against vendor lock-in.
Even if Google's TPUs don't replace H100s wholesale -- which they likely won't -- they could take on selected inference tasks or serve as overflow capacity in peak cycles, especially if the economics are favorable. At this stage, Meta's TPU adoption looks like little more than minor diversification. Nvidia continues to power the largest and most visible AI workloads in the industry. The company's combination of software tooling, developer lock-in, and general-purpose capability gives it a lead that TPUs can't erase overnight.

Google's biggest challenge in all of this will be winning ground from Nvidia. Meta's participation would give TPUs credibility beyond Google Cloud and Anthropic, but scale is only part of the equation. Whether TPUs can meet the needs of complex, evolving AI workflows outside of a tightly controlled environment -- and whether more hyperscalers are willing to put their faith in a platform that lives largely within one company's walled garden -- remains to be seen.
[2]
Nvidia scoffs at threat from Google TPUs
Embracing the Chocolate Factory's tensor processing units would be easier said than done for The Social Network

Growing demand for Google's homegrown AI accelerators appears to have gotten under Nvidia's skin amid reports that one of the GPU giant's most loyal customers may adopt the Chocolate Factory's tensor processing units (TPUs).

Nvidia's share price dipped on Tuesday following a report by The Information that Meta was in talks to deploy Google's TPUs in its own datacenters beginning in 2027. In response, Nvidia took to the social network formerly known as Twitter, where it offered Google a backhanded compliment on the successes of its TPUs. "We're delighted by Google's success -- they've made great advances in AI and we continue to supply to Google," Nvidia's Newsroom account posted on X. "Nvidia is a generation ahead of the industry -- it's the only platform that runs every AI model and does it everywhere computing is done. Nvidia offers greater performance, versatility, and fungibility than ASICs, which are designed for specific AI frameworks or functions."

As we've previously reported, Google's seventh generation of TPUs, codenamed Ironwood, not only gives Nvidia Blackwell accelerators a run for their money, but can also scale far beyond the GPU giant's 72-GPU racks to pods containing anywhere from 256 to 9,216 chips. Nvidia's next-gen Vera Rubin accelerators are faster, but Google has scale on its side.

"We are experiencing accelerating demand for both our custom TPUs and Nvidia GPUs; we are committed to supporting both, as we have for years," Google told El Reg in a statement, conveniently avoiding the subject of Meta.

Wider adoption of Google's TPUs may pose a threat to Nvidia's bottom line on paper, but it's not clear whether Meta would - or even could - choose them over competing platforms. For one, Google would need to break from convention and offer its TPUs for sale on the open market.
Historically, the accelerators have only ever been available for lease through Google Cloud. But even if Google did agree to sell its chips to Meta, Zuckercorp would still face significant integration challenges. TPU deployments look nothing like the AMD- and Nvidia-based clusters Meta is used to.

Rather than using packet switches to stitch together hundreds or thousands of AMD or Nvidia GPUs into large scale-out compute fabrics, TPUs are connected into large toroidal meshes using optical circuit switch (OCS) tech. We've discussed the significance of OCS in the past, but the important bit is these appliances operate on completely different principles from packet switches, and often require a different programming model.

The bigger challenge, however, is PyTorch, the deep learning library Meta developed to enable machine learning workloads to run seamlessly across CPU and GPU hardware. PyTorch can run on TPUs, but the chips don't support the framework natively, meaning Meta would need to employ a translation layer called PyTorch/XLA. Given the army of software devs at big tech's disposal, Meta and Google could certainly overcome this challenge. But why would they care to?

If the talks did take place as reported, the more likely scenario is that Meta was simply discussing inference optimizations targeting Google TPUs for its family of Llama models. Running inference on a model requires an order of magnitude fewer compute resources than training one. Inference workloads also benefit from proximity to end users, which cuts down on latency and improves interactivity.

Historically, Meta has released its family of large language models (LLMs) to the public on repositories like Hugging Face, where customers can download and run them on any number of accelerators, including Google's TPUs. So Meta needs to ensure Llama runs well on TPUs so enterprises will adopt it.
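The order-of-magnitude gap between training and inference can be made concrete with the standard back-of-envelope FLOPs rules of thumb (training ≈ 6·N·D FLOPs for N parameters and D training tokens; inference ≈ 2·N FLOPs per generated token). The model size and token counts below are illustrative assumptions, not Meta's actual figures:

```python
# Back-of-envelope compute comparison using common rules of thumb:
#   training  ~ 6 * N * D   (N = parameters, D = training tokens)
#   inference ~ 2 * N       per generated token
# All numbers are illustrative assumptions, not Meta's figures.
N = 70e9              # a Llama-70B-class model (assumed)
D = 15e12             # tokens seen during training (assumed)
tokens_served = 1e12  # tokens generated over some serving window (assumed)

train_flops = 6 * N * D
infer_flops = 2 * N * tokens_served
ratio = train_flops / infer_flops  # N cancels: the ratio is 3 * D / tokens_served
```

With these assumptions training costs roughly 45x the inference bill, but note the parameter count cancels out: whether inference stays an order of magnitude cheaper depends entirely on serving volume, which is one reason inference-only TPU adoption is a plausible middle path.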
But if mere inference is the goal, Meta doesn't need to own the chips itself - enterprises can simply run Llama on a TPU leased directly from Google.

Having said all of that, Google is indeed seeing greater interest in its TPU tech from rival model builders, including Anthropic. The Claude developer has been heavily reliant on custom Trainium AI accelerators from Amazon Web Services, but it's diversifying. In October, Anthropic announced plans to use up to a million TPUs to train and serve its next generation of Claude models. This is a lot less jarring than moving from GPUs - as we reported earlier this month, both Google's TPU and Amazon's Trainium use mesh topologies in their compute clusters, lowering the transition cost.

But Anthropic didn't stop there. Last week, Anthropic announced a strategic partnership with Microsoft and Nvidia to purchase $30 billion worth of Azure compute capacity, and to contract additional compute capacity of up to one gigawatt. In exchange, Nvidia and Microsoft agreed to invest up to $10 billion and $5 billion respectively in the AI startup. In other words, all the big AI players are hedging their bets and making alliances with everybody else.

The Register reached out to Meta for comment, but had not heard back at the time of publication. ®
[3]
Nvidia shares tumble on signs Google gaining upper hand in AI
Nvidia shares tumbled on Tuesday on widening fears that Google is gaining ground in artificial intelligence, erasing nearly $250bn in market value from the AI chipmaker. The sell-off saw Nvidia's shares fall 5.3 per cent, marking the company's biggest intraday retreat since April.

Nvidia's slide rippled through the broader market, with the tech-heavy Nasdaq Composite down 0.2 per cent in morning trading. Server maker Super Micro Computer, which is a key partner to Nvidia, fell 3.1 per cent, while software group Oracle, which has committed to spending billions of dollars on the chipmaker's high-performance systems, lost 3.4 per cent. Shares in data centre operator CoreWeave, in which Nvidia owns a 6 per cent stake, fell 4.7 per cent, alongside its AI cloud rival Nebius, which was down 3.6 per cent.

Investors blamed the declines on excitement surrounding Alphabet's own AI-specialised chips, known as tensor processing units. Google last week released Gemini 3, its latest large language model, which is considered to have leapfrogged OpenAI's ChatGPT. Google's model was trained using TPUs rather than the Nvidia chips that power OpenAI's systems.

The release of Gemini 3 "may prove to be a subtler but more important version of the DeepSeek disruption", said Mike O'Rourke at Jones Trading, referring to the Chinese AI start-up whose emergence in January triggered a sharp sell-off for US tech groups including Nvidia. "The market is embracing the view that Google is the clear-cut AI leader," O'Rourke added. Charlie McElligott, a strategist at Nomura, also likened Gemini 3's impact to the DeepSeek shock. Alphabet's latest model has "reset" the "AI hierarchy chess board" and pulled the market into a "new DeepSeek moment", he said in a note to clients.

A report in The Information late on Monday suggested that Google was pitching potential clients including Meta on using TPUs in their own data centres rather than Nvidia's chips.
Google's TPUs have until now only been available for customers to rent through its cloud computing service. Meta, like OpenAI, is one of Nvidia's biggest customers. Alphabet shares rose 1.3 per cent on Tuesday to a fresh record-high, pushing it close to a $4tn market capitalisation for the first time. Nvidia has now lost more than $800bn in market value since it peaked just above $5tn less than a month ago. AMD, Nvidia's main rival in AI-focused chips, also fell 7.6 per cent on Tuesday.
[4]
Billion-dollar AI chip deal between Google and Meta could be on the cards -- would involve renting Google Cloud TPUs next year, outright purchases in 2027
The deal is said to be worth billions to both firms, and has already helped boost Google's parent company to a near $4 trillion valuation

Meta may be on the cusp of spending billions on Google AI chips to power its future developments, as the social-media giant is reportedly in talks to both buy and rent Google compute power for its future AI endeavours, as reported by The Information, via Reuters. The ongoing negotiations reportedly involve Meta renting Google Cloud Tensor Processing Units (TPUs) in 2026, before purchasing them outright in 2027. This news shows continuing collaboration between the companies, despite a recent pause on their undersea cable projects.

To date, Google has mostly leveraged its TPUs for its internal efforts, so this move, if it comes to fruition, would be a change of tactic that could help it capture a sizeable portion of the AI chip business. Considering that few, if any, companies have figured out how to turn a profit from developing AI just yet, Google may be looking to get in on Nvidia's act. The long-time GPU maker has made untold billions since the start of the AI craze, propelling it to become the world's most valuable company within a short timeframe.

Indeed, Reuters reports some Google Cloud executives believe that the shifting strategy would give it the chance to capture as much as a 10% slice of Nvidia's data center revenue. Considering Nvidia made over $51 billion from data centers in Q2 2025 alone, Google cornering that much of Nvidia's revenue would be worth tens of billions of dollars.

Markets reacted to the rumors of this deal, sending Meta and Google stock upwards. Alphabet rose several percent in pre-market trading, and Reuters reported it was on track to become the next $4 trillion company, potentially as soon as later today. Meta stock prices are up, too, but Nvidia took a 3% hit.
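A quick back-of-envelope check on that claim, using the article's own figures (one quarter of data-center revenue, naively annualized; purely illustrative arithmetic, not a forecast):

```python
# Illustrative arithmetic only, based on the figures quoted above.
quarterly_dc_revenue = 51e9            # "over $51 billion" in a single quarter
annualized = 4 * quarterly_dc_revenue  # naive run-rate: ~$204 billion per year
google_slice = 0.10 * annualized       # the reported 10% ambition

print(f"${google_slice / 1e9:.1f}B per year")  # roughly $20 billion: "tens of billions"
```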
Even if Google does clinch this deal and secures a huge order and long-term revenue stream for its TPUs outside of internal use, that capacity is still going to be swallowed up by the AI industry as a whole. There isn't enough compute power, fabrication capacity, or supply-chain logistics to meet the enormous uptick in demand from the AI data center buildouts that have been ongoing this year. Memory prices are skyrocketing, GPU prices are expected to jump next year, and just about everything electronic could be more expensive this time next year. That's if the bubble doesn't burst, of course.

Even 2026 feels a long way off when it comes to this ever-changing industry, but 2027 is a lifetime away. Who knows what the state of AI hardware will be like then, and there's no telling whether Google's TPUs will have any longer a shelf life than Nvidia's top GPUs, especially with aggressive annual release schedules.
[5]
Nvidia shares hit by report on new AI chip competition. How worried should investors be?
Competition for Nvidia's crown in artificial intelligence is ramping up, but analysts aren't too worried -- for now.

Nvidia has been battered this month, losing more than 13%, as investors grow wary of elevated tech valuations. If that loss stands, it would mark the stock's biggest monthly pullback since September 2022, when it plunged 20%. Shares fell another 4% on Tuesday after The Information reported, citing sources, that Meta Platforms was considering using Alphabet's tensor processing units (TPUs) for its data centers. This comes as hyperscalers begin exploring alternatives to Nvidia's graphics processing units (GPUs). TPUs are application-specific chips, or ASICs, and are often more power efficient than GPUs, which are general-purpose chips designed for a broader range of compute workloads.

Analysts aren't taking the news as an immediate hit to Nvidia, as the chipmaker is still the market leader with its GPUs. (Chart: Nvidia and Broadcom stock performances over the past year.) Broadcom has jumped 13% this week, bringing its year-to-date gains to nearly 66%. That's well above Nvidia's 30% advance for 2025.

"GPUs are clearly not going anywhere," Bernstein analyst Stacy Rasgon wrote Tuesday. "Right now the overarching theme is of compute scarcity, and if anything this feels like an effort to secure more."

"To that end, we still think the question of 'ASIC or GPU' kind of misses the point. Right now the real question should really be 'is the opportunity in front of us still big, or is it not?' as (hopefully!) we are not yet in a mature, saturated market for AI hardware. In other words, it's still the size of the pie that matters; if it's big both GPU and ASIC should thrive (and if it's not, they're both in trouble)," he said.

However, this competition could be a boon for longtime ASICs supplier Broadcom, which helps design and manufacture Google's TPUs.
Mizuho highlighted Broadcom as the key beneficiary of Google's potential TPU offerings to Meta. Analyst Vijay Ramesh reiterated the stock as a top pick, though he remains positive on Nvidia given strength in its Blackwell and Rubin pipeline. "We estimate META remains a large customer for NVDA but potentially bigger customer for AMD Instinct. A move to TPUs is +ve for AVGO, and could be a modest challenge for GPU suppliers," Ramesh wrote in a Tuesday note. "We continue to see AVGO and NVDA as the 2 key players in the AI space."

Bank of America analyst Vivek Arya is bullish on both Nvidia and Broadcom as well. The firm believes that TPUs are intensifying competition but that the AI data center market is still in early growth stages. He expects the total addressable market to grow about five times to more than $1.2 trillion by 2030, compared to $242 billion by the end of this year.

"NVDA is trading at ~25x market multiple, essentially valuing the company as another run of the mill franchise, which we disagree with," Arya said in a Tuesday note. But "AVGO certainly has the upper-hand - we expect ~100%+ YoY AI sales growth in CY26 due to additional TPU and Anthropic projects - with the 38x CY26 PE (highest spread versus NVDA) reflecting the justified premium. Note if Google licenses more TPUs directly it might cut into AVGO's direct TAM developing ASICs for other customers."
[6]
Nvidia plays down competition fears over Google's AI chips
Nvidia has claimed it is "a generation ahead" of rivals in the artificial intelligence (AI) industry amid growing suggestions a rival may emerge to threaten its market dominance - and multi-trillion-dollar valuation. Shares in the chip giant fell on Tuesday, following a report that Meta planned to spend billions on AI chips developed by Google to power its data centres.

In a statement on X, Nvidia, the world's most valuable company, said it was the only platform which "runs every AI model and does it everywhere computing is done". In response, Google said it was committed to "supporting both" its own and Nvidia's chips.
[7]
NVIDIA's clout faces mounting threat as Meta 'explores' Google TPUs
According to The Information, Meta is in talks to spend billions of dollars on Google's tensor processing units (TPUs) beginning in 2027, a move that would cast Alphabet as a serious challenger to Nvidia's dominance. The discussions also include the possibility of Meta renting Google Cloud's TPUs as soon as next year. Reuters, citing sources, noted that such a deal would mark a major strategic departure for Google, which has so far deployed its TPUs only inside its own data centers.

The move comes as AI workloads surge and companies search for alternatives to Nvidia's supply-constrained, and high-priced, GPUs. If sealed, the Meta-Google partnership would significantly widen the market for Google's custom silicon and intensify the race for data-center AI processors. Some Google Cloud executives believe the strategy could help the company capture roughly 10% of Nvidia's annual revenue, Reuters reported.

The speculation sent shockwaves across the market. Alphabet rose more than 4% in premarket trade on Tuesday, placing it on track for a $4 trillion valuation, while Broadcom gained 2% and Nvidia shares dropped more than 3%.
[8]
Gemini 3 -- and the custom chips that power it -- is a wake-up call for AI investors
The launch of Google's Gemini 3 has the entire investing world rethinking the artificial intelligence landscape. The new reasoning model not only leapfrogged the latest from ChatGPT juggernaut OpenAI, the still-private company driving so much of the massive AI spending out there, but was also trained entirely on Google's custom chips called tensor processing units (TPUs), co-designed by Broadcom.

In a new post from The Information, the tech outlet said that Meta Platforms is thinking about using Google's TPUs for its data centers in 2027. The report fuels the debate about whether custom silicon is going to take a bite out of Nvidia's graphics processing units business. Club stock Nvidia sank to nearly three-month lows on Tuesday.

Nvidia put out a statement on X, saying, "We're delighted by Google's success -- they've made great advances in AI, and we continue to supply to Google." But the post continued, "Nvidia is a generation ahead of the industry -- it's the only platform that runs every AI model and does it everywhere computing is done."

Jim Cramer, who views the recent Nvidia stock drop as a buying opportunity, said Tuesday that Meta or any other tech companies shopping around on AI chips won't lower the price of Nvidia GPUs, which are considered the gold standard in all-purpose chips to run AI workloads. "The demand is insatiable for Nvidia," Jim said, pointing to last week's solid earnings and rosy guidance.

The real winners here are Meta and Broadcom, which are also Club holdings. Jim said the idea of using less expensive TPUs gives Meta a chance to show that it's not going to just spend like a drunken sailor, which was basically what slammed the stock the day after the company boosted its already massive spending guidance. For Broadcom, Jim said it is another feather in the cap of CEO Hock Tan, who is also on Meta's board. So, if there is truth to The Information story, that might be the connection.
Broadcom and Nvidia have been top performers for the portfolio in 2025, up more than 60% and 30%, respectively. Meta, also a Club stock, has been up and down, and is higher by only about 7.5% year to date. (Chart: Broadcom, Nvidia, and Meta year to date.)

The advent of Gemini 3 and its reliance on TPUs also raises a question about what the model means for OpenAI and its growth trajectory -- not to mention its financial commitments. After all, so much of what's going on with AI nowadays has the ChatGPT creator right at the center of it all. While OpenAI is not yet a public company that reports quarterly earnings, it's safe to assume it doesn't currently make enough money to justify its $500 billion valuation, nor its level of announced spending plans. It's the momentum of user adoption and, more importantly, the sustainability of that momentum that could, if anything, justify OpenAI's spending intentions. If it were to lose its lead, then OpenAI's perceived growth path would bear greater scrutiny. ChatGPT has been trained on Nvidia chips.

Alphabet's Google designed its TPUs with the help of Broadcom. Even before Gemini 3 was released last week, Alphabet stock had been soaring. On Monday, it surged another 6%, extending its year-to-date gains to nearly 70%. Alphabet stock was up again Tuesday, knocking on the door of a $4 trillion market cap.

While some believe that the answers to these questions are that Google/Broadcom are now winning at the expense of Nvidia/OpenAI, and that the future is now all about custom silicon, we say, not so fast. First off, it's way too early to make a call that the battle of AI reasoning models will play out like the search wars, with the winner taking all. The idea that there will only be one model to rule them all, like Google Search has done for more than two decades, is not where we see this going. Not for the hardware, nor for the software or LLMs that run on it.
We still think this could all play out in such a way that certain models are better suited to certain tasks. That could mean Gemini for coding and research, Meta AI for more social or creative tasks, Anthropic and Microsoft playing for the enterprise space, and so on. Since we're still in the early days of AI, the leading model at any given time still must fight to stay on top. For example, when OpenAI's ChatGPT launched in late 2022 and quickly went viral, Google hastily and disastrously stood up Gemini. But here we are three years later, and Gemini 3 catapulted Google to the top of the heap as far as capabilities go. ChatGPT is, however, enjoying its first-mover advantage, reporting early last month over 800 million weekly active users. Google said last week that Gemini has over 650 million monthly active users.

Second, just because Gemini doesn't rely on Nvidia graphics processing units (GPUs) doesn't mean that Nvidia hardware is suddenly less relevant. Custom semiconductors are nothing new. While they can bring financial cost advantages, that advantage does come at a cost to develop, update, and manufacture the chips. Plus, investors must stay mindful that while Gemini may not rely on Nvidia hardware, Google Cloud services do.

TPUs are a type of application-specific integrated circuit (ASIC), meaning that these chips are suited to a particular type of task or application. That's all well and good for internal projects, like the advancement of large language models (LLMs), that will underlie much of Google's own services, such as Search, YouTube, or Waymo. However, TPUs are not as attractive when the aim is to rent compute out to customers, which is what Google does as the world's third-biggest cloud behind Amazon and Microsoft. For renting cloud compute, Nvidia's GPUs are the undisputed champions, as they work with Nvidia's CUDA software platform, which AI researchers have been working with for years.
GPUs are flexible, widely available, and already broadly adopted and familiar to developers around the world. If a customer were to develop strictly on TPUs, they might realize a cost benefit. However, to do so would require giving up CUDA to develop on Google's specific software stack, a stack that doesn't translate to GPUs or likely even to other custom chips that might be offered by other companies. To be sure, for the biggest LLM companies out there, it may make sense to develop a TPU version alongside a GPU one, if the volume of business warrants it.

We're monitoring The Information report about Meta, but we are a bit skeptical. For starters, we already know that Meta is working with Broadcom on its own custom chips, so the idea of buying Alphabet's custom silicon, instead of utilizing the silicon it has been working with Broadcom to optimize for its own workloads, is a bit odd. Alphabet is also Meta's main rival in digital advertising, so the idea that Meta is going to start treating Alphabet as a key supplier, be it for a hardware or a software stack, seems a bit risky. Nonetheless, the race to build out accelerated AI infrastructure has resulted in the formation of plenty of frenemy relationships, so we certainly are not dismissive of the news.

However, developing TPU versions of software alongside GPU-based versions is not going to make sense for most companies. Even if a company's stated goal was to diversify beyond the Nvidia ecosystem, locking itself into another, even more specific software and hardware stack like Google's TPU environment isn't a smart way to go about it. In addition to having to rework years of development written in CUDA before realizing any cost benefit from that effort, a company would also be giving up the ability to move to another cloud provider or even bring workloads in-house.
Google's TPUs aren't available on AWS or Microsoft's Azure clouds, or on neoclouds like CoreWeave, nor can they be purchased outright if a company opts to build its own infrastructure. While The Information report does suggest that Google may consider doing just that, it's not clear when or to what extent it will sell chips to third parties for use in their own data centers -- whether sales will be reserved for large buyers, or open to buyers of all sorts in more direct competition with Nvidia. Time will tell, and we will continue to monitor for further details.

What Gemini 3 does indicate is that there are other ways to go about developing a leading LLM that can be run more cheaply than those based on Nvidia hardware. However, it requires years of work and billions of dollars of investment to develop both the hardware and software necessary to do so. Additionally, what a company like Google develops for internal use to reduce costs may not be as attractive to customers who don't want to be locked in. The strategy only works for companies doing so much volume internally that the benefit of a financial cost reduction is worth the loss of flexibility that Nvidia's GPUs provide. Only a handful of companies in the world have that scale - and fortunately for Nvidia, most of those companies make more money renting out GPU-based compute.

In the end, we're back to where we started, believing that custom silicon does make a lot of sense for the big players, which is one key reason we took a position in Broadcom to begin with. But we know that Nvidia's GPUs have far more reach thanks to their flexibility to operate many different types of workloads and a long history, which has resulted in broad-based adoption, portability from one cloud or on-premises infrastructure to another, and the largest software library around.
Additionally, when we consider sovereign AI spend, nation-state buyers are going to be far more interested in a flexible, open ecosystem like the one Nvidia provides, which lets buyers write their own code with more control, than in a more specialized, closed ecosystem that puts them more at the mercy of a U.S. company. Consider that Google isn't even allowed to operate in China, so are Chinese buyers really going to demand Google TPUs, especially if President Donald Trump authorizes Nvidia's H200 chips for sale into China? Cost savings are important, but from the perspective of a sovereign entity, national security is the priority. The introduction of AI agents may also change some of these dynamics, as it may become easier to switch from one infrastructure to another if AI agents can be deployed to, say, convert CUDA-based programs into something that will run on a TPU. For the time being, though, we don't think the introduction of Gemini 3 is enough to derail the demand Nvidia spoke about, or to put on hold the vast number of deals it has made in recent months. Some may argue that renting out compute (infrastructure-as-a-service) will become less relevant as companies like Alphabet instead sell access to models through an application programming interface (API), in a move toward a model-as-a-service (MaaS) business model. It's a trend we expect to hear more about in a post-Gemini 3 world, but we're not at the point of it altering our investment thesis on Nvidia or the broader AI cohort.
Nonetheless, investors would be remiss not to keep in mind this effort to move away from Nvidia chips in certain instances, and Alphabet's potential move beyond an IaaS model altogether toward a new MaaS business model. Even in that scenario, the world wouldn't need less compute; the end customer may simply be less picky about the hardware its applications run on, since a MaaS model lets the API provider choose the hardware based on cost. While mindful of the evolving playing field, we see no major change to our view of the AI space. We still think Nvidia is a must-own name and that Broadcom is the way to play the custom silicon space. However, the introduction of Gemini 3 should wake investors up to these changes happening under the surface, and the potential risks they may bring, in different ways, to the juggernauts driving AI innovation. (Jim Cramer's Charitable Trust is long NVDA, AVGO, AMZN, META, MSFT. See here for a full list of the stocks.) As a subscriber to the CNBC Investing Club with Jim Cramer, you will receive a trade alert before Jim makes a trade. Jim waits 45 minutes after sending a trade alert before buying or selling a stock in his charitable trust's portfolio. If Jim has talked about a stock on CNBC TV, he waits 72 hours after issuing the trade alert before executing the trade.
[9]
Wedbush: Nvidia still AI champ as Google ramps up chip push
That's a bigger shift than it sounds. For years, Google has kept TPUs tucked inside its ecosystem, a kind of proprietary advantage it rarely tried to commercialize. Opening them up for deployment on someone else's infrastructure makes it look like Google is behaving like a chip supplier, not just a cloud vendor. Google parent Alphabet's stock perked up on the news. Nvidia's slipped. Chip stocks tied to the AI buildout followed the move because no one wants to be late if the pecking order changes. For a moment, traders tried on the idea that Google might finally be a real contender in the most expensive hardware race on earth. But while Google's move may be bold, Ives wants to remind everyone that Nvidia -- his "tech and AI Rock of Gibraltar" -- still rules the ring.
[10]
Nvidia is so spooked by Google's sudden AI comeback that it's posting on X to defend itself | Fortune
The defensive move came after Nvidia stock fell over 2.5% on the news near the close, while shares of Alphabet -- buoyed by its well-reviewed new Gemini 3 model, which was acclaimed by well-known techies such as Salesforce CEO Marc Benioff -- climbed for a third day in a row. The catalyst was a report from The Information claiming that Google has been pitching its AI chips, known as TPUs, to outside companies including Meta and several major financial institutions. Google already rents those chips to customers through its cloud service, but expanding TPU use into customers' own data centers would mark a major escalation of its rivalry with Nvidia. That was enough to rattle Wall Street, and also Nvidia itself. "We're delighted by Google's success -- they've made great advances in AI and we continue to supply to Google," Nvidia wrote in a post on X. "NVIDIA is a generation ahead of the industry -- it's the only platform that runs every AI model and does it everywhere computing is done." It's not hard to read between the lines. Google's TPUs might be gaining traction, but Nvidia wants investors, and its customers, to know that it still sees itself as unstoppable. Brian Kersmanc, a bearish portfolio manager at GQG Partners, had predicted this moment. In an interview with Fortune late last week, he warned that the industry was beginning to recognize Google's chips as a viable alternative. "Something I think was very understated in the media, which is fascinating, but Alphabet, Google's Gemini three model, they said that they use their own TPUs to train that model," Kersmanc said. "So the Nvidia argument is that they're on all platforms, while arguably the most successful AI company now, which is Gemini, didn't even use GPUs to train their latest model."
For most of the past decade, Google's AI chips were treated as a clever in-house tool: fast, efficient, and tightly integrated with Google's own systems, but not a true threat to Nvidia's general-purpose GPUs, which command more than 90% of the AI accelerator market. Part of that is architectural. TPUs are ASICs, custom chips optimized for a narrow set of workloads. Nvidia, in its X post, made sure to underline the contrast. "NVIDIA offers greater performance, versatility, and fungibility than ASICs," the company said, positioning its GPUs as the universal option that can train and run any model across cloud, on-premise, and edge environments. Nvidia also pointed to its latest Blackwell architecture, which it insists remains a generation ahead of the field. But the past month has changed the tone. Google's Gemini 3 -- trained entirely on TPUs -- has drawn strong reviews and is being framed by some as a true peer to OpenAI's top models. And the idea that Meta could deploy TPUs directly inside its data centers -- reducing reliance on Nvidia GPUs in parts of its stack -- signals a potential shift that investors have long wondered about but hadn't seen materialize. The defensive posture wasn't limited to Google. Behind the scenes, Nvidia has also been quietly fighting another front: a growing feud with Michael Burry, the investor famous for predicting the 2008 housing collapse and a central character in Michael Lewis' classic The Big Short. After Burry posted a series of warnings comparing today's AI boom to the dot-com and telecom bubbles -- arguing Nvidia is the Cisco of this cycle, meaning it similarly supplies the hardware for the buildout and could suffer a similarly severe correction -- the chipmaker circulated a seven-page memo to Wall Street analysts specifically rebutting his claims. Burry himself revealed the memo on Substack.
Burry has accused the company of excessive stock-based compensation, inflated depreciation schedules that make data center buildouts appear more profitable, and enabling "circular financing" in the AI startup ecosystem. Nvidia, in its memo, pushed back line by line. "Nvidia does not resemble historical accounting frauds because Nvidia's underlying business is economically sound, our reporting is complete and transparent, and we care about our reputation for integrity," the company said in the memo, which Barron's was first to report.
[11]
Nvidia says its GPUs are a 'generation ahead' of Google's AI chips
Nvidia on Tuesday said its tech remains a generation ahead of the industry, in response to Wall Street's concerns that the company's dominance of AI infrastructure could be threatened by Google's AI chips. "We're delighted by Google's success -- they've made great advances in AI and we continue to supply to Google," Nvidia said in a post on X. "NVIDIA is a generation ahead of the industry -- it's the only platform that runs every AI model and does it everywhere computing is done." The post came after Nvidia shares fell 3% on Tuesday following a report that Meta, one of its key customers, could strike a deal with Google to use its tensor processing units for its data centers. In its post, Nvidia said its chips are more flexible and powerful than so-called ASIC chips -- such as Google's TPUs -- which are designed for a single company or function. Nvidia's latest generation of chips is known as Blackwell. "NVIDIA offers greater performance, versatility, and fungibility than ASICs," Nvidia said in its post. Nvidia has over 90% of the market for artificial intelligence chips with its graphics processors, analysts say, but Google's in-house chips have gotten increased attention in recent weeks as a viable alternative to the Blackwell chips, which are expensive but powerful. Unlike Nvidia, Google doesn't sell its TPU chips to other companies; it uses them for internal tasks and allows companies to rent them through Google Cloud. Earlier this month, Google released Gemini 3, a well-reviewed state-of-the-art AI model that was trained on the company's TPUs, not Nvidia GPUs. "We are experiencing accelerating demand for both our custom TPUs and Nvidia GPUs," a Google spokesperson said in a statement. "We are committed to supporting both, as we have for years."
Nvidia CEO Jensen Huang addressed rising TPU competition on an earnings call earlier this month, noting that Google was a customer for his company's GPU chips and that Gemini can run on Nvidia's technology. He also mentioned that he was in touch with Demis Hassabis, the CEO of Google DeepMind. Huang said that Hassabis texted him to say that the tech industry theory that using more chips and data will create more powerful AI models -- often called "scaling laws" by AI developers -- is "intact." Nvidia says that scaling laws will lead to even more demand for the company's chips and systems.
[12]
Nvidia shares dip as AI accelerator race shifts interest to Google
A report that Meta may shift billions in AI spending towards Google's custom chips sent Nvidia shares lower, signalling early signs of a potential shake-up in the company's near-monopoly of the AI hardware market. Nvidia shares dipped in pre-market trading on the NASDAQ on Tuesday after widespread reports that Meta, the company behind Facebook and Instagram, is in talks to spend billions on Google's in-house AI chips, known as tensor processing units (TPUs). Nvidia fell almost 4% to around $175.44 (€152.20), down from its late-October high of around $212 (€183.76). Trading was heavy in the after-hours session, with more than 250 million shares changing hands. Nvidia spent most of this year being heralded as the winner of the AI race because its chips, especially the H100 GPU, became the essential hardware powering nearly every major AI model. That gave the company a near-monopoly on the market while also turning it into one of the world's most valuable firms. But Nvidia's chips are general-purpose GPUs, originally built for graphics and later repurposed for AI, while Google's TPUs are specialised processors designed from the get-go almost exclusively for machine-learning tasks, making them faster and more efficient for certain types of AI work. The market movement could suggest that while Nvidia's experience in chipmaking granted it the lion's share of initial investor interest, the world's largest buyers of artificial intelligence hardware are now weighing alternatives that are specifically tailored to AI. Such a development could have long-term implications for the trillion-dollar chipmaker. Nvidia's chips are famous for their exceptional performance in computer graphics rendering, gaming, video processing and 3D modelling. According to a report from business outlet The Information, Meta is considering deploying Google's TPUs in its data centres from 2027 and may also rent TPU capacity through Google Cloud as early as next year.
Google's TPUs are not general-purpose processors and underperform on tasks outside of machine-learning or AI-related workloads, meaning they can't replace CPUs or GPUs for ordinary computing jobs. The share movements therefore suggest that the chips that have long powered our laptops and desktops have become less of a priority for investors.
The end of a monopoly?
At the heart of the share price drop is the prospect of a challenge to Nvidia's near-monopoly on AI accelerators. Market analysts estimate Nvidia currently holds between 80% and 90% of the market -- some even going up to 95% -- with its H100 and H200 GPUs forming the backbone of global AI training infrastructure. Meta alone said it planned to acquire more than 350,000 H100 chips in a company report last year -- an enormous commitment that reflects both scale and reliance on a single supplier. Nvidia's GPUs aren't going to be cast aside entirely, but everything boils down to scale in the AI race. The price of Nvidia's chips, and the risk of constrained supply, make them less appealing to hyperscalers -- companies seeking to secure a vast, steady supply of chips as they expand into AI. If Nvidia fails to produce enough GPUs to meet global demand, hyperscalers want to avoid reliance on a single supplier. TPUs give big buyers a second source of chips, reducing supply risk but also giving them pricing leverage. Even a modest rebalancing of demand from a buyer of Meta's size could therefore shift sentiment across the sector. For Google, the market movements support its long-term push to turn TPUs into a commercial product. Originally built more than ten years ago as application-specific integrated circuits (ASICs) for machine-learning tasks and used only within Google, TPUs are now being sold externally. The deal with Anthropic to provide up to one million TPUs marks a major step and makes them a credible alternative to Nvidia's GPUs for both the training and application of AI models.
[13]
Nvidia shares drop on report Meta may buy TPUs from Google - SiliconANGLE
Shares of Nvidia Corp. closed 2.59% lower today following a report that Meta may buy Google LLC's TPU artificial intelligence chips. Sources told The Information that the processor deal could be worth billions of dollars. It's believed Meta may kick off the partnership next year by leasing TPUs hosted by Google. According to the report, the Facebook parent may deploy the chips in its own data centers from 2027 onwards. Google debuted its newest TPU, Ironwood, in April. The chip (pictured) comprises two dies that host 192 gigabytes of high-speed HBM memory and six custom AI processing modules. Those accelerators are based on two different designs. Each Ironwood chip includes four SparseCores, accelerators optimized to process large embeddings. An embedding is a mathematical structure that stores information for AI models. There are also two TensorCores designed to speed up matrix multiplications, the calculations AI models use to process data. Google deploys Ironwood in liquid-cooled clusters that contain up to 9,216 chips. The company says that a single cluster can provide 42.5 exaflops of performance. One exaflop corresponds to a billion billion calculations per second. It's unclear whether Meta's potential chip deal with Google would see it deploy Ironwood or a different TPU. Given that the Facebook parent is expected to start installing the chips in 2027, it may seek to buy the successor to Ironwood that will likely debut next year. Google could theoretically provide Meta with complete TPU clusters of the kind it uses to power its cloud platform. However, it's possible the social media giant will instead opt to buy only chips and install them in its own systems. Meta uses custom server and rack designs in its data centers. A few years ago, the Facebook parent developed a custom inference chip called MTIA. In February, Reuters reported that the company was planning to deploy a new iteration of the processor by the end of 2025.
The potential TPU contract with Google suggests the company might be scaling back its plans for MTIA. Alternatively, Meta may be planning to use MTIA for inference and run training workloads on TPUs. Google's shares closed 1.62% higher on the report. Broadcom Inc., which helps the search giant design its TPUs, gained 1.87%. The potential chip deal drew a response from not only Wall Street but also Nvidia. "We're delighted by Google's success -- they've made great advances in AI and we continue to supply to Google," the chipmaker wrote in a statement published on X. "NVIDIA is a generation ahead of the industry -- it's the only platform that runs every AI model and does it everywhere computing is done."
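The cluster figures quoted above imply a per-chip number worth spelling out. The sketch below is illustrative arithmetic only: Google's published 42.5-exaflop rating is a cluster-level figure, and the per-chip value here is simply derived by dividing it across the 9,216 chips.

```python
# Back-of-the-envelope math on the Ironwood figures quoted above.
# Google's published number is cluster-level (42.5 exaflops across a
# 9,216-chip cluster); the per-chip figure below is just that quotient.
EXAFLOP = 1e18  # one billion billion calculations per second

cluster_flops = 42.5 * EXAFLOP
chips_per_cluster = 9216

per_chip_pflops = cluster_flops / chips_per_cluster / 1e15
print(f"~{per_chip_pflops:.2f} PFLOPS per chip")  # ~4.61 PFLOPS
```

That works out to roughly 4.6 petaflops per chip. One caveat when comparing against other accelerators: vendors quote peak throughput at different numeric precisions, so such headline numbers are only comparable at a matching precision.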
[14]
Meta Considers Google TPUs as an Alternative to NVIDIA Chips | AIM
Meta is considering deploying Google's Tensor Processing Units (TPUs) in its data centres from 2027, a move that could challenge NVIDIA's longstanding dominance in AI hardware, according to The Information. Meta is reportedly in talks to spend billions on TPUs, exploring both long-term deployment and the possibility of renting Google's chips through Google Cloud as early as next year. The discussions come as major AI developers look to diversify suppliers amid soaring demand and concerns over dependence on NVIDIA GPUs, which are the current industry standard for training and running large AI models. Interestingly, Google's latest model, Gemini 3, was also trained on TPUs. Alphabet shares rose as much as 2.7% in late trading following the report, while NVIDIA slipped by a similar margin, reflecting investor expectations of a potential shift in market dynamics. If finalised, the Meta-Google arrangement would bolster TPUs as a credible alternative in high-performance AI computing. Google has already signed a separate agreement to provide up to one million TPUs to Anthropic. With Meta's capital expenditure projected to exceed $100 billion in 2026, Bloomberg analysts estimate the company could spend $40-$50 billion next year on inferencing-chip capacity alone, potentially accelerating demand for Google Cloud services. TPUs, designed more than a decade ago specifically for AI workloads, have gained traction as companies evaluate customised, power-efficient alternatives to traditional GPUs. While NVIDIA still commands the vast majority of the AI chip market and AMD remains a distant second, TPUs are emerging as a strong contender, especially as companies seek to mitigate reliance on a single dominant supplier.
[15]
Markets wipe $250 billion off Nvidia as they digest Google's revenge, with Gemini 3 emerging as 'current state-of-the-art' | Fortune
Google's launch of Gemini 3 is rewriting the map of artificial intelligence -- if a few billionaire fans and billions of dollars in market reaction are any indicator. As much as $250 billion was wiped off Nvidia's market cap in Tuesday morning trading as markets digested the reality that maybe the search empire is striking back in the race to win the AI space. Billionaires Marc Benioff and Mark Cuban offered contrasting opinions that put in perspective what a pivotal moment this Thanksgiving could be in the AI race. Benioff, the Salesforce founder and CEO who is close to the center of the evolving AI story, posted on X.com that he's been using ChatGPT every day for the last three years but he's "not going back" after trying out Gemini 3. "The leap is insane," he wrote, adding that "it feels like the world just changed, again." Cuban, for his part, warned on the Pioneers of AI podcast that the AI race could end up a lot like the search race in the 1990s. "You've got five, six, whatever it is, companies that are trying to create the ultimate foundational model that we all depend on," he said, likening it to the days before Google emerged, when "you didn't know if it was going to be a winner-take-all, or a top five." AI could easily end up the same way, he added, in remarks previously reported by Business Insider. "Now, we know with search engines it's Google ... it's effectively a winner-take-all." Analysts on Wall Street have been saying that there's a new leader in the race, and the empire, so to speak, is striking back. This week's events raise the question: what if the Google of the AI race is, after all the drama of the last three years, Google itself? On November 18, Google unveiled Gemini 3 -- the first AI model built directly into its Search platform -- with what it described as breakthrough performance across coding, mathematics, scientific reasoning, and creative writing.
As reported by The Verge, Gemini 3 quickly topped rankings on LMArena, a widely followed leaderboard for AI models, with a record-breaking comprehension and context score. Google highlighted the model's new Deep Think reasoning mode as a leapfrogging advance over previous Gemini and OpenAI releases, offering the technical community a tool that excels at both multimodal tasks and deeper logic challenges. In testing, Google said that Deep Think outperformed Gemini 3 Pro on Humanity's Last Exam (41.0% without the use of tools) and GPQA Diamond (93.8%), while notching an unprecedented 45.1% on ARC-AGI-2, which tests for solving novel challenges. Analysts were effusive about Gemini 3. DA Davidson characterized Gemini 3 as "current state-of-the-art" and its "favorite model generally available today," while Bank of America Securities wrote that it was "another positive step" for Google as it worked to close any "perceived LLM performance gap" to rivals including OpenAI. At the same time, OpenAI has been facing a reported decline in engagement, and The Information reported that CEO Sam Altman warned staffers of "temporary economic headwinds" and "rough vibes" amid increasing competition. Stock movements seem to be telling a story. Nvidia, at the center of a massive web of spending with OpenAI a key player as well, is down nearly 4% since the release of Gemini 3, and down nearly 9% over the last month, while Alphabet is up 11% over the last five days and almost 19% over the last month. Other stocks more closely tied to the OpenAI story have suffered even more, with Advanced Micro Devices down over 13% over five days and over 23% over a month, while Oracle is down over 10% and over 30% over the same respective time periods.
[16]
Nvidia shares fall 3% on report Meta will use Google AI chips
On Monday, The Information reported that Meta is considering using Google's tensor processing units (TPUs) in its data centers in 2027. Meta may also rent TPUs from Google's cloud unit next year, the publication reported. Google unveiled its first-generation TPU in 2016, a chip initially designed for use in its own services. Since then, Google has launched more advanced versions designed to handle artificial intelligence workloads. TPUs are customized chips, and experts say this gives Google an advantage over rivals, as it can offer customers a highly efficient product for AI. If Meta uses the TPUs, it would be a big win for Google and a potential validation of the technology. Nvidia remains the market leader with its graphics processing units (GPUs), which have become the main piece of hardware underpinning the huge AI infrastructure buildout. While Nvidia's dominance is unlikely to be dislodged in the near term, Google's TPUs add further competition to the AI semiconductor market. Companies building AI infrastructure have been searching for a more diversified supply of chips to reduce reliance on Nvidia. Meta is among the biggest spenders on AI infrastructure, with the company projecting its capital expenditure at between $70 billion and $72 billion this year.
[17]
Nvidia plunges as AI chip rivalry with Google escalates
Meta Platforms is in talks to spend billions on Google's AI chips, The Information reported, adding to a monthslong share rally as the search giant has made the case it can rival Nvidia as a leader in artificial intelligence technology. A deal would signal growing momentum for Google's chips and long-term potential to challenge Nvidia's market dominance, after the company earlier agreed to supply up to 1 million chips to Anthropic PBC.
[18]
Google's reported chip deal with Meta shakes up the AI market, pressuring Nvidia stock
Google is reportedly in talks to sell billions of dollars' worth of its custom artificial intelligence chips to Meta Platforms, a potential deal that sent ripples through the stock market. News of the talks caused the Nvidia stock price to dip while boosting the Google stock price, as investors began to reconsider the competitive balance in the high-stakes market for AI computing power. Meta is considering using Google's Tensor Processing Units (TPUs) in its own data centers starting in 2027. This development is significant because it suggests a major tech company is looking for a serious alternative to Nvidia, and it could have a long-term impact on the Meta stock price as the company diversifies its suppliers for critical components. For years, Nvidia has been the undisputed leader in chips designed for artificial intelligence. Its powerful processors have been essential for nearly every major company, from Meta to OpenAI, for developing and running complex AI systems. This near-monopoly has been a primary reason for the incredible growth in the Nvidia stock price. However, Google's potential deal with Meta signals that its custom-built TPUs are emerging as a credible and powerful alternative. This isn't the first time Google has attracted a major AI player; the company previously secured a deal to supply up to 1 million of its TPUs to Anthropic. A partnership with a giant like Meta, however, would be a much bigger validation. The market reacted swiftly to the news. The Nvidia stock price fell by as much as 3% in premarket trading, a clear sign of investor concern. In contrast, shares of Google's parent company, Alphabet, gained 2.4%, building on recent optimism around its latest Gemini 3 Pro AI model. For Google, this is about more than just a single sale; it's a chance to finally monetize a decade-long investment in custom chip design. 
Successfully supplying a customer the size of Meta would not only bring in billions in revenue but also prove that its TPUs can compete with Nvidia's best on both performance and efficiency. This potential for a huge new revenue stream is a significant factor supporting the recent rise in the Google stock price. For Meta, the deal is a smart and strategic move. The company plans to spend at least $100 billion on its data centers in 2026, and relying almost entirely on a single supplier for the most critical hardware creates a major business risk. By bringing in Google's TPUs, Meta can reduce this risk, increase its negotiating power, and potentially lower its long-term costs. This kind of prudent supply chain management could positively influence the long-term outlook for the Meta stock price. For some time, companies around the world have been worried about their overreliance on Nvidia. While Nvidia's processors were originally designed for graphics, they turned out to be perfect for training AI systems. Google's TPUs, on the other hand, were designed from the very beginning with only artificial intelligence in mind. The potential deal between Google and Meta is a clear sign that the market for AI chips is becoming more competitive. While the Nvidia stock price has long reflected a dominant market position, the emergence of powerful alternatives means the race is far from over. Investors will be watching closely, as the battle to supply the computing power for the next generation of technology will have a lasting impact on the stock prices of all the major companies involved.
[19]
Google Is Coming for Nvidia's Crown in the AI Race
The market is acting like Google is coming for Nvidia. Alphabet's vertical integration with its new Gemini 3 AI model and its custom AI chips has fueled enthusiasm for its standing in the AI race while spooking Nvidia shareholders, creating a widening performance gap between the two stocks. Specifically, worries seem to be rising that the customers who have long relied on Nvidia chips could soon turn to Google. Indeed, The Information reported this week that Meta is in talks to use Google's Tensor Processing Units (TPUs). That, in theory, redirects billions in business from Meta out of Nvidia's pocket and into Google's. The report alone was enough to wipe out hundreds of billions of dollars of Nvidia's market cap on Tuesday while boosting Alphabet's, shrinking the valuation gap between the two heavyweights to its narrowest since April. To be clear, Nvidia's graphics processing units (GPUs) remain the gold standard for the AI industry. But Google's TPUs -- which power its highly praised Gemini 3 -- are cheaper to develop and require less power. Some industry experts estimate that TPUs offer up to four times better performance per dollar than comparable GPUs. So while the technology itself may not be apples-to-apples competitive, the economics of choosing one over the other does seem to be a hit against Nvidia, in the market's view. Nvidia, for its part, seemed to brush off the news entirely. "We're delighted by Google's success," Nvidia's communications team wrote in a statement. "They've made great advances in AI and we continue to supply to Google. Nvidia is a generation ahead of the industry -- it's the only platform that runs every AI model and does it everywhere computing is done."
Shares of Alphabet have more than doubled the returns of Nvidia so far this year, though only in recent weeks have technologists seemed to concede that Google could be winning the AI race. Its outperformance is underscored by its unique "full stack" advantage: owning both the model and the chips it runs on. That's left companies like Oracle, which bought billions of dollars' worth of Nvidia chips to rent out, lagging the market as it reprices to a landscape that includes Google's more economical alternatives. While both companies are sure to compete for years to come, these developments confirm that the AI chip battle is no longer a monopoly.
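The "performance per dollar" comparison above is ultimately simple division. A minimal sketch, using hypothetical placeholder figures rather than actual Nvidia or Google pricing or throughput, shows how a roughly 4x gap can arise even when a chip's raw throughput is lower:

```python
def perf_per_dollar(tflops: float, cost_usd: float) -> float:
    """Throughput delivered per dollar of acquisition or rental cost."""
    return tflops / cost_usd

# Hypothetical placeholder figures -- not real Nvidia or Google numbers.
gpu = perf_per_dollar(tflops=1000.0, cost_usd=30000.0)
tpu = perf_per_dollar(tflops=900.0, cost_usd=7000.0)

ratio = tpu / gpu
print(f"TPU perf-per-dollar advantage: {ratio:.1f}x")  # prints 3.9x
```

The point of the sketch is that a chip can trail on raw throughput yet still win on economics if its cost is low enough, which is the dynamic the market appears to be reacting to.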
[20]
Nvidia's Business Is Booming. Its Stock Is Falling. What Gives?
Reports that Meta is in talks to use Google's custom AI chips added to concerns that Nvidia's dominance in the market could be challenged. Nvidia blew past high expectations when it reported quarterly results last week. Its stock is getting hit anyway. Shares of the chip giant are down more than 8% since it reported record quarterly revenue and earnings and offered up an outlook that easily exceeded Wall Street's expectations. As of Tuesday, the stock is trading about 17% below its record high from late October, when optimism about the AI boom helped make Nvidia the world's first $5 trillion company. Since then, it's been among the stocks hit hardest by concerns about an AI bubble. Some investors are worried that hyperscalers like Microsoft (MSFT) and Oracle (ORCL) will be left with a glut of data center capacity -- and, potentially, piles of debt -- if AI demand falls short of expectations. Others argue that, even if demand is as strong as Silicon Valley expects, the tech giants are still likely spending money inefficiently in their haste. Nvidia has added to bubble fears by investing in several customers, including ChatGPT maker OpenAI and cloud provider CoreWeave (CRWV). Those deals have drawn comparisons to the vendor financing that helped to inflate the Dotcom Bubble of the late 1990s. "If you define a 'bubble' as what we saw in 2008, leverage, speculation, and no underlying demand, that's not what's happening today," said Carmen Li, founder and CEO of GPU market intelligence firm Silicon Data, in written comments. "But if you define it as pockets of overbuild or mispriced expectations about residual value, then yes, there are areas where investors should be cautious," she added. Nvidia stock has also been hit by concerns about its dominance in the AI chip market.
Shares were down 6% in recent trading after The Information on Monday evening reported Meta Platforms (META) was in talks to spend billions on Alphabet's (GOOG) AI chips for its data centers starting in 2027. Meta is also reportedly considering renting Alphabet chips as early as next year. Microsoft, Amazon, Alphabet, and Meta have been working on custom chips for years in a bid to lower costs and lessen their reliance on Nvidia. Google's talks with Meta, and the success of the former's latest model, Gemini 3, have boosted hopes on Wall Street that those investments are paying off. "We're delighted by Google's success," Nvidia wrote on X Tuesday. "They've made great advances in AI and we continue to supply to Google. Nvidia is a generation ahead of the industry -- it's the only platform that runs every AI model and does it everywhere computing is done." Citi analysts in a note on Tuesday said they expected custom chips to account for 45% of the AI accelerator market by 2028, up from an estimated 35% today.
[21]
Google is in talks to sell custom AI chips to Meta
Google is reportedly negotiating a multibillion-dollar agreement to supply its custom AI chips to Meta Platforms, a move that would mark a significant strategic shift for Google as it seeks to challenge Nvidia's dominance in the AI infrastructure market. According to a report from The Information on Monday evening, the proposed deal would see Meta deploying Google's Tensor Processing Units (TPUs) within its own data centers starting in 2027. Additionally, Meta plans to begin renting TPU capacity directly from Google Cloud as early as next year. This potential partnership represents a departure from Google's traditional business model, which has historically restricted access to its TPUs exclusively through cloud rentals on the Google Cloud Platform rather than selling the hardware for outside use. Following the report, Alphabet shares rose between 2.1% and 2.5% in after-hours trading, while Nvidia stock dipped 1.8%. The news arrived shortly after Alphabet stock had already surged more than 6% during the regular session, buoyed by the positive reception of its recently launched Gemini 3 AI model. Google is actively pitching its TPU technology to a broader range of clients beyond Meta, including high-frequency trading firms and major financial institutions. The company is emphasizing that on-premises deployment of these chips can help organizations meet strict security and compliance requirements. Currently, Meta relies heavily on Nvidia GPUs to power the AI infrastructure serving its more than 3 billion daily users. Google Cloud executives estimate that expanding TPU adoption could allow the company to capture up to 10% of Nvidia's annual revenue, translating to billions in potential revenue amid ongoing global supply constraints for AI computing power. The deal would serve as a major validation of Google's decade-long investment in custom silicon. 
The company recently introduced Ironwood, its seventh-generation TPU, which claims to offer four times the performance of its predecessor and is nearly 30 times more energy-efficient than the first Cloud TPU released in 2018. Competition in the sector is intensifying, with AI startup Anthropic committing in October to access up to one million Google TPUs in a deal valued at tens of billions of dollars, citing price-performance and efficiency as decisive factors. Google continues to partner with Broadcom for the design and manufacturing of these chips.
[22]
Nvidia 'delighted' at Google's chip push, but asserts its 'generational' lead
Nvidia welcomes Google's entry into the external AI chip market, even as it asserts its industry leadership. While Google plans to offer its TPUs to other companies, Nvidia highlights its own GPUs as a superior, versatile platform for all AI models across various computing environments. Artificial intelligence chipmaker Nvidia welcomed Google's push to expand its market share, even as it asserted its own leadership in the segment. Reports suggested that Google is planning to market its Tensor Processing Units (TPUs) to external parties for AI computing, after using them exclusively in-house until now. Google, the maker of the Gemini AI models, is looking to capture 10% of the business of Nvidia, which is currently the dominant player in the space. "We're delighted by Google's success -- they've made great advances in AI and we continue to supply to Google," Nvidia posted on X. "Nvidia is a generation ahead of the industry -- it's the only platform that runs every AI model and does it everywhere computing is done. Nvidia offers greater performance, versatility, and fungibility than ASICs, which are designed for specific AI frameworks or functions," it added. Announcing estimate-beating third quarter results earlier this month, Nvidia reported strong demand for its chips. CEO Jensen Huang said that companies around the world are shifting from classical computing machines and software relying on CPUs, to AI-infused systems equipped with graphics processing units (GPUs), which are Nvidia's specialty. Google is eyeing the AI chips market as demand for computing power surges. AI companies are looking to secure as much processing power as possible to meet rising individual and enterprise demand, while vying to achieve artificial general intelligence. OpenAI, too, is attempting to develop its own custom silicon through a partnership with Broadcom, with initial deployment likely in the latter half of 2026.
[23]
Broadcom's New Google Chips Could Be 40% Cheaper To Run Than Nvidia's, Analyst Says - Broadcom (NASDAQ:AVGO)
Broadcom Inc (NASDAQ:AVGO) is gaining momentum in the artificial intelligence accelerator race thanks to Alphabet Inc (NASDAQ:GOOGL) (NASDAQ:GOOG) Google's push to scale Tensor Processing Unit-powered computing across both internal systems and future external customers. BofA Securities analyst Vivek Arya reiterated a Buy rating on Broadcom and increased the price target from $400 to $460. Arya noted Google's growing reliance on its homegrown TPUs as a powerful tailwind for Broadcom, the key silicon design partner behind the chips. After Google trained Gemini 3 entirely on TPUs and signaled plans to rent them out to external customers, Arya analyzed how the latest TPUv7 stacks up against NVIDIA Corp's (NASDAQ:NVDA) new Blackwell Ultra graphics processing units and refreshed Broadcom's revenue model tied to TPU shipments and pricing. He expects Broadcom's TPU business to accelerate meaningfully. The current TPU average selling price of roughly $5,000 to $6,000 and an estimated two million units in calendar 2025 could grow to $12,000 to $15,000 and more than three million units in 2026. If demand from both Google and new external customers like Anthropic and Meta Platforms Inc (NASDAQ:META) expands, shipments could stretch toward 3.6 million to 3.8 million units, Arya said. The analyst argued that TPUv7 can outperform Nvidia's GB300 on power efficiency for specific AI training tasks, such as FP8 large language models, producing about 5.4 TFLOPs per watt versus roughly 3.6 TFLOPs per watt for Blackwell Ultra. That advantage can translate to up to 40% lower total cost of ownership for those optimized workloads. Still, he noted performance varies heavily depending on workload type, model support and software optimization -- areas where Nvidia's GPU ecosystem still leads. What Does the Future Hold? 
Looking ahead, Nvidia's next major architecture, Vera Rubin, could leapfrog TPU on memory technology and system cost when it arrives in late 2026. Broadcom's TPU roadmap may not deliver a major semiconductor upgrade until TPUv8 in 2027, potentially narrowing competitive gains, Arya said. The analyst also flagged risks. If Google shifted from renting TPUs exclusively through Google Cloud to selling them directly, it could create fresh competition for Broadcom's other custom Application-Specific Integrated Circuit customers, including Meta, ByteDance and OpenAI. Google could also expand its design partner roster by adding MediaTek for lower-tier TPU variants, reducing Broadcom's share. Even with those uncertainties, he noted Broadcom remains well-positioned as AI compute spending scales. Arya expected the company's custom ASIC and TPU products to account for up to 15% of the roughly $900 billion AI accelerator market by 2030, up from roughly 8% this year. The analyst modelled more than $23 in Broadcom EPS power by 2030 and modestly adjusted gross margin expectations to reflect a higher mix of compute silicon. Overall, he noted rising TPU leverage and broadening adoption as meaningful growth catalysts for Broadcom as AI infrastructure investment ramps through 2027 and beyond. Arya projected fourth-quarter revenue of $17.445 billion (up from prior forecast of $17.405 billion) and adjusted EPS of $1.83 (down from prior guidance of $1.85). AVGO Price Action: Broadcom stock was trading lower by 2.92% to $391.19 at publication on Monday.
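The analyst's headline figures can be sanity-checked with back-of-the-envelope arithmetic. The sketch below is an illustration using only the numbers quoted in the note, not Arya's actual model; it computes the implied perf-per-watt advantage and the implied TPU silicon revenue from the ASP and unit estimates:

```python
# Back-of-the-envelope check of the figures quoted from BofA's note.
# All inputs come from the article; the "model" here is illustrative only.

tpu_v7_tflops_per_watt = 5.4        # FP8 LLM training, per the note
blackwell_ultra_tflops_per_watt = 3.6

# Perf-per-watt advantage, and the energy saved per unit of compute
advantage = tpu_v7_tflops_per_watt / blackwell_ultra_tflops_per_watt
energy_saving = 1 - blackwell_ultra_tflops_per_watt / tpu_v7_tflops_per_watt

print(f"TPUv7 perf/watt advantage: {advantage:.2f}x")      # 1.50x
print(f"Energy-per-FLOP reduction: {energy_saving:.0%}")   # 33%

# Implied TPU silicon revenue from the ASP and unit estimates
rev_2025 = (5_000 + 6_000) / 2 * 2_000_000   # midpoint ASP x 2M units
rev_2026_low = 12_000 * 3_000_000
rev_2026_high = 15_000 * 3_800_000
print(f"2025 implied revenue: ~${rev_2025 / 1e9:.0f}B")
print(f"2026 implied range: ${rev_2026_low / 1e9:.0f}B-${rev_2026_high / 1e9:.0f}B")
```

Note that the pure energy-per-FLOP saving works out to about 33%; the "up to 40% lower total cost of ownership" claim presumably also folds in chip price and system-level costs, which the note does not itemize.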
[24]
NVIDIA Hits Back at Claims Google's TPUs Could Overtake Its AI Stack, Saying It Offers "Greater Performance and Versatility" Than ASICs
NVIDIA has responded to all the 'buzz' around Google's TPUs and reports of their external adoption, saying that ASICs are limited to "specific AI frameworks or functions". NVIDIA Says ASICs Are Limited To a Specific Workload, While the Firm Is Responsible For the 'Entire' AI Revolution Google TPUs have been the 'talk of the town' in recent times, especially since there have been reports that the AI chips from the company are being adopted externally, by the likes of Meta and Anthropic. Suddenly, a narrative has emerged, claiming that an ASIC manufacturer is poised to replace NVIDIA in a segment dominated by Team Green for several years now. Building upon this, NVIDIA has responded to reports about Google's TPUs, saying that it is "delighted" by the success of the tech giant. However, at the same time, the firm has responded to the competition with ASICs. We're delighted by Google's success - they've made great advances in AI, and we continue to supply to Google. NVIDIA is a generation ahead of the industry - it's the only platform that runs every AI model and does it everywhere computing is done. NVIDIA offers greater performance, versatility, and fungibility than ASICs, which are designed for specific AI frameworks or functions. - NVIDIA's spokesperson in a statement to Wccftech The statement comes after an earlier report by The Information, which claimed that Meta is in line to purchase 'billions of dollars' worth of Google TPUs for its AI workloads, with projections that external adoption of Google's ASICs could eventually account for 10% of NVIDIA's AI revenue. The idea here is that Google has been successful in vertically integrating its AI workloads with self-built TPUs, particularly in inference workloads, and has managed to achieve superior performance parameters compared to what NVIDIA offers. 
It won't be wrong to say that, of all the ASIC contenders, Google is one of the most competitive, especially since the firm has been in the game for almost a decade now. But, as NVIDIA sees it, ASICs are designed for "specific frameworks", while the company's tech stack, whether the computing architecture or the CUDA platform, targets the whole AI ecosystem, from pre-training to post-training to tuning LLMs. It's also important to note that Google is a major customer of NVIDIA's AI hardware, so TPUs are just one part of a wider picture currently dominated by NVIDIA. It will be interesting to see how the race between ASICs and NVIDIA's technology progresses, but the segment will certainly become significantly more competitive, especially as we move into a world where inference is the 'real deal' for AI giants.
[25]
This Trillion-Dollar Stock Could Be the Next Nvidia | The Motley Fool
Nvidia (NVDA 1.83%) has been one of the best stocks to own since the AI revolution kicked off in 2023. However, there are questions about its dominance heading into 2026. There is rising competition from AMD, but a relatively unheralded competitor could be challenging Nvidia. Who is this competitor that is making headlines in recent weeks? It's not Alphabet and its high-powered Tensor chips; it's Broadcom (AVGO +1.37%). Nvidia makes graphics processing units (GPUs), which are well-suited for any task that requires accelerated computing. These devices were the top picks for many companies looking to train artificial intelligence workloads, but they are incredibly expensive. While AMD has offered a cheaper alternative, its supporting hardware and software are not on par with Nvidia's. However, there's another option many companies are turning to. Broadcom's custom AI accelerators are a different style of computing unit than the GPU. Instead of being suited to run a wide variety of workloads, these custom AI accelerator chips are tailored to run a specific style of workload. This can increase performance and drive down cost, at the price of flexibility. As we move toward an inference-heavy computing power deployment, the workloads are fairly well known, so it doesn't require as many Nvidia GPUs to train the AI model. In their place, companies can deploy custom AI chips from Broadcom. Broadcom and Alphabet collaborated to design the promising tensor processing units (TPUs). Alphabet has long used these internally. These days, external clients can even rent access to them via its cloud computing platform, Google Cloud. However, that access may be changing. Reportedly, Alphabet is in talks with Meta Platforms to sell billions of dollars' worth of TPUs to them. While this will benefit Alphabet, it also boosts Broadcom. It gets a cut of every TPU Alphabet buys (or sells) because it is a co-designer of the chip. 
It's unknown if this deal will go through or how much Meta will buy, but if it does come to fruition, don't be surprised if deals like this start to get announced more frequently. This could also drive other AI hyperscalers to work directly with Broadcom to spec in their custom AI chip, which will further boost its sales. The downside risk of Broadcom's AI business is almost nothing, and it has a ton of upside ahead. This makes the stock a potential massive winner in 2026, and investors should consider adding it to their portfolio. However, there are a few caution signals with the stock. Although Wall Street analysts have recently reworked Broadcom's forward earnings projections, it's still an expensive stock when compared to Nvidia or Alphabet. Furthermore, Broadcom isn't solely an AI business. In Q3 FY 2025 (ending Aug. 3), only $5.2 billion of its $15.9 billion total came from AI-related revenue streams. This division is projected to rapidly grow, with management expecting $6.2 billion in revenue in Q4. However, Broadcom isn't as much of a pure play as some believe it is, and weakness from its core business could overshadow its AI aspirations until the AI segment is large enough to become the majority of Broadcom's business. Still, I think Broadcom will have an excellent 2026 as more clients adopt a custom AI chip rather than a general-purpose one. This does not mean the end for Nvidia -- just Nvidia's level of dominance. I think both Nvidia and Broadcom are still great investments to make, but if the AI computing market keeps trending in this direction, I think Broadcom could end up being the next Nvidia.
[26]
Nvidia-Google AI chip war heats up as world's most valuable company 'delighted by Google's success' despite its own stock fall after report of Meta-Google chip deal
Nvidia-Google AI chip: Nvidia is facing increased competition in the AI hardware market as Google aggressively pitches its TPUs to major companies like Meta. While Nvidia acknowledges Google's AI advancements, it asserts its GPUs remain a generation ahead in performance and versatility, powering the vast majority of the AI accelerator market. The battle for dominance in AI hardware is intensifying, and for once, industry titan Nvidia appears to be on the defensive. The $4 trillion chipmaker, long seen as the undisputed leader powering the AI boom, took the unusual step of publicly responding to a report suggesting that Meta may shift part of its future AI infrastructure to Google's in-house chips. On Tuesday, Nvidia posted a statement on X after shares dropped more than 2.5 per cent following a report from The Information. The report claimed Google has begun aggressively pitching its AI Tensor Processing Units (TPUs) to major companies, including Meta and several major financial institutions, as alternatives to Nvidia's GPUs. While Google already offers TPUs through its cloud, expanding them into customers' own data centers would significantly escalate the competition between the two tech giants. Meanwhile, Alphabet stock climbed for a third consecutive day, driven by the strong reception of Google's new Gemini 3 AI model, which earned praise from high-profile figures such as Salesforce CEO Marc Benioff. In a defensive tone, Nvidia acknowledged the success of one of its biggest customers, writing on X: "We're delighted by Google's success -- they've made great advances in AI, and we continue to supply to Google. Nvidia is a generation ahead of the industry -- it's the only platform that runs every AI model and does it everywhere computing is done." Google's AI chips have long been seen as powerful but limited, highly efficient for Google's own systems yet no real threat to Nvidia's dominant GPUs, which power more than 90% of the global AI accelerator market. 
That gap was largely due to design differences. In simple terms, Google's TPUs are ASICs built for specific AI tasks, while Nvidia's GPUs are versatile, general-purpose workhorses. Nvidia underscored that distinction in its X post, stressing that its GPUs deliver "greater performance, versatility, and fungibility" than ASICs and highlighting its new Blackwell architecture as still a generation ahead.
[27]
Morgan Stanley Sees 41% Upside For Nvidia, Boosts Broadcom Target on TPU Dominance - Alphabet (NASDAQ:GOOG), Broadcom (NASDAQ:AVGO)
Morgan Stanley has raised its price targets for Nvidia Corporation (NASDAQ:NVDA) and Broadcom Inc (NASDAQ:AVGO), citing the ongoing momentum in artificial intelligence (AI). Analyst Boosts Targets On AI Strength Morgan Stanley's analyst Joseph Moore reiterated the bank's overweight rating for Nvidia, setting a new price target of $250, up from $235, representing a 41% increase from the chipmaker's Friday closing price, reported CNBC. Moore wrote that NVIDIA is continuing to hold a dominant share of the market and that concerns about competitive threats are "becoming overstated," though he added that it is still unclear what might shift investor sentiment. Moore also maintained an overweight rating for Broadcom and increased the price target to $443 from $409. This new target represents a 10% rise from Broadcom's Friday closing price. Moore highlighted Broadcom's significant AI exposure as a positive factor and commended the company's growth potential. He specifically mentioned Broadcom's tensor processing unit (TPU) as a driving force. However, Moore cautioned that this increase in TPU demand might replace other chip expectations for Broadcom, such as the Meta Platforms Inc. (NASDAQ:META) AI builds. Dan Ives, Cramer Upbeat On Nvidia, Broadcom The AI revolution has been a significant driver for the tech sector, with experts predicting its continued growth. Wedbush Securities Managing Director Dan Ives recently stated that "it's Nvidia's world, everyone else is paying rent," underlining the company's dominant position in the AI space. Meanwhile, CNBC's Jim Cramer suggested that Broadcom is poised to be the primary beneficiary of a potential deal between Alphabet Inc. (NASDAQ:GOOG) (NASDAQ:GOOGL) and Meta. 
The recent price target hikes by Morgan Stanley further underscore the positive outlook for these chipmakers, driven by the AI sector's continued growth. Year to date, Nvidia stock and Broadcom stock have surged 27.96% and 73.70%, respectively. Benzinga's Edge Rankings place Nvidia in the 93rd percentile for quality and the 98th percentile for growth, reflecting its strong performance in both areas. Disclaimer: This content was partially produced with the help of AI tools and was reviewed and published by Benzinga editors.
[28]
Could the Nvidia Killer Be Hiding in Plain Sight? 3 Stocks to Watch | The Motley Fool
Nvidia was an early leader in AI and has reaped the rewards. But it may soon need to defend its crown. The artificial intelligence (AI) market is booming, and could grow from $235 billion last year to $631 billion by 2028. The top AI stocks have already enjoyed tremendous returns since early 2023, and the future looks bright. Remember, the internet hasn't stopped growing since its initial boom decades ago. Nvidia has arguably been the most apparent AI winner thus far. Its GPU chips have become the de facto choice for AI hyperscalers training and operating AI models, a Golden Goose that has generated $187 billion in revenue over the past four quarters alone. But most of that has come from a relatively small handful of these hyperscalers. As spending for all these data center projects adds up, the pressure is building for hyperscalers to cut costs. These three companies, which are, ironically, Nvidia's own customers, could kill Nvidia's Golden Goose. They are threats sitting in plain sight. After the recent unveiling of its Gemini 3 AI model, Alphabet (GOOGL +0.06%)(GOOG 0.05%) might pose the most serious threat to Nvidia's AI dominance. By most accounts, Gemini 3 is a highly impressive step forward for AI models. But perhaps the most remarkable thing about Gemini 3 is that Alphabet apparently trained it on its own proprietary AI chips rather than Nvidia's GPUs. Nvidia's GPUs use the company's CUDA programming to efficiently harness the immense computing power of GPUs for AI tasks. However, Alphabet's Tensor Processing Units (TPUs) are application-specific integrated circuits (ASICs), meaning they are designed and built for one job and one job only. In this case, that's Alphabet's AI. Alphabet's success in training Gemini 3 with its TPUs shows that Nvidia's GPUs aren't irreplaceable. That doesn't mean other companies will easily do what Alphabet did, but it does send the message that it's possible. 
Over time, other hyperscalers could design their own AI ASICs, potentially eating into Nvidia's business. Other hyperscalers have also developed custom AI chips. Amazon (AMZN +1.77%) operates AWS, the world's leading cloud computing services platform. AI has created tailwinds for AWS because it primarily runs in the cloud, as does most modern software. Amazon continues to build data centers to increase its cloud capacity, and those data centers require many chips. Amazon has developed its own AI chip, Trainium, and has begun to promote it more aggressively. The company has a partnership with Anthropic, the developer of the AI application Claude. The two companies recently announced the activation of Project Rainier, a massive AI chip cluster. It's using nearly half a million Trainium2 chips, and that will scale to over 1 million by the end of this year. It doesn't necessarily mean Nvidia is out of the picture, but it loosens Nvidia's stranglehold on the market. At the very least, Nvidia, which has enjoyed 70% gross margins over the past year, could begin feeling some pressure. Between Amazon's own chip needs and those of a prominent AI developer such as Anthropic, further leaning into Trainium is another potential headwind for Nvidia's growth. If you've noticed a theme by now, you'd be correct. Some of Nvidia's biggest customers have begun looking for ways to reduce their dependence on Nvidia's GPUs. Microsoft (MSFT +1.34%) is yet another name on this list for similar reasons. Not only is Microsoft's Azure the world's second-leading cloud services platform, but it also has a close partnership with ChatGPT developer OpenAI. OpenAI is the leading AI developer, as ChatGPT is the most popular AI app. However, the company is operating at heavy losses and has recently faced intensifying scrutiny over how it will fund $1.4 trillion in AI infrastructure it has signed agreements for. It seems that the days of writing blank checks for GPUs are nearly over. 
Microsoft and OpenAI recently restructured their partnership and will work more closely on custom AI chips. OpenAI has been designing custom chips with Broadcom, and Microsoft CEO Satya Nadella recently suggested that Microsoft will contribute to those efforts so that both companies can benefit. Once again, it's a shot across Nvidia's bow, a signal that its lucrative dominance in AI data center chips may soon face serious challenges.
[29]
Meta-Google AI chip talks signal new challenge to Nvidia's dominance
Meta and Google are discussing a big deal for Google's AI chips. The plan could help Meta use new AI technology and give Google a stronger place in the AI chip market. The talks also show growing competition with Nvidia. Businesses are looking for cheaper and faster AI chips, and Google wants more people to use its TPUs. Meta is talking to Google to spend billions of dollars on Google's AI chips starting in 2027. These talks also include Meta possibly renting Google Cloud chips as early as next year, The Information reported. Google wants more customers to use its Tensor Processing Units (TPUs) for AI work in their own data centers. This plan is different from Google's old strategy because earlier it used its TPUs only inside Google's own data centers. If Meta signs the deal, it could boost the market for Google's chips and put it in direct competition with Nvidia, the report said. Some Google Cloud executives believe this strategy could help Google earn up to 10% of Nvidia's yearly revenue, worth billions of dollars. Alphabet shares jumped more than 4% in premarket trading, giving it a chance to reach a $4 trillion valuation, Reuters reported. Broadcom, which helps Google build these AI chips, saw its stock rise 2%. Nvidia's shares fell 3.2% after the news. A deal with Meta would be a huge win for Google because Meta is one of the biggest Nvidia buyers, with up to $72 billion planned in spending this year. Google is already a major winner in the AI boom because many businesses are using its cloud services. Alphabet, Meta, and Nvidia did not respond to requests for comment, Reuters reported. Companies are looking for cheaper and more available options than Nvidia's expensive and limited GPUs, which is why demand for custom chips like TPUs is rising. AI startup Anthropic said last month it will expand its Google deal to use up to one million Google AI chips, worth tens of billions of dollars. 
Google has been gaining momentum with investments from Warren Buffett's Berkshire Hathaway, strong growth in its cloud unit, and good reviews of its Gemini 3 model. Renting Nvidia chips still brings Google Cloud big revenue. But to truly challenge Nvidia, Google must overcome 20 years of Nvidia's proprietary CUDA software, which millions of developers depend on. Q1. Why is Meta talking to Google about AI chips? Meta wants to buy and rent Google's AI chips to power its data centers and reduce its reliance on Nvidia. Q2. How does the Meta-Google chip deal affect Nvidia? The deal could create strong competition for Nvidia because Meta is one of Nvidia's biggest customers.
[30]
Dan Ives Calls Nvidia The 'Indisputable Rocky Balboa' Of AI And Gene Munster Agrees As The Jensen Huang-Led Tech Giant Faces Rare November Slump - Alphabet (NASDAQ:GOOG), Broadcom (NASDAQ:AVGO)
Nvidia Corp. (NASDAQ:NVDA) might be facing fresh concerns, but analysts continue to say that it is the champion of the AI revolution. Nvidia's AI Dominance Faces New Competition On Wednesday, Wedbush analyst Dan Ives took to X and said that with trillions expected to be invested in the coming years, multiple Big Tech companies, including Alphabet Inc.'s (NASDAQ:GOOG) (NASDAQ:GOOGL) Google and Broadcom Inc. (NASDAQ:AVGO) in the TPU chip space, stand to gain alongside Nvidia. However, he cautioned that Nvidia is still "the indisputable Rocky Balboa champion of the AI Revolution." Gene Munster Backs Nvidia, Predicts Bright AI Infrastructure Future Deepwater Asset Management's managing partner Gene Munster reposted Ives' post on social media and said, "I agree with @DivesTech, Nvidia is the champ. My prediction: AI infrastructure will be a bright spot for investing in 2026." Notably, Deepwater owns Nvidia shares. Meta And Google Deal Shake Up AI Chip Market The statement comes after reports that Meta Platforms Inc. (NASDAQ:META) may tap Google AI chips for its data centers. Google's tensor processing units are designed to deliver efficient, cost-effective performance across AI workloads, directly challenging Nvidia's dominance. Broadcom Inc. (NASDAQ:AVGO) is also positioned to benefit from potential deals due to CEO Hock Tan's board role at Meta, according to Jim Cramer. Meanwhile, Nvidia publicly acknowledged Google's advancements in AI while reaffirming its own leadership. In a social media post, the company expressed that it was "delighted" by Google's progress and underscored that it remains "a generation ahead" of its competitors. Nvidia Stock Pullback Raises Questions Nvidia's stock has experienced unusual volatility in November. 
After climbing 33.03% over the past six months, shares fell 2.6% on Tuesday before rebounding 1.4% Wednesday, marking a five-day slip of about 8%. Historically, November has been one of Nvidia's strongest months, with an average return of 10.55% and rare double-digit declines. As of Nov. 25, Nvidia shares have declined 14% for the month, marking their worst monthly performance since September 2022. Still, the company has returned over 1,300% in three years, underscoring its role as a generative AI pioneer. Benzinga's Edge Stock Rankings indicate that NVDA maintains a strong long-term price trend, though its short and medium-term performance remains negative.
[31]
Meta in Talks to Buy Google AI Chips, a Multi-Billion-Dollar Deal to Challenge NVIDIA
Meta and Google Join Hands to Rival NVIDIA's Dominance in the AI Chip Market

Meta Platforms is in advanced discussions with Google to buy Alphabet's AI chips to use in its data centers starting in 2027. According to The Information, the multi-billion-dollar deal may potentially reshape the competitive semiconductor market dominated by NVIDIA. As part of the deal, Meta is considering renting Google Cloud's tensor processing units as early as next year. This would be a significant shift for Google, as the company had largely limited the usage of its TPUs to its own data centers until recently.
[32]
Nvidia shares sink 4% after report of Meta in talks to spend billions...
Shares in chipmaker Nvidia sank about 4% Tuesday following a report that Meta is in talks to spend billions of dollars on chips from rival Google. As part of the deal, Meta - the social media giant behind Facebook, Instagram and WhatsApp - would spend billions to use Google's tensor processing units, or TPUs, in its own data centers in 2027, a person involved in the discussions told the Information. The company is also in talks to rent Google chips from Google Cloud next year, the report stated. Google could take as much as 10% of Nvidia's annual revenue as a result of the deal, since Meta currently uses Nvidia's graphics processing units, or GPUs, a person who heard the remarks told the Information. That would equal billions of dollars in added revenue for Google, which is owned by Alphabet. "Google Cloud is experiencing accelerating demand for both our custom TPUs and Nvidia GPUs; we are committed to supporting both, as we have for years," a Google spokesperson told The Post in a statement. An Nvidia spokesperson said the company is "delighted by Google's success - they've made great advances in AI, and we continue to supply to Google." "NVIDIA is a generation ahead of the industry - it's the only platform that runs every AI model and does it everywhere computing is done," the spokesperson continued. Meta did not immediately respond to The Post's requests for comment. GPUs are specialized chips that are the workhorses of AI, drawing billions of dollars in investment. Google's TPUs are even more specialized chips and are "at the heart of some of Google's most popular AI services, including Search, YouTube and DeepMind's large language models," according to the company. Google has long rented out its AI chips to customers for use in Google Cloud data centers. But to better compete with Nvidia, it has been ramping up efforts to get companies like Meta to use Google chips in Meta's own data centers, according to people involved in the talks. 
The company has been marketing its TPUs as a cheaper alternative to Nvidia's GPUs. Major Nvidia customers like Oracle have found it difficult to generate strong profit margins while shelling out so much capital to rent Nvidia chips. Google said customers have been looking to work with them because they want higher security and compliance standards for sensitive data, a person with direct knowledge of the matter told the Information. But catching up to Nvidia is no small feat. The chipmaker is dominant in the AI sector, with a staggering $4.2 trillion market cap - making it the most valuable company in the world. Nvidia CEO Jensen Huang has been keeping a close eye on Google's TPU progress and its talks with customers, so it is possible Nvidia could swoop in with a Meta deal before Google. Google has been making strides in its AI efforts, though. It released its biggest large language model, Gemini 3, earlier this month, quickly drawing positive reviews. Meta has also been talking to Google about using its TPUs to train new AI models, as opposed to just running existing models in a process known as inference, a person involved in the talks told the Information. Analysts have argued that for anyone to really rival Nvidia, they will have to master chips for inference -- "the ability of trained AI models to recognize patterns and draw conclusions from information that they haven't seen before," according to IBM. Meta has also been developing its own chips for inference to save on costs and reduce its reliance on Nvidia, a source told the Information. Google has developed software known as TPU command center that takes aim at Nvidia's CUDA software, which is seen as an industry standard. Major Nvidia customers have reportedly been reaping the benefits from Google's advances. After Google announced a deal to provide up to 1 million TPUs to Anthropic, Huang quickly revealed plans to invest billions of dollars into Anthropic.
When OpenAI announced plans to rent TPUs from Google, Huang signed a deal to invest up to $100 billion in OpenAI.
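The training-versus-inference distinction these reports lean on can be made concrete with a toy model. The sketch below is purely illustrative (hypothetical code, not Meta's or Google's, and unrelated to TPU internals): training repeatedly updates a weight via gradient descent, while inference is just a forward pass with the weight frozen.

```python
# Toy illustration of training vs. inference (hypothetical code).
# Training updates a weight via gradient descent; inference evaluates
# the already-trained model on new inputs without changing it.

def forward(w, x):
    """Inference: evaluate a fixed-weight, 1-parameter linear model."""
    return w * x

def train_step(w, x, y, lr=0.1):
    """Training: one gradient-descent step on squared error for one example."""
    pred = forward(w, x)
    grad = 2 * (pred - y) * x   # d/dw of (w*x - y)^2
    return w - lr * grad

w = 0.0
for _ in range(50):              # training loop: the weight changes
    w = train_step(w, x=2.0, y=6.0)

# inference phase: the weight is frozen; we only evaluate on new inputs
print(round(forward(w, 4.0), 2))  # w converges toward 3, so this is ~12.0
```

Production training and inference run at vastly different scales, but the asymmetry is the same: training is compute-heavy and iterative, while inference is a one-way evaluation, which is why the two workloads can be served by different chips.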
[33]
Nvidia Stock Slides on Alphabet Competition Fears: Is This a Buy-the-Dip Moment? | The Motley Fool
Shares of Nvidia (NVDA +1.42%), the leading supplier of artificial intelligence (AI) accelerators for data centers, were knocked down hard this week when reports circulated that social media company Meta Platforms (META 0.41%) may be considering a chip deal with Google parent Alphabet (NASDAQ: GOOGL) (GOOG 1.04%) to help power some of its data centers beginning in 2027. This news may have caught Nvidia investors off guard, as Nvidia has been the center of the AI data center buildout boom for years. Additionally, the news was particularly surprising, as Nvidia had given no indication during its most recent earnings report of any meaningful U.S.-based competition in the AI chip space. Of course, 2027 is far off -- and a lot can change between now and then. But the fact that Meta is even reportedly considering relying on an Nvidia alternative for some of its AI data center buildout is concerning to shareholders, since Nvidia now generates almost all of its revenue from data center chip sales. Here's a closer look at why the report may be a red flag for Nvidia investors, and not a buy-the-dip moment. Before the Meta-Alphabet news hit, Nvidia's fundamentals looked anything but fragile. In the third quarter of fiscal 2026, which ended on Oct. 26, revenue reached $57 billion, up 62% year over year and 22% from the prior quarter. Data center revenue was $51.2 billion, up 66% year over year and 25% sequentially, as demand for Blackwell GPUs continued to ramp up. Additionally, management's tone remained confident. Nvidia CEO Jensen Huang emphasized in the company's fiscal third-quarter update that sales of its Blackwell GPUs were "off the charts," and it noted that its cloud GPUs were completely sold out. Nvidia's guidance was also a signal of the strong demand for the company's products.
Management said it expected fourth-quarter fiscal 2026 revenue of about $65 billion, which would represent another meaningful increase versus the already record fiscal third quarter. But could Nvidia's dominance be challenged over the next few years? Meta is reportedly in talks to spend billions of dollars on Alphabet's tensor processing units (TPUs) beginning in 2027 and could start renting the chips through Alphabet's Google Cloud even earlier. The market reaction to this report has been swift. Nvidia shares fell sharply, while Alphabet stock moved higher. Even Broadcom, which helps design and manufacture Alphabet's TPUs, saw its stock soar on the news. The worry some Nvidia investors may have is about pricing and the durability of today's extraordinary data center margins over a longer horizon. If competition heats up, pricing on Nvidia's chips could come under pressure. Today, Nvidia still dominates the market for AI chips -- and many customers will likely continue to prefer its products. Still, the Meta-Alphabet talks highlight the risks of large buyers pivoting away from Nvidia's products the moment viable alternatives meet their performance and cost targets. If Meta ultimately commits billions of dollars to TPUs for new data centers, even while continuing to use Nvidia GPUs elsewhere, that could meaningfully chip away at Nvidia's share of the next wave of AI infrastructure spending -- and it could erode Nvidia's pricing power. Even after the recent pullback, Nvidia's stock is still priced for continued dominance. The stock currently commands a price-to-earnings ratio of 42. That might be defensible if Nvidia can maintain rapid revenue growth and extraordinary margins while AI spending expands. But if alternatives to its offerings begin to make inroads, the company's growth could slow at the same time that margins come under pressure. The Meta-Alphabet headlines, of course, do not prove that Nvidia's moat is narrowing. It's too early to know. 
The headlines do, however, reinforce a key risk: Nvidia relies heavily on a small group of cloud and AI leaders that are ramping up investments in internal and alternative silicon to control costs and reduce dependence on a single supplier. If those efforts succeed, Nvidia may face more pricing pressure and slower growth later in the decade than current valuations suggest. For investors, that makes this sell-off tricky. The underlying business remains exceptionally strong, but the stock still embeds high expectations at a time when Nvidia's largest customers are exploring credible competitors. Given that combination, treating this pullback as an automatic buy-the-dip moment arguably looks premature rather than prudent.
[34]
Nvidia Faces Fresh Competitive Risk as Google TPUs Gain Traction | Investing.com UK
NVIDIA Corporation (NASDAQ:NVDA) shares fell sharply in premarket trading on Tuesday, November 25, 2025, following reports that Meta Platforms (NASDAQ:META) is in discussions to purchase billions of dollars worth of Google's tensor processing units (TPUs). The news suggests Google is making significant progress in challenging NVIDIA's dominant position in the AI chip market. Trading at $175.90 in premarket, down $6.65 or 3.64% from the previous close of $182.55, NVIDIA's stock decline reflects investor concerns about potential competition from Google's specialized AI accelerators. According to The Information, Meta Platforms is in talks to deploy Google's tensor processing units in its data centers starting in 2027, with the possibility of renting TPUs from Google's cloud division as early as next year. This would represent a significant validation of Google's TPU technology, which has been designed specifically for AI workloads since its first generation and has been broadly available to cloud customers since 2018. An agreement with Meta, one of the world's largest spenders on AI infrastructure with projected capital expenditure of $70-72 billion this year, would establish TPUs as a credible alternative to NVIDIA's market-leading GPUs. The potential deal highlights the broader industry trend of companies seeking to diversify their chip supply and reduce dependence on NVIDIA, whose graphics processing units have become the gold standard for AI development. Google's TPUs offer customization advantages as application-specific integrated circuits designed for discrete purposes, contrasting with NVIDIA's GPUs that were originally created for graphics rendering but proved well-suited for AI tasks. Bloomberg Intelligence analysts estimate Meta could spend $40-50 billion on inferencing chip capacity in 2026 alone, suggesting substantial opportunity for alternative chip providers.
While NVIDIA maintains its dominant market position with a $4.437 trillion market cap and impressive performance metrics including a PE ratio of 45.30 and year-to-date returns of approximately 36%, the Google-Meta news triggered immediate market reactions. Alphabet shares surged over 6% on Monday and climbed an additional 2.4% in premarket trading Tuesday, while Broadcom, which helps Google design its TPUs, jumped 11% Monday and rose another 2% in premarket. The stock movements reflect investor recognition that Google's progress could reshape competitive dynamics in the AI semiconductor market. Despite the competitive threat, NVIDIA's fundamentals remain strong with Q3 fiscal 2026 revenue of $57.01 billion and earnings of $31.77 billion, demonstrating the company's continued ability to capitalize on AI infrastructure buildout. Analyst price targets range from $140 to $352, with an average of $248.42, suggesting confidence in NVIDIA's long-term prospects. However, the emergence of viable alternatives like Google's TPUs introduces new uncertainty around NVIDIA's ability to maintain its current market dominance, particularly as major customers seek supply diversification and cost optimization in their multi-billion dollar AI investments.
[35]
Meta in talks to spend billions on Google's chips: Report
Meta Platforms is in talks with Google to spend billions of dollars on the Alphabet-owned company's chips for use in its data centers starting from 2027, The Information reported, a move that would cast Google as a serious rival to semiconductor giant Nvidia. The talks also involve Meta renting chips from Google Cloud as early as next year and are part of Google's broader push to get customers to adopt its tensor processing units (TPUs) - used for AI workloads - in their own data centers, the report said, citing people involved in the talks. The move would mark a departure from Google's current strategy of using TPUs only in its own data centers and could sharply expand the market for its chips, putting the company in direct competition for the hundreds of billions being spent on data-center processors to power AI services. Some Google Cloud executives have suggested the strategy could help it capture as much as 10% of Nvidia's annual revenue, a slice worth billions of dollars, according to the report.

Alphabet shares rise, Nvidia declines

Alphabet shares rose more than 4% in premarket trading on Tuesday, putting it on course to hit a historic $4 trillion valuation if the gains hold. Broadcom, which helps Google make its AI chips, gained 2%, while Nvidia fell 3.2%.
Clinching a chip deal with Meta, one of the biggest Nvidia customers with up to $72 billion planned in spending this year, would mark a major coup for Google -- already one of the biggest winners of the generative AI boom thanks to a surge in demand for its cloud services from businesses adopting the technology. Alphabet, Meta and Nvidia did not immediately respond to requests for comment. Reuters could not verify the report. Demand has surged for custom chips such as TPUs in recent years as businesses look for alternatives to Nvidia's pricey and supply-constrained graphics processors. Anthropic said last month it was expanding its Google deal to use up to one million of the tech giant's AI chips, worth tens of billions of dollars. Google has built momentum in recent months by drawing Warren Buffett's Berkshire Hathaway as an investor, turning its once-marginal cloud unit into a growth engine and earning strong early reviews for its latest Gemini 3 model. Renting Nvidia chips to customers is a big revenue source for its cloud unit. Taking on Nvidia's dominance would require Google to overcome nearly two decades of proprietary Nvidia code that has made the company's ecosystem hard to dislodge. More than 4 million developers worldwide rely on Nvidia's CUDA software platform to build AI and other applications.
[36]
Google TPUs Are 'Cost-Effective Hedge,' Not Replacement For Nvidia, Strategist Says - NVIDIA (NASDAQ:NVDA)
Despite reports of Meta Platforms Inc. (NASDAQ:META) possibly shifting billions in AI hardware spending to Alphabet Inc.'s (NASDAQ:GOOG) (NASDAQ:GOOGL) Google's TPUs, one market strategist argues the move is a defensive tactic against supply shortages, not a sign that Nvidia Corp.'s (NASDAQ:NVDA) dominance is ending.

Expect 'A Bear Story A Day'

James E. Thorne, Chief Market Strategist at Wellington-Altus Private Wealth, dismissed the subsequent drop in Nvidia shares as a typical "bearish hit" in an overheated market. "In today's market a Bear story a day should be expected," Thorne wrote in a post on X on Tuesday. However, Thorne contends the market is misinterpreting the move. He argues that hyperscalers like Meta are turning to Google's TPUs as a "cost-effective hedge" because Nvidia's cutting-edge Blackwell and Rubin GPUs face "long lead times and tight supply." Thorne emphasized that while TPUs offer capacity leverage, they are "not a universal replacement" for Nvidia's ecosystem. He pointed to "HIGH switching costs and software friction" involved in moving away from Nvidia's pervasive CUDA software platform as a major barrier to a "wholesale shift" in the industry.

NVDA Is A 'Generation Ahead,' But 'Delighted' For Google

The reaction follows reports that Meta is negotiating to use Google's Tensor Processing Units (TPUs) in its data centers as soon as next year. The news sent Nvidia shares down 2.59% on Tuesday, as investors feared its grip on the AI infrastructure market was slipping. Nvidia publicly responded to the Google reports on Tuesday by stating it was "delighted by Google's success" but simultaneously asserting its own hardware remains "a generation ahead" and more versatile than specialized chips like TPUs. Benzinga has reached out to Google for comment on Thorne's analysis and the competitive landscape, but has not yet received a response.

What Do Edge Rankings Say About GOOG And NVDA?
NVDA shares have advanced by 32.41% year-to-date and 30.73% over the year. Benzinga's Edge Stock Rankings indicate that it maintains a weaker price trend over the short and medium terms but a strong trend in the long term, with a solid quality ranking. Meanwhile, Alphabet has gained 69.94% YTD and 91.02% over the year. GOOG maintained a stronger price trend over the short, long, and medium terms, with a strong growth ranking.
[37]
Nvidia set to lose $180 billion in market value today as Meta weighs Google chips By Investing.com
Investing.com -- Meta Platforms Inc (NASDAQ:META) is evaluating whether to use Google-designed chips in its data centers, a move that could reshape the competitive landscape in AI hardware, according to The Information. The report said Meta is considering deploying Google's tensor processing units, or TPUs, in its facilities starting in 2027, and may also rent TPUs through Google Cloud as early as next year. In response to the news, NVIDIA (NASDAQ:NVDA) shares fell 4.1% in premarket U.S. trade, putting Nvidia on track to shed roughly $180 billion in market value. On the other hand, Google-owner Alphabet (NASDAQ:GOOGL) stock rose 4%, on track to hit a $4 trillion valuation. Google first made its TPUs broadly available to cloud customers in 2018 and has since rolled out several newer generations tailored for artificial intelligence. The chips are customized to handle the compute-heavy demands of modern AI models, and experts say this specialization gives Google an efficiency edge over rivals. A deal with Meta would mark a significant endorsement of Google's technology and a rare instance of a major AI platform leaning on an external chip supplier outside of Nvidia. Broadcom shares also moved higher on the report, reflecting its exposure to the broader AI infrastructure build-out. Broadcom (NASDAQ:AVGO) rose 11.10% yesterday, and is up a further 2.5% today.
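As a back-of-the-envelope check, the ~$180 billion figure is consistent with the premarket decline applied to a market cap of roughly $4.44 trillion (the cap cited elsewhere in these reports). The snippet below is illustrative arithmetic only, not a live data feed:

```python
# Back-of-the-envelope check (illustrative, not a data feed): a ~4.1%
# premarket decline on an assumed ~$4.44 trillion market cap lines up
# with the ~$180 billion figure in the report.
market_cap = 4.44e12   # assumed approximate Nvidia market cap, in dollars
drop_pct = 0.041       # 4.1% premarket decline, per the report
value_shed = market_cap * drop_pct
print(f"${value_shed / 1e9:.0f}B")  # on the order of $180B
```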
[38]
Google's TPUs Create Another Risk for Nvidia Stock | The Motley Fool
Up until now, Alphabet's (GOOG 0.51%)(GOOGL 0.51%) Google has kept its artificial intelligence hardware efforts to itself. The company has been working on its tensor processing units (TPUs) for nearly a decade, unveiling the first iteration back in 2016. This was well ahead of ChatGPT and the subsequent AI boom that catapulted Nvidia (NASDAQ: NVDA) to the forefront of the AI chip market. Originally, Google's TPUs were designed to accelerate computations used by the company's various services. TPUs were later made available to Google Cloud customers for running AI workloads. Google's TPUs are application-specific integrated circuits (ASICs), which are designed at the hardware level to perform specific tasks efficiently. This contrasts with Nvidia's GPUs, which are more general-purpose processors. Nvidia has had to contend with tech giants like Alphabet, Amazon, and Microsoft designing their own AI chips and installing them in their own data centers, but that hasn't stopped the company from dominating the market for AI accelerators. There's competition from AMD, but Nvidia and its proprietary CUDA software have been impossible to beat. Google is now reportedly eyeing Nvidia's massive market share. According to The Information, Google is talking with potential customers about deploying its TPUs. Meta Platforms is a potential TPU customer, with a multi-billion-dollar deal reportedly being discussed. Google's ambitions appear to be ramping up, with Google Cloud executives reportedly seeing an opportunity to capture 10% of Nvidia's annual revenue. That would translate into many billions of dollars in new revenue and make a significant dent in Nvidia's dominance. Because Google's TPUs are ASICs, they can achieve a level of efficiency that is likely to be appealing to customers who are building massive AI data centers with enormous energy requirements.
Google's latest Ironwood TPUs are twice as power-efficient as their predecessors and 30 times more power-efficient than the first TPUs made available through Google Cloud in 2018. The downside is that Google's TPUs are a different architecture from Nvidia's industry-standard GPUs. Customers who are already heavily invested in Nvidia's GPUs and the CUDA ecosystem will face an uphill battle to adopt TPUs. A company like Meta has the resources to make it happen if the benefits are significant enough, so Google is likely to be limited to the largest tech giants as potential TPU customers. The biggest threat facing Nvidia is the potential for the AI boom to turn into an AI bust. Outside of that, competition is likely to gradually erode the company's dominance. AMD has been making some inroads, and now Google is directly targeting Nvidia's largest customers. It will take quite a while for any of these threats to show up in Nvidia's results. The company's cloud GPUs are currently sold out, with customers making multi-year plans to build massive AI data centers using its chips. For the time being, Nvidia's dominance will remain intact. In the longer term, once the frantic rush to build out capacity has run its course, efficiency will reign supreme. Google's TPUs could be an attractive alternative to NVIDIA's GPUs when energy efficiency is more important than raw performance, especially considering that power production is a limiting factor for tech giants building out AI data centers. Google's apparent shift in AI hardware strategy represents a meaningful long-term risk for Nvidia. With nearly a decade of iteration under its belt, Google and its TPUs could eventually steal away a meaningful chunk of market share from Nvidia.
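The efficiency figures above imply a rough per-generation improvement rate. The step count in the sketch below is an assumption (the article does not enumerate how many TPU generations sit between the 2018 Cloud TPU and Ironwood), so treat the result as an illustration rather than a Google-published number:

```python
# Implied per-generation efficiency gain from the article's figures.
# The number of generational steps is an assumption, not a sourced fact.
baseline_2018 = 1.0              # normalized perf-per-watt of the 2018 Cloud TPU
ironwood = 30.0 * baseline_2018  # "30 times more power-efficient" than 2018
predecessor = ironwood / 2.0     # Ironwood is "twice as power-efficient" as it

steps = 5                        # assumed number of generational steps (hypothetical)
avg_gain = ironwood ** (1 / steps)
print(f"Ironwood's predecessor vs 2018 baseline: {predecessor:.0f}x")
print(f"Implied average gain per assumed generation: {avg_gain:.2f}x")
```

A roughly 2x gain per generation, if it held, would compound quickly, which is why the article treats efficiency as the long-run battleground once raw capacity is built out.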
[39]
Nvidia share price: NVDA stock crash in pre-market as Google's big move may lead to 'Godzilla vs. Kong' moment in AI industry
Nvidia stock today: Nvidia shares fell as Meta Platforms considers using Google's AI chips. Meta may rent these chips from Google Cloud next year. This move signals a significant shift in the AI chip market. Google's Tensor Processing Units are gaining traction. Meta plans substantial AI spending, potentially benefiting Google Cloud. This development impacts major tech players and their suppliers. Nvidia stock today: Nvidia shares slipped sharply on Tuesday after a report suggested that Meta Platforms is in talks to spend billions of dollars on Google's AI chips, signalling a potential showdown between two of the industry's most powerful forces, a "Godzilla vs. Kong" moment for the AI chip world. According to The Information, Meta is discussing plans to use Google's tensor processing units (TPUs) in its data centers starting in 2027. The company may also rent the chips from Google's cloud division as early as next year. The talks underscore Google's accelerating push to challenge Nvidia's dominance in AI accelerators, a market where Nvidia's GPUs have long been the undisputed gold standard. The report sent Nvidia shares down more than 3% in pre-market trading, while Alphabet surged more than 2%, extending its recent rally driven by excitement around the latest version of its Gemini AI model. For Google, securing Meta, one of the world's biggest spenders on AI and data infrastructure, would represent a major victory. The tech giant has been steadily advancing its TPU strategy, recently signing a deal to supply up to 1 million chips to Anthropic, a move analysts have described as a major validation of Google's hardware ambitions.
Seaport analyst Jay Goldberg called the Anthropic deal a "really powerful validation" for TPUs, saying, "A lot of people were already thinking about it, and a lot more people are probably thinking about it now," as quoted by Bloomberg. Bloomberg Intelligence analysts Mandeep Singh and Robert Biggar noted that Meta's enormous capex plans, at least $100 billion in 2026, imply spending $40-50 billion on inferencing-chip capacity as early as next year, as per the report. That could accelerate chip consumption and backlog growth for Google Cloud as enterprises increasingly seek access to TPUs and Gemini-based AI services. Shares of Alphabet-linked suppliers jumped across Asia following the report. South Korea's IsuPetasys surged 18% to a record, while Taiwan's MediaTek rose nearly 5%. Google's TPUs, first developed more than a decade ago for AI workloads, have gained traction as companies look for alternatives amid global concerns about overreliance on Nvidia. While Nvidia's GPUs were originally designed for graphics rendering, their ability to process massive datasets made them the default choice for training modern AI models. TPUs, by contrast, are application-specific chips built expressly for AI and machine learning tasks, and their tight integration with Google's Gemini and DeepMind teams has helped refine the technology further.

Why did Nvidia stock drop today?
Shares fell over 3% after reports that Meta may use Google's AI chips.

How much does Meta plan to spend on AI next year?
Meta is expected to spend at least $100 billion in 2026, with $40-50 billion on inferencing-chip capacity.
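The Bloomberg Intelligence estimate cited above reduces to simple arithmetic: $40-50 billion of inferencing-chip capacity against at least $100 billion of capex. A quick sanity check on the implied share (illustrative only; the inputs are the article's estimates, not reported financials):

```python
# Quick check on the Bloomberg Intelligence estimate cited above: what share
# of Meta's projected 2026 capex would $40-50B of inferencing-chip capacity be?
capex = 100e9                           # "at least $100 billion in 2026"
inference_low, inference_high = 40e9, 50e9
low_share = inference_low / capex
high_share = inference_high / capex
print(f"{low_share:.0%} to {high_share:.0%} of capex")  # 40% to 50%
```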
[40]
Nvidia defends its dominant position as Google advances in AI chips
Nvidia reasserted its technological superiority on Tuesday, after a 3% drop in its share price triggered by rumors of a partnership between Google (Alphabet) and Meta around Tensor Processing Unit (TPU) AI chips. In a post on X, the company welcomed Google's progress while insisting it remains "one generation ahead" of the rest of the industry thanks to its GPUs, which can run all artificial intelligence models across all computing infrastructures. The company remains largely dominant in the AI chip market, with a market share of over 90%, thanks to its GPUs. However, the rise of Google's specialized chips, used internally and available via Google Cloud, is attracting attention. The potential adoption of TPUs by Meta, a major Nvidia customer, is fueling questions about how the competitive landscape may evolve. Nvidia emphasizes the versatility of its GPUs compared with ASIC chips, which are more limited in their uses.
[41]
Google DeepMind Scientist Slams Market After Nvidia, AMD Stocks Tumble: 'Selloff Shows How Clueless The Market Is' - Broadcom (NASDAQ:AVGO), Advanced Micro Devices (NASDAQ:AMD)
On Tuesday, a sharp dip in Nvidia Corp (NASDAQ:NVDA) and Advanced Micro Devices, Inc. (NASDAQ:AMD) sparked unexpected pushback from a Google DeepMind researcher arguing the market still doesn't grasp the reality of exploding demand for AI hardware.

DeepMind Researcher Says Wall Street Misread AI Hardware Demand

Nvidia fell 2.59% to close at $177.82 on Tuesday, while AMD slid 4.15% to $206.13. In after-hours trading, the Jensen Huang-led tech giant dropped another 0.52% while its rival decreased by 1.04%, according to Benzinga Pro. The drop followed reports that Meta Platforms Inc. (NASDAQ:META) may tap Alphabet Inc.'s (NASDAQ:GOOG) (NASDAQ:GOOGL) Google's AI chips for its data centers. The report suggested that Google is ramping up its push to go head-to-head with Nvidia in the AI chip space. Its tensor processing units, or TPUs, are built to deliver optimized performance and cost efficiency across AI workloads, from training to inference. The market's reaction drew a rebuke from Google DeepMind researcher Amit Yazdan on X. The researcher, who works on TPU design, said, "The selloff shows how clueless the market is about hardware and the demand."

Nvidia Plays Diplomat -- And Then Flexes

The selloff came just after Nvidia publicly congratulated Google on its AI progress while also reinforcing its own lead. In a social media post, the company said it was "delighted" to see Google's success and noted it continues to supply the search giant with GPUs. Nvidia then underscored its competitive edge, saying it remains "a generation ahead" of rivals and is the only platform capable of running every major AI model wherever computing happens. Meanwhile, Jim Cramer said Broadcom Inc.
(NASDAQ:AVGO) is likely to benefit the most from a potential Google-Meta AI chip deal, pointing to CEO Hock Tan's position on Meta's board and noting that Meta would most likely contract with Broadcom. Benzinga's Edge Stock Rankings show that GOOGL continues to exhibit a robust price trend across short, mid and long-term timeframes.
[42]
Meta reportedly in talks to buy billions-worth of Google chips By Investing.com
Investing.com -- Meta Platforms is discussing a potential multi-billion dollar deal with Google to purchase chips for its data centers beginning in 2027, according to a Tuesday report by The Information. The talks also include the possibility of Meta renting chips from Google Cloud as early as next year, according to people involved in the discussions. This arrangement is part of Google's larger initiative to encourage customers to adopt its tensor processing units (TPUs), which are specialized for AI workloads, in their own data centers. If finalized, the deal would represent a strategic shift for Google, which has historically used its TPUs exclusively in its own data centers. This expansion could significantly broaden the market for Google's chips and position the company as a direct competitor to Nvidia in the lucrative data-center processor market that powers AI services. Some Google Cloud executives believe this strategy could help the company capture as much as 10% of Nvidia's annual revenue, which would translate to billions of dollars, the report noted.
[43]
Meta Is Reportedly Exploring a Massive AI Chip Deal. Is This Good News for Its Stock? | The Motley Fool
A potential move from Meta to diversify away from Nvidia's chips could wind up powering wins for the social-media giant's long-term investors. Meta Platforms (META +3.82%) stock saw substantial gains in Tuesday's trading following news of a potentially disruptive artificial intelligence supplier deal. The company's share price rose 3.8% in the daily session. Meta stock gained ground today following a report that the company was on track to purchase artificial intelligence (AI) chips from Alphabet. The move could mark a significant shakeup in the AI hardware space. The Information published a report yesterday stating that Meta Platforms is on track to purchase tensor processing units (TPUs) from Alphabet. If so, it could signal some major shifts in the AI processor market. Meta and Alphabet are major competitors in the AI, content, and digital advertising markets, and reports that the two companies could enter into a partnership on AI processors have significant implications for the broader tech-hardware market. According to The Information's report, Meta is on track to make a multi-billion-dollar order for Alphabet's TPUs. Nvidia's graphics processing units (GPUs) have been the most important semiconductor hardware for powering the training and execution of AI applications, but leading cloud computing giants have big incentives to diversify their processing stacks. Meta Platforms is one of Nvidia's largest customers. According to some reports, the social-media giant is the AI hardware leader's second-largest purchaser, trailing only Microsoft. Given its heavy reliance on Nvidia's processors, Meta has big incentives to diversify its processing hardware. Meta has been developing its own processors, but the rumored purchase of Alphabet's TPUs suggests that its rival's tech currently offers better performance.
Meta will likely continue to be a major purchaser of Nvidia's GPUs, but it also has major reasons to diversify its hardware foundations in the AI space. If Meta really is poised to make major purchases of Alphabet's TPUs, that suggests it can derive more cost-effective performance from the hardware for some applications compared to Nvidia's high-end processors. Meta's potential move to diversify its AI-processing tech stack could also soften Nvidia's pricing power. Nvidia has been able to command stellar premiums for top-of-the-line AI processors, and increased competition in the category could weaken the company's stranglehold and pricing premiums. If Meta really is betting big on new AI processing alternatives, it looks like a favorable development for the company.
[44]
Meta in talks to spend billions on Google's chips, The Information reports
(Reuters) - Facebook parent Meta is in discussions with Alphabet's Google to spend billions on using Google's AI chips in its data centers from 2027 and to rent chips from Google Cloud by next year, The Information reported on Monday. Google has pitched tensor processing units, or TPUs, as a cheaper alternative to Nvidia chips, useful for firms seeking higher security standards, the report said, adding that Google has discussed aiming for 10% of Nvidia's revenue with its TPU chip business. The TPUs, available for rent on Google Cloud, serve as an alternative to supply-constrained Nvidia chips. Meta, Google and Nvidia did not immediately respond to requests for comment. Reuters could not immediately verify the report. Meta announced earlier this year it will invest $600 billion in U.S. infrastructure and jobs over the next three years, including AI data centers. The company has been one of Nvidia's biggest customers since 2022, amassing an arsenal of graphics processing units to train its models and also serve the more than 3 billion people who use its apps each day. (Reporting by Rajveer Singh Pardesi in Bengaluru; Editing by Mrigank Dhaniwala)
[45]
Google touts its TPUs as alternative to Nvidia AI chips; Meta expresses interest - The Economic Times
Google is intensifying its effort to challenge Nvidia in the AI chip market, according to a report from The Information on Monday. Historically, Google has used its custom tensor processing units (TPUs) in-house to run its own cloud infrastructure, offering them to customers only through Google Cloud for large AI workloads. The report says Google is now proposing a significant shift, allowing customers to install TPUs within their own data centres rather than relying solely on its facilities. And social media giant Meta is emerging as a major potential buyer. Meta Platforms Inc., the owner of Facebook and Instagram, is reportedly in talks to spend billions of dollars to deploy Google's TPUs in its data centres beginning in 2027. Meta is also said to be considering renting TPU capacity from Google Cloud as early as next year. Currently, Meta's AI systems primarily run on Nvidia GPUs. Notably, Meta is invested in the race to superintelligence, spending billions on acqui-hiring and talent acquisitions through other channels. If finalised, a deal with Meta would mark a major milestone for Google's hardware strategy. The company has been pitching TPUs to organisations with strict data security and regulatory requirements, such as large financial firms and high-frequency trading companies. The tech giant has been marketing tighter control over sensitive information with on-premises TPU deployment to potential clients. Google Cloud executives have indicated internally that broader TPU adoption could enable Google to capture a meaningful share of the AI chip market from Nvidia. According to the report, they believe the effort could help Google target up to 10% of Nvidia's annual revenue, representing several billion dollars. The push comes as demand for AI computing capacity continues to surge and Nvidia remains the dominant supplier in the sector.
By offering TPUs both through the cloud and directly inside customer facilities, Google is signalling a more aggressive approach in the expanding competition for AI infrastructure.
[46]
Nvidia Is A 'Generation Ahead,' But 'Delighted' For Google - NVIDIA (NASDAQ:NVDA)
Nvidia Corp. (NASDAQ:NVDA), the leader in AI hardware, issued a masterclass in competitive positioning on Tuesday by simultaneously praising rival Google while boldly reaffirming its own technological supremacy. In a social media post, Nvidia's Newsroom said that it is "delighted by Google's success" and acknowledges the "great advances in AI" made by Alphabet, Inc. (NASDAQ:GOOG) (NASDAQ:GOOGL), while at the same time noting that it remains a key supplier to Google. "We're delighted by Google's success -- they've made great advances in AI and we continue to supply to Google," Nvidia said. However, the tone quickly shifts from partner to pioneer. Nvidia unequivocally asserts that it is "a generation ahead of the industry." "NVIDIA is a generation ahead of the industry -- it's the only platform that runs every AI model and does it everywhere computing is done. NVIDIA offers greater performance, versatility, and fungibility than ASICs, which are designed for specific AI frameworks or functions," the company said. The comparison, which pits Nvidia's flexible GPUs against Google's specialized tensor processing units (TPUs) -- classified by Nvidia as application-specific integrated circuits (ASICs) -- takes on new relevance in light of recent reports. Social media giant Meta Platforms, Inc. (NASDAQ:META), currently one of Nvidia's largest GPU customers, is reportedly in talks to spend billions on Google's TPUs for its data centers. The potential pivot by a major hyperscaler signals a genuine appetite for alternatives. "We are experiencing accelerating demand for both our custom TPUs and Nvidia GPUs," a Google spokesperson said in a statement, according to CNBC. "We are committed to supporting both, as we have for years."
Nvidia's rebuttal is clear: TPUs may be cheaper for certain, narrow tasks, but Nvidia's platform remains the universal standard and the most flexible tool for the entire spectrum of global AI development and deployment.
[47]
Meta, Google discuss TPU deal as Google targets Nvidia's lead, Information says By Investing.com
Investing.com -- Google is sharply escalating its bid to rival Nvidia in the AI chip race, and Meta is emerging as a potential multibillion-dollar customer, The Information reported Monday evening. For years, Google has limited its custom tensor processing units (TPUs) to its own cloud data centers, renting them out to companies running large-scale AI workloads. But according to The Information, Google is now pitching the chips for deployment inside customers' own data centers, marking a major shift in strategy. One of those customers is Meta Platforms Inc (NASDAQ:META). The parent of Facebook and Instagram is reportedly in discussions to spend billions of dollars to integrate Google's TPUs into its data centers starting in 2027, while also planning to rent TPU capacity from Google Cloud as early as next year. Meta currently relies primarily on Nvidia GPUs for its AI infrastructure. Alphabet Inc (NASDAQ:GOOGL) stock rose 2.1% in after-hours trading following the announcement, while NVIDIA Corporation (NASDAQ:NVDA) stock slumped 1.8%. If the deal proceeds, it would be a significant validation for Google's hardware ambitions. The company has told prospective clients -- ranging from high-frequency trading firms to large financial institutions -- that installing TPUs on-premises can help them meet stringent security and compliance requirements for sensitive data, The Information reports. The stakes are enormous. Executives inside Google Cloud have suggested that expanding TPU adoption could help the company capture up to 10% of Nvidia's annual revenue, a haul worth billions. With demand for AI compute exploding and Nvidia continuing to dominate the supply chain, Google's play to put TPUs directly into customers' facilities signals a more aggressive phase in the AI chip wars.
[48]
Nvidia Sinks After Threat From Alphabet Emerges -- Is the Stock a Buy Now? | The Motley Fool
It has been well-documented that large hyperscalers, such as Alphabet, are developing their own chips, which Google has named Tensor Processing Units (TPUs). However, until now, Google had reportedly only utilized them in its own data centers. This is the first time the company has considered selling these chips to other hyperscalers. Nvidia's graphics processing units (GPUs) are used for more general-purpose training of large language models (LLMs), while TPUs are preferred for more specialized training of individualized tasks. The Information also reported that Google sees a real business with its TPUs and is approaching other cloud customers in a play that could target as much as 10% of Nvidia's yearly revenue. Following this report, investors are asking broader questions about how wide of a moat Nvidia truly has. If Google can do this, then other large hyperscalers like Microsoft and Amazon may be able to as well. I think this is certainly an event worth monitoring because it threatens potential commoditization of the chip industry, or rising competition in the space that could eventually erode Nvidia's incredible margins. However, I don't think there is enough evidence yet to suggest this is a meaningful blow to Nvidia, which is now rolling out new chip models every one to two years. Furthermore, there is likely to be a significant long-term demand for AI, as more parts of the economy begin to leverage the technology. Consequently, Nvidia's GPUs are still likely to have a long runway ahead. Continue to monitor competition, but I still view the stock as a long-term buy.
[49]
The One AI Risk Nvidia Bulls Keep Pretending Isn't Real - Alphabet (NASDAQ:GOOG), Alphabet (NASDAQ:GOOGL), NVIDIA (NASDAQ:NVDA)
Everyone on Wall Street has the same Nvidia Corp (NASDAQ:NVDA) debate -- "how big is AI demand?" -- and almost no one is asking the only question that actually matters: how long can Nvidia keep taxing hyperscalers at 70%+ margins before they revolt? Because the real threat to Nvidia isn't a competing GPU. It's Alphabet's (NASDAQ:GOOG) (NASDAQ:GOOGL) Google TPU -- and what it represents: the moment hyperscalers stop outsourcing the most profitable part of AI. TPUs Aren't Trying To Beat Nvidia -- They're Trying To Own The Margin Google isn't scaling TPUs to win a hardware beauty contest -- it's scaling them to stop wiring billions in compute spend to Nvidia every quarter. TPUs let Google run AI on its terms, on its infrastructure, at its costs. And hyperscalers have finally internalized the Apple Inc (NASDAQ:AAPL) doctrine: platform owners shouldn't pay suppliers premium margins forever. So TPUs don't need to match GPUs. They only need to hit "good enough" for large, in-house workloads -- at a fraction of Nvidia's price. That's how pricing power erodes in slow motion. One training job at a time. If Google Goes First, Everyone Goes This is the risk Nvidia bulls keep waving away. The moment hyperscalers know that custom silicon structurally improves gross margins, none of them will volunteer to remain the company still paying Nvidia full freight. Amazon.com Inc (NASDAQ:AMZN) has Trainium and Inferentia. Meta Platforms Inc (NASDAQ:META) has MTIA. Microsoft Corp (NASDAQ:MSFT) is funding Maia. The trend isn't speculative -- it's already happening. Nobody wants to be the last one paying the GPU toll. Nvidia doesn't need to lose compute share to lose margin leadership. It only needs hyperscalers to build alternatives just credible enough to set the price ceiling. Investor Takeaway The AI demand story is bulletproof. The AI pricing power story is not.
Nvidia isn't at risk of becoming obsolete -- it's at risk of becoming negotiable. And once hyperscalers have real leverage, "70% margins forever" stops being a religion and starts being a memory.
[50]
Nvidia vs Google: Why Jensen Huang is attacking 'inflexible' TPUs
Google's Ironwood optical chips threaten Nvidia's monopoly on AI scaling. The mask has finally slipped, and for the first time in the generative AI era, the undisputed king of hardware looks a little rattled. Usually, when competitors like AMD or Intel announce "Nvidia killers," Nvidia CEO Jensen Huang responds with silence or perhaps a benchmark chart that politely ends the conversation. But this week, following reports that Meta (one of Nvidia's biggest customers) is in talks to lease Google's custom chips, the reaction was different. It was loud, it was public, and it was defensive. Nvidia's official response didn't just tout its own speeds and feeds. It explicitly attacked Google's approach, dismissing their chips as inflexible "ASICs" while branding its own GPUs as the only "fungible" currency of the AI world. Reading between the lines of that statement, one thing is clear: the era of the polite monopoly is over. The street fight has begun. While he didn't say it in as many words, Nvidia's focus on "fungibility" is a tacit admission that the raw performance gap is closing. Nvidia is arguing that its GPUs are like cash: you can spend them on anything. If the AI hype cycle shifts from Large Language Models (LLMs) to something else next year, a warehouse full of Nvidia H100s can be repurposed for video rendering, simulation, or drug discovery. By contrast, it is painting Google's Tensor Processing Units (TPUs) as gift cards - incredibly valuable, but only usable at one specific store. Because Google's chips are ASICs (Application-Specific Integrated Circuits) built strictly for today's AI math, Nvidia implies they are a risky bet in a volatile market. It's a smart, logical argument.
But the fact that the $4 trillion giant feels the need to make it publicly proves they are worried that customers like Meta are starting to do the math, and finding that specialization might be worth the risk. Nvidia has a right to be sweaty. Google's AI hardware journey, which started with some rocky experiments years ago, has matured into a genuine threat. The new 7th-generation "Ironwood" TPU isn't just catching up; in some ways, it has changed the game. While Nvidia wins on raw muscle per chip, Google has mastered the art of the swarm. Their "Ironwood" pods use optical interconnects (lasers) to link over 9,000 chips into a single, massive super-brain. For a company like Meta, which needs to train models on trillions of parameters, that kind of friction-free scaling is dangerously attractive. If OpenAI vs Google is the software battle, Nvidia vs Google is the hardware battle of 2025. Ultimately, what we are seeing is the "frenemy" dynamic of Silicon Valley reaching its breaking point. Nvidia is "delighted" by Google's success in public, but in private, they know that every dollar Meta spends on a Google TPU is a dollar denied to the Nvidia ecosystem. Jensen Huang knows that he can't win on price; Google's vertical integration makes their chips cheaper to operate. So, he is doubling down on fear. He is betting that Big Tech is too afraid of the unknown to abandon the safety of the Nvidia "standard." But if this week's defensive posturing is any indication, even Nvidia knows that "safety" might not be enough to hold off the hungry, specialized wolves at the door forever.
Meta is reportedly in advanced talks to adopt Google's Tensor Processing Units (TPUs) for its AI infrastructure, marking a potential shift away from Nvidia's dominant GPU platform. The deal could involve billions in spending and represents growing competition in the AI chip market.
Meta is reportedly in advanced discussions to spend billions of dollars on Google's custom Tensor Processing Units (TPUs), marking what could be a significant shift in the social media giant's AI infrastructure strategy[1]. The proposed deal would involve Meta initially renting Google Cloud TPUs in 2026, followed by outright purchases beginning in 2027[4].
This potential partnership represents a departure from Google's historical practice of using TPUs exclusively for internal operations. Some Google Cloud executives believe the Meta deal could generate revenue equivalent to as much as 10% of Nvidia's current annual data center business, which generated over $51 billion in Q2 2025 alone[1]. The reports triggered significant market volatility, with Nvidia shares tumbling 5.3% on Tuesday, erasing nearly $250 billion in market value and marking the company's biggest intraday retreat since April[3]. The sell-off rippled through the broader tech ecosystem, affecting Nvidia partners including Super Micro Computer, which fell 3.1%, and Oracle, which dropped 3.4%[3].
Conversely, Alphabet's shares rose 1.3% to a fresh record high, pushing the company close to a $4 trillion market capitalization for the first time[3]. Nvidia has now lost more than $800 billion in market value since peaking just above $5 trillion less than a month ago[3]. Nvidia issued a pointed response on social media, stating: "We're delighted by Google's success -- they've made great advances in AI and we continue to supply to Google. NVIDIA is a generation ahead of the industry -- it's the only platform that runs every AI model and does it everywhere computing is done"[1].
Google's current-generation TPU v5p features 95 gigabytes of HBM3 memory and delivers bfloat16 peak throughput exceeding 450 TFLOPS per chip[1]. TPU v5p pods can contain nearly 9,000 chips and are designed to scale efficiently within Google Cloud's infrastructure using toroidal mesh connections via optical circuit switch technology[2].
Source: ET
By comparison, Nvidia's Hopper-based H100 GPU includes 80 billion transistors, 80 gigabytes of HBM3 memory, and delivers up to 4 PFLOPS of AI performance using FP8 precision[1]. The successor Blackwell-based GB200 increases HBM capacity to 192 gigabytes and peak throughput to around 20 PFLOPS[1].
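The figures above invite a quick back-of-the-envelope check on scale. A minimal Python sketch, assuming a pod size of 8,960 chips to stand in for the "nearly 9,000" cited, and noting that the bf16 and FP8 peaks use different precisions, so this illustrates scale only, not an apples-to-apples benchmark:

```python
# Rough-scale comparison from the figures cited in the text.
# Assumption: 8,960 chips per pod stands in for "nearly 9,000".
tpu_v5p_tflops_bf16 = 450   # per-chip peak throughput, bfloat16
tpu_pod_chips = 8960        # assumed pod size
h100_pflops_fp8 = 4         # per-GPU peak throughput, FP8

# Aggregate pod compute in exaFLOPS (1 EFLOPS = 1_000_000 TFLOPS).
pod_eflops = tpu_v5p_tflops_bf16 * tpu_pod_chips / 1_000_000
print(f"TPU v5p pod peak: ~{pod_eflops:.1f} EFLOPS (bf16)")  # ~4.0 EFLOPS

# Raw-peak equivalence, deliberately ignoring the precision mismatch:
h100_equivalent = pod_eflops * 1_000 / h100_pflops_fp8  # EFLOPS -> PFLOPS
print(f"Roughly {h100_equivalent:.0f} H100s at FP8 peak")  # ~1008
```

Even as a crude upper bound, the arithmetic shows why pod-level scaling, not per-chip muscle, is the core of Google's pitch.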
Despite the technical capabilities, Meta would face significant integration challenges in adopting TPUs. The chips are programmed via Google's XLA compiler stack, which serves as the backend for frameworks like JAX and TensorFlow, requiring developers to adopt specific libraries and compilation patterns[1]. This contrasts sharply with Nvidia's broader ecosystem built around CUDA, cuDNN, TensorRT, and related developer tools that form the default substrate for large-scale AI development[1]. Additionally, TPU deployments use a completely different architecture from traditional GPU clusters, employing optical circuit switches rather than packet switches, which often requires different programming models[2].
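The XLA point can be made concrete with a small, hypothetical sketch (not Meta's or Google's production code). In JAX, wrapping a function in jax.jit hands its traced computation graph to the XLA compiler, which emits code for whatever backend is present: TPU in Google's data centers, CPU or GPU elsewhere.

```python
# Hypothetical illustration of "programmed via XLA": jax.jit traces the
# function once and compiles the graph with XLA for the local backend.
import jax
import jax.numpy as jnp

@jax.jit  # trace once, compile with XLA, reuse the compiled executable
def dense_relu(x, w, b):
    # A dense layer: the matrix multiply is exactly the kind of
    # high-throughput operation TPU systolic arrays are built for.
    return jnp.maximum(x @ w + b, 0.0)

x = jnp.ones((4, 8))
w = jnp.full((8, 2), 0.5)
b = jnp.zeros((2,))
y = dense_relu(x, w, b)
print(y.shape)         # (4, 2)
print(float(y[0, 0]))  # each entry is 8 * 1.0 * 0.5 = 4.0
```

The portability cuts both ways: the same JAX code runs on GPUs too, but teams already built on CUDA-native tooling would still have to rework their libraries and compilation patterns to reach TPUs.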