2 Sources
[1]
A deeper look at the tightened chipmaking supply chain, and where it may be headed in 2026 -- "nobody's scaling up," says analyst as industry remains conservative on capacity
We sit down with two high-profile analysts to get some answers about where the chipmaking industry might be headed next. From memory shortages to rising GPU prices, 2025 seemed like a year of significant scarcity in the supply chain for all things semiconductors. But what does the future hold for this super-tight market in the years to come?

One school of thought holds that in a couple of years, today's hyperscaler accelerators will spill out into the secondary market in a crypto-style deluge. Cheap ex-A100s and B200s - AI factory cast-offs, in effect - will suddenly become available for everyone else looking to buy. Data center hardware is often assumed to have a finite and sometimes short lifecycle, with depreciation schedules and refresh cycles that push older hardware into uselessness after a few years.

But another group argues that AI compute doesn't behave like a consumer GPU market, and that the 'three years and it's done' assumption is shakier than many people want to admit. As Stacy Rasgon, managing director and senior analyst at Bernstein, said in an interview with Tom's Hardware Premium, the idea that "they disintegrate after three years, and they're no good, is bullshit." Some believe the present tightness isn't just a temporary crunch but a structural condition of the new post-AI norm, with a closed loop in which state-of-the-art hardware circulates between a handful of cloud and AI giants. So what's the reality?

Ben Bajarin, an analyst at Creative Strategies, describes the current moment as a "gigacycle" rather than another chip boom. In his modelling, global semiconductor revenues climb from roughly $650 billion in 2024 to more than $1 trillion by the end of the decade. "There's some catch-up necessary, but there's also the fact that the semiconductor industry remains relatively conservative, because they are typically cyclical," Bajarin said in an interview with Tom's Hardware Premium. "So everybody's very concerned about overcapacity."

That conservatism matters because chipmaking capacity takes time, effort, and a lot of money to stand up and bring online. It's for that reason that tightness is likely to persist in the market for a while yet: demand is spiking, yes, but companies aren't keen to stand up new supply until they can all but guarantee a return. "They don't want to be stuck with foundry capacity or supply capacity that they can't use seven or eight years from now," Bajarin said.

According to Bajarin's analysis, AI chips represented less than 0.2% of wafer starts in 2024, yet already generated roughly 20% of semiconductor revenue - a huge concentration in a single space, which helps explain why the shortages feel different from the pandemic-era GPU crunch. In 2020 and 2021, consumer demand surged and supply chains seized up, but the underlying products were still relatively mass market in manufacturing terms. Today's AI accelerators require leading-edge logic, exotic memory stacks, and advanced packaging. It's possible to make more of them, but not quickly, and not without knock-on effects. "If you look at the forecasts for wafer capacity or substrate capacity, nobody's scaling up," cautions Bajarin. Rasgon told us that while not everything is tight, the "really tight" parts of the system are concentrated in memory.
Rasgon pointed to Micron, one of the three global DRAM giants, which has said memory tightness could persist beyond 2026, driven in large part by AI demand and High Bandwidth Memory (HBM). It's notable that Micron recently closed down its consumer-facing business, Crucial, to focus on the more lucrative products it can sell - and the markets it can sell into.

HBM is a different manufacturing and packaging challenge, and it can hoover up production capacity. HBM production consumes far more wafer resources than standard DRAM, according to Rasgon - so much so that producing a gigabyte of HBM can take "three or four times as many wafers" as producing a gigabyte of DDR5, which means shifting capacity into HBM effectively reduces the total number of DRAM bits the industry can supply. (A back-of-the-envelope sketch of that trade-off appears at the end of this section.)

Memory makers prioritising HBM for accelerators doesn't just affect hyperscalers. It has a knock-on effect on PCs, servers, and other devices when standard DRAM is tighter and pricier than it would otherwise be, which is why companies have been pushing up prices for consumer hardware in recent weeks and months. Hyperscalers can often swallow higher component costs because they monetise the compute directly, whether through internal workloads or rented-out inference. Everyone else tends to feel the squeeze more immediately: OEMs and system builders face higher bill-of-materials costs, and retail pricing shifts for the worse if you're an end customer.

Bajarin believes HBM will be one of the defining constraints of the remainder of the decade, projecting it to grow fourfold to more than $100 billion by 2030, while noting that HBM3E can require about three times the wafer supply per gigabyte compared with DDR5. He's not alone in thinking that: Micron has even talked about being unable to meet all demand from key customers, suggesting it can supply only around half to two-thirds of expected demand, even while raising capex and considering new projects.

There are a number of reasons for the current tightness, but even if the market had infinite wafers and infinite memory, it could still run into a chokepoint: advanced packaging. The industry has been ramping up its CoWoS (chip-on-wafer-on-substrate) capacity aggressively, but it has also been unusually open about how hard it is to get ahead of demand. In early 2025, Nvidia CEO Jensen Huang said overall advanced packaging capacity had quadrupled in under two years but was still a bottleneck for the firm.

It's not just Nvidia that is reckoning with the challenge. TrendForce, which tracks the space closely, has projected TSMC's CoWoS capacity rising to around 75,000 wafers per month in 2025 and reaching roughly 120,000 to 130,000 wafers per month by the end of 2026. Such growth is a big leap - but it's also unlikely to loosen current capacity constraints. Bajarin highlighted the reason why in his analysis: capex by the top four cloud providers - Amazon, Google, Microsoft, and Meta - doubled to roughly $600 billion annually in just two years.

Rasgon noted that some companies can wind up supply-constrained for reasons that have nothing to do with leading-edge demand being "off the charts." In Intel's case, he argued, it's partly about where demand is versus where capacity has been cut. "They were actually scrapping tools in that older generation and selling them off for pennies on the dollar," he said.
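To make the wafer trade-off described above concrete, here is a minimal back-of-the-envelope sketch in Python. It is illustrative only: the monthly wafer starts and DDR5 gigabytes-per-wafer figures are hypothetical placeholders, and the only number taken from the reporting is Rasgon's estimate that a gigabyte of HBM consumes roughly three to four times the wafer resources of a gigabyte of DDR5.

```python
# Toy model: how shifting DRAM wafer capacity toward HBM shrinks total bit output.
# Assumptions (hypothetical, for illustration only):
#   - a fixed pool of DRAM wafer starts per month
#   - each wafer yields GB_PER_WAFER_DDR5 gigabytes if run as DDR5
#   - a gigabyte of HBM consumes ~3.5x the wafer resources of a gigabyte of DDR5
#     (Rasgon's "three or four times as many wafers" figure)

WAFERS_PER_MONTH = 100_000   # hypothetical total DRAM wafer starts per month
GB_PER_WAFER_DDR5 = 500      # hypothetical DDR5 output per wafer, in gigabytes
HBM_WAFER_PENALTY = 3.5      # wafers needed per GB of HBM vs. per GB of DDR5

def total_gigabytes(hbm_share: float) -> float:
    """Total DRAM gigabytes shipped when `hbm_share` of wafers go to HBM."""
    ddr5_wafers = WAFERS_PER_MONTH * (1 - hbm_share)
    hbm_wafers = WAFERS_PER_MONTH * hbm_share
    ddr5_gb = ddr5_wafers * GB_PER_WAFER_DDR5
    hbm_gb = hbm_wafers * GB_PER_WAFER_DDR5 / HBM_WAFER_PENALTY
    return ddr5_gb + hbm_gb

baseline = total_gigabytes(0.0)
for share in (0.1, 0.2, 0.3):
    print(f"{share:.0%} of wafers on HBM -> {total_gigabytes(share) / baseline:.1%} of baseline bit supply")
```

Even at modest HBM allocations, total bits shipped drop noticeably relative to the all-DDR5 baseline, which is the mechanism behind the DRAM price pressure on PCs and servers described above.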
Although it may seem obvious that demand will continue to grow, given how the big tech companies are splashing the cash, accurately forecasting future demand is difficult because of the way the chip market works. Rasgon said semiconductor companies sit "at the back of the supply chain," which limits their ability to see end demand clearly and encourages behaviour that makes the signal noisier. It's a vicious circle that worsens when supply is particularly tight and lead times stretch: customers start hoarding the chips they have and double-ordering new parts, trying to secure supply from anywhere they can. That can make demand look artificially huge until lead times ease and cancellations begin. Suppliers, in turn, want to avoid being caught out by sizing production for demand that never materializes.

"Forecasting in semiconductors in general is an unsolved problem," Rasgon said. "My general belief is that most, or frankly all, semiconductor management's actual visibility of what is going on with demand is precisely zero."

Bajarin points out that the industry works "very methodically" because it remembers boom-bust cycles, especially in memory. "We're just going to have to live in a foreseeable cycle of supply tightness because of these dynamics that are historically true of the semiconductor industry," he said. When the market will unwind and supply will normalize is, fittingly for a market that struggles with forecasting, impossible to tell. "As long as we're in this cycle where we're really building out a fundamental new infrastructure around AI, it's going to remain supply-constrained for the foreseeable future, if not through this entire cycle, just because of prior boom-bust cycles within the semiconductor industry," said Bajarin.

But even if the industry could magically print accelerators, it still needs somewhere to run them. Data centers take time to build, to connect to power, and to cool at scale. "Even if we make all these GPUs, we can't really house them because we don't have the gigawatts," said Bajarin. With small modular reactors being planned and electricity grids expanding, there is hope that the need can be met - eventually.

Another potential unblocker of the tight market is the reuse of older-generation GPUs as they enter the secondary market, as big tech companies that want the cutting edge for training and inference cycle through newer generations of chips. Already, you can find small volumes of older data center GPUs on the market, including listings for Nvidia A100-class hardware through resellers and brokers. But a scattering of older chips is a world away from a crypto-style glut of former AI accelerators appearing on the market.

While AI firms want all the cutting-edge chips they can get their hands on, they're not necessarily disposing of their older stock, either. A top-end accelerator isn't a consumer graphics card that becomes obsolete in a couple of years. It's capital equipment, and AI companies are learning to sweat it. OpenAI CFO Sarah Friar underlined this in November, admitting that OpenAI still uses Nvidia's Ampere chips - released in 2020 - for inference on its consumer-facing models. Training might use bleeding-edge tech, but inference can profitably run on older generations for a long time. If OpenAI is thinking that way, so too will other companies in the space. "Absolutely, the older stuff is still being used," says Rasgon.
"And in fact, not only is it being used, it's being used very, very profitably." For now, the clearest takeaway is that the current tightness isn't only about making more chips. It's about whether the industry can build enough of everything around them - including the buildings, cooling, and grid connections to run them - fast enough to match demand that's still accelerating. "We're going to remain in a relative supply constraint across all of these vectors until either we've built the entire thing out and we have enough compute," Bajarin said, "or it's a bubble, and it crashes."
[2]
The end of the general-purpose GPU: Why Nvidia's $20B Groq bet is a survival play for the disaggregated era
Nvidia's $20 billion strategic licensing deal with Groq represents one of the first clear moves in a four-front fight over the future AI stack. 2026 is when that fight becomes obvious to enterprise builders. For the technical decision-makers we talk to every day -- the people building the AI applications and the data pipelines that drive them -- this deal is a signal that the era of the one-size-fits-all GPU as the default AI inference answer is ending. We are entering the age of the disaggregated inference architecture, where the silicon itself is being split into two different types to accommodate a world that demands both massive context and instantaneous reasoning. Why inference is breaking the GPU architecture in two To understand why Nvidia CEO Jensen Huang dropped one-third of his reported $60 billion cash pile on a licensing deal, you have to look at the existential threats converging on his company's reported 92% market share. The industry reached a tipping point in late 2025: For the first time, inference -- the phase where trained models actually run -- surpassed training in terms of total data center revenue, according to Deloitte. In this new "Inference Flip," the metrics have changed. While accuracy remains the baseline, the battle is now being fought over latency and the ability to maintain "state" in autonomous agents. There are four fronts of that battle, and each front points to the same conclusion: Inference workloads are fragmenting faster than GPUs can generalize. 1. Breaking the GPU in two: Prefill vs. decode Gavin Baker, an investor in Groq (and therefore biased, but also unusually fluent on the architecture), summarized the core driver of the Groq deal cleanly: "Inference is disaggregating into prefill and decode." Prefill and decode are two distinct phases: * The prefill phase: Think of this as the user's "prompt" stage. The model must ingest massive amounts of data -- whether it's a 100,000-line codebase or an hour of video -- and compute a contextual understanding. This is "compute-bound," requiring massive matrix multiplication that Nvidia's GPUs are historically excellent at. * The generation (decode) phase: This is the actual token-by-token "generation." Once the prompt is ingested, the model generates one word (or token) at a time, feeding each one back into the system to predict the next. This is "memory-bandwidth bound." If the data can't move from the memory to the processor fast enough, the model stutters, no matter how powerful the GPU is. (This is where Nvidia was weak, and where Groq's special language processing unit (LPU) and its related SRAM memory, shines. More on that in a bit.) Nvidia has announced an upcoming Vera Rubin family of chips that it's architecting specifically to handle this split. The Rubin CPX component of this family is the designated "prefill" workhorse, optimized for massive context windows of 1 million tokens or more. To handle this scale affordably, it moves away from the eye-watering expense of high bandwidth memory (HBM) -- Nvidia's current gold-standard memory that sits right next to the GPU die -- and instead utilizes 128GB of a new kind of memory, GDDR7. While HBM provides extreme speed (though not as quick as Groq's static random-access memory (SRAM)), its supply on GPUs is limited and its cost is a barrier to scale; GDDR7 provides a more cost-effective way to ingest massive datasets. 
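To ground the prefill/decode split referenced above, here is a deliberately simplified Python sketch of the two phases. It is not any vendor's pipeline: the hidden size, random weights, and toy attention math are placeholders, and a real transformer would append projected keys and values for each generated token rather than reusing the context vector. The point is only to show why prefill is one large parallel pass while decode re-reads a growing cache for every single token.

```python
import numpy as np

# Simplified illustration of disaggregated transformer inference.
# Prefill: one large, parallel pass over the whole prompt (compute-bound).
# Decode:  one token at a time, each step re-reading the KV cache (bandwidth-bound).
# Shapes and the attention math are schematic placeholders, not a real model.

D_MODEL = 64  # hypothetical hidden size

def prefill(prompt_tokens: np.ndarray):
    """Process the entire prompt in one batched pass and build the KV cache."""
    # One big matrix multiply over all prompt positions at once: compute-bound.
    hidden = prompt_tokens @ np.random.randn(prompt_tokens.shape[-1], D_MODEL)
    kv_cache = {"keys": hidden.copy(), "values": hidden.copy()}
    return hidden[-1], kv_cache

def decode_step(last_hidden: np.ndarray, kv_cache: dict) -> np.ndarray:
    """Generate one token; dominated by reading the (growing) KV cache."""
    # Attention over every cached position: the whole cache is streamed from
    # memory to produce a single token, which is why decode is bandwidth-bound.
    scores = kv_cache["keys"] @ last_hidden
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    context = weights @ kv_cache["values"]
    kv_cache["keys"] = np.vstack([kv_cache["keys"], context])
    kv_cache["values"] = np.vstack([kv_cache["values"], context])
    return context

# Usage: long prompt, short answer -- prefill does the heavy math once,
# then each decoded token re-reads an ever-larger cache.
prompt = np.random.randn(1024, 128)   # 1,024 prompt positions, toy feature width
last, cache = prefill(prompt)
for _ in range(16):                   # generate 16 tokens
    last = decode_step(last, cache)
```

The asymmetry is the whole story: prefill amortizes its arithmetic across thousands of positions at once, while each decode step streams the entire cache out of memory to produce one token, which is why the two phases reward different silicon.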
Meanwhile, the "Groq-flavored" silicon that Nvidia is integrating into its inference roadmap will serve as the high-speed "decode" engine. This is about neutralizing a threat from alternative architectures like Google's TPUs and maintaining the dominance of CUDA, Nvidia's software ecosystem that has served as its primary moat for over a decade. All of this was enough for Baker, the Groq investor, to predict that Nvidia's move to license Groq will cause all other specialized AI chips to be canceled -- that is, outside of Google's TPU, Tesla's AI5, and AWS's Trainium.

2. The differentiated power of SRAM

At the heart of Groq's technology is SRAM. Unlike the DRAM found in your PC or the HBM on an Nvidia H100 GPU, SRAM is etched directly into the logic of the processor. Michael Stewart, managing partner of Microsoft's venture fund M12, describes SRAM as the best option for moving data over short distances with minimal energy. "The energy to move a bit in SRAM is like 0.1 picojoules or less," Stewart said. "To move it between DRAM and the processor is more like 20 to 100 times worse." In the world of 2026, where agents must reason in real time, SRAM acts as the ultimate "scratchpad": a high-speed workspace where the model can manipulate symbolic operations and complex reasoning processes without the "wasted cycles" of external memory shuttling.

However, SRAM has a major drawback: it is physically bulky and expensive to manufacture, meaning its capacity is limited compared with DRAM. This is where Val Bercovici, chief AI officer at Weka, another company offering memory for GPUs, sees the market segmenting. Groq-friendly AI workloads -- where SRAM has the advantage -- are those that use small models of 8 billion parameters and below, Bercovici said. This isn't a small market, though. "It's just a giant market segment that was not served by Nvidia, which was edge inference, low latency, robotics, voice, IoT devices -- things we want running on our phones without the cloud for convenience, performance, or privacy," he said. This 8B "sweet spot" is significant because 2025 saw an explosion in model distillation, in which many enterprise companies shrink massive models into highly efficient smaller versions. While SRAM isn't practical for the trillion-parameter "frontier" models, it is perfect for these smaller, high-velocity models.

3. The Anthropic threat: The rise of the 'portable stack'

Perhaps the most under-appreciated driver of this deal is Anthropic's success in making its stack portable across accelerators. The company has pioneered a portable engineering approach for training and inference -- essentially a software layer that allows its Claude models to run across multiple AI accelerator families, including Nvidia's GPUs and Google's Ironwood TPUs. Until recently, Nvidia's dominance was protected because running high-performance models outside of the Nvidia stack was a technical nightmare. "It's Anthropic," Weka's Bercovici told me. "The fact that Anthropic was able to ... build up a software stack that could work on TPUs as well as on GPUs, I don't think that's being appreciated enough in the marketplace." (Disclosure: Weka has been a sponsor of VentureBeat events.) Anthropic recently committed to accessing up to 1 million TPUs from Google, representing over a gigawatt of compute capacity. This multi-platform approach ensures the company isn't held hostage by Nvidia's pricing or supply constraints. So for Nvidia, the Groq deal is equally a defensive move.
By integrating Groq's ultra-fast inference IP, Nvidia is making sure that the most performance-sensitive workloads -- like those running small models or powering real-time agents -- can be accommodated within Nvidia's CUDA ecosystem, even as competitors try to jump ship to Google's Ironwood TPUs. (CUDA is the software layer Nvidia provides to developers for programming its GPUs.)

4. The agentic 'statehood' war: Manus and the KV Cache

The timing of the Groq deal coincides with Meta's acquisition of the agent pioneer Manus just two days ago. The significance of Manus was partly its obsession with statefulness. If an agent can't remember what it did 10 steps ago, it is useless for real-world tasks like market research or software development. The KV Cache (Key-Value Cache) is the "short-term memory" that an LLM builds during the prefill phase. Manus reported that for production-grade agents, the ratio of input tokens to output tokens can reach 100:1. This means that for every word an agent says, it is "thinking" and "remembering" 100 others. In this environment, the KV Cache hit rate is the single most important metric for a production agent, Manus said. If that cache is "evicted" from memory, the agent loses its train of thought, and the model must burn massive energy to recompute the prompt.

Groq's SRAM can be a "scratchpad" for these agents -- although, again, mostly for smaller models -- because it allows for near-instant retrieval of that state. Combined with Nvidia's Dynamo framework and the KVBM, Nvidia is building an "inference operating system" that can tier this state across SRAM, DRAM, and flash-based offerings like the one from Bercovici's Weka.

Thomas Jorgensen, senior director of Technology Enablement at Supermicro, which specializes in building GPU clusters for large enterprise companies, told me in September that compute is no longer the primary bottleneck for advanced clusters; feeding data to the GPUs is, and breaking that bottleneck requires memory. "The whole cluster is now the computer," Jorgensen said. "Networking becomes an internal part of the beast ... feeding the beast with data is becoming harder because the bandwidth between GPUs is growing faster than anything else." This is why Nvidia is pushing into disaggregated inference. By separating the workloads, enterprise applications can use specialized storage tiers to feed data at memory-class performance, while the specialized "Groq-inside" silicon handles high-speed token generation.

The verdict for 2026

We are entering an era of extreme specialization. For decades, incumbents could win by shipping one dominant general-purpose architecture -- and their blind spot was often what they ignored on the edges. Intel's long neglect of low-power computing is the classic example, M12's Stewart told me. Nvidia is signaling it won't repeat that mistake. "If even the leader, even the lion of the jungle will acquire talent, will acquire technology -- it's a sign that the whole market is just wanting more options," Stewart said.

For technical leaders, the message is to stop architecting your stack as if it were one rack, one accelerator, one answer. In 2026, advantage will go to the teams that label workloads explicitly -- and route them to the right tier:

* prefill-heavy vs. decode-heavy
* long-context vs. short-context
* interactive vs. batch
* small-model vs. large-model
* edge constraints vs. data-center assumptions

Your architecture will follow those labels.
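As a thought experiment, here is what such label-driven routing could look like in code. This is a hypothetical sketch, not any vendor's product taxonomy: the tier names, the 8-billion-parameter cutoff (echoing Bercovici's "sweet spot"), and the 100,000-token threshold are illustrative placeholders.

```python
from dataclasses import dataclass

# Hypothetical workload router for the labels listed above.
# Tier names and thresholds are illustrative placeholders only.

@dataclass
class Workload:
    prompt_tokens: int            # context length of the request
    expected_output_tokens: int   # how much the model is expected to generate
    model_params_b: float         # model size in billions of parameters
    interactive: bool             # is a user waiting on the response?
    edge: bool                    # must it run outside the data center?

def route(w: Workload) -> str:
    """Map a labeled workload to a (hypothetical) serving tier."""
    if w.edge or w.model_params_b <= 8:
        return "sram-heavy low-latency tier (small models, edge, voice, agents)"
    if w.prompt_tokens >= 100_000:
        return "prefill tier (long-context ingest, cost-optimized memory)"
    if w.interactive and w.expected_output_tokens > w.prompt_tokens:
        return "decode tier (bandwidth-optimized token generation)"
    return "batch tier (throughput-optimized, latency-tolerant)"

# Example: a long-context research job versus an on-device assistant.
print(route(Workload(250_000, 2_000, 70, interactive=False, edge=False)))
print(route(Workload(1_000, 300, 7, interactive=True, edge=True)))
```

The specific thresholds matter less than the habit: every request carries explicit labels, and the labels, not the hardware brand, decide where it runs.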
In 2026, "GPU strategy" stops being a purchasing decision and becomes a routing decision. The winners won't ask which chip they bought -- they'll ask where every token ran, and why.
The semiconductor industry confronts structural tightness through 2026 as demand for AI chips outpaces conservative capacity expansion. Nvidia's $20 billion licensing deal with Groq marks a strategic pivot from general-purpose GPUs to specialized inference architectures, while High Bandwidth Memory shortages ripple across the entire chipmaking supply chain, affecting everything from data centers to consumer PCs.
The semiconductor industry faces a structural supply crunch that analysts predict will persist well beyond 2026, driven by explosive demand for AI chips and conservative capacity planning. Ben Bajarin, an analyst at Creative Strategies, describes the current moment as a "semiconductor market gigacycle" rather than a typical boom cycle, with global semiconductor revenues projected to climb from roughly $650 billion in 2024 to more than $1 trillion by the end of the decade [1]. Despite surging demand, the industry remains cautious about expanding production. "If you look at the forecasts for wafer capacity or substrate capacity, nobody's scaling up," Bajarin warns [1].
Source: Tom's Hardware
High Bandwidth Memory has emerged as one of the defining constraints of the chipmaking supply chain. Stacy Rasgon, managing director at Bernstein, identifies memory as the "really tight" component, with Micron indicating that memory tightness could persist beyond 2026, driven largely by AI demand [1]. The production challenge is staggering: manufacturing a gigabyte of HBM requires "three or four times as many wafers" as producing a gigabyte of DDR5, effectively reducing the total DRAM supply available to the market [1]. This shift toward HBM for AI accelerators creates ripple effects across the entire tech ecosystem, driving up prices for consumer hardware and standard server components. Bajarin projects HBM will grow fourfold to more than $100 billion by decade's end [1].
Nvidia's $20 billion strategic licensing deal with Groq represents a fundamental shift in the AI inference landscape, marking the end of the general-purpose GPU era. Jensen Huang deployed one-third of the company's reported $60 billion cash pile on this licensing agreement as inference surpassed training in total data center revenue for the first time in late 2025, according to Deloitte [2]. The deal addresses an existential challenge: inference workloads are fragmenting into two distinct phases -- prefill and decode -- that require fundamentally different architectures. Nvidia's upcoming Vera Rubin chip family includes the Rubin CPX component, optimized for massive context windows using 128GB of GDDR7 memory, while the Groq-flavored silicon will serve as a high-speed decode engine [2].
Source: VentureBeat
Groq's Language Processing Unit leverages SRAM technology that fundamentally differs from the DRAM and HBM found in traditional GPUs. Michael Stewart, managing partner of Microsoft's M12 venture fund, explains that SRAM's energy efficiency is transformative: "The energy to move a bit in SRAM is like 0.1 picojoules or less. To move it between DRAM and the processor is more like 20 to 100 times worse" [2]. This advantage proves critical for the decode phase of inference, where models generate tokens one at a time in a memory-bandwidth-bound process. The disaggregated era demands specialized silicon for different workloads, threatening CUDA's dominance while forcing hyperscalers to rethink their infrastructure strategies.

AI chips represented less than 0.2% of wafer starts in 2024 yet generated roughly 20% of semiconductor revenue, creating unprecedented concentration in a single market segment [1]. This concentration explains why current GPU capacity constraints feel different from pandemic-era shortages. Today's AI accelerators require leading-edge logic, exotic memory stacks, and advanced packaging that cannot be scaled quickly. The industry's conservatism stems from valid concerns about overcapacity and foundry capacity utilization. Companies fear being "stuck with foundry capacity or supply capacity that they can't use seven or eight years from now," according to Bajarin [1]. This supply chain bottleneck creates a closed loop where state-of-the-art hardware circulates primarily between cloud giants and hyperscalers, fundamentally altering market access for smaller players and enterprise builders.