Samsung and SK Hynix win exclusive HBM4 supply for Nvidia's Vera Rubin AI accelerator


Samsung Electronics and SK Hynix have been confirmed as the sole HBM4 suppliers for Nvidia's Vera Rubin AI accelerator, excluding Micron Technology from the flagship platform. SK Hynix will provide roughly 70 percent of the allocation while Samsung holds approximately 30 percent, marking a significant shift in the AI memory market as demand for high-bandwidth memory intensifies.

Samsung and SK Hynix Secure Exclusive HBM4 Supply Deal

Samsung Electronics and SK Hynix have been confirmed as the exclusive HBM4 suppliers for Nvidia's Vera Rubin AI accelerator, effectively sidelining Micron Technology from the flagship platform [1]. The Korea Economic Daily reported on March 8, citing industry sources, that the two South Korean chipmakers have secured the most lucrative segment of the AI memory market. The supplier split heavily favors SK Hynix, with roughly 70 percent of Nvidia's HBM4 allocation, while Samsung holds approximately 30 percent [1]. An industry source told the Korea Economic Daily that "Micron isn't even being discussed as a Vera Rubin HBM4 supplier," highlighting the competitive divide in the supply chain [1].

Source: Benzinga


Technical Requirements Drive Supplier Selection

The selection of Samsung and SK Hynix as exclusive HBM4 suppliers reflects their ability to meet Nvidia's stringent technical requirements for the next-generation AI platform. Samsung passed Nvidia's qualification tests for both 10 Gbps and 11 Gbps HBM4 variants and began limited shipments in February [1]. SK Hynix is currently optimizing its product for the 11 Gbps test and expects full-scale production this month. Nvidia required memory speeds above the 8 Gbps JEDEC industry standard for Vera Rubin, setting a high bar for prospective suppliers [1]. Industry analysts attributed Micron Technology's exclusion to difficulties meeting speed, base-die design, and thermal performance requirements, despite the company currently supplying HBM3E for Nvidia's existing platforms [1].
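As a rough illustration of what those pin speeds mean in bandwidth terms, the sketch below assumes the 2048-bit-per-stack interface defined in JEDEC's HBM4 standard (an assumption on our part; the article does not state interface width) together with the eight-stack-per-GPU configuration reported for Vera Rubin:

```python
# Back-of-envelope bandwidth implied by the reported pin speeds.
# INTERFACE_BITS is an assumption (JEDEC HBM4 defines a 2048-bit interface
# per stack); actual Vera Rubin figures are not confirmed in this article.

INTERFACE_BITS = 2048  # bits per HBM4 stack interface (JEDEC assumption)
STACKS_PER_GPU = 8     # stacks per GPU, as reported for Vera Rubin

def stack_bandwidth_gbs(pin_speed_gbps: float) -> float:
    """GB/s per stack: pin speed (Gb/s per pin) x interface width / 8 bits per byte."""
    return pin_speed_gbps * INTERFACE_BITS / 8

for speed in (8.0, 10.0, 11.0):  # JEDEC baseline vs. the two qualified variants
    per_stack = stack_bandwidth_gbs(speed)
    per_gpu = per_stack * STACKS_PER_GPU
    print(f"{speed:>4} Gbps -> {per_stack:,.0f} GB/s per stack, "
          f"{per_gpu / 1000:.1f} TB/s across {STACKS_PER_GPU} stacks")
```

Under these assumptions, the jump from the 8 Gbps baseline to the 10-11 Gbps variants Nvidia qualified adds several terabytes per second of aggregate bandwidth per GPU, which helps explain the high qualification bar.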

Vera Rubin Platform Specifications and Timeline

The Vera Rubin AI accelerator will incorporate eight HBM4 stacks per GPU, totaling 288 GB per GPU, a substantial leap in memory capacity for advanced AI processors [1]. The full Vera Rubin Superchip combines two GPUs for 576 GB of total memory, designed to handle increasingly complex AI workloads. The platform is slated for release in the second half of 2026, and Nvidia is expected to formally unveil Vera Rubin at its GTC developer conference on March 16 [1]. While Micron is expected to supply HBM4 for mid-tier Rubin accelerators such as the Rubin CPX, it will not participate in the top-tier Vera Rubin supply chain [1].
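The reported capacity figures are internally consistent, as a quick arithmetic check shows (only numbers stated in the article are used):

```python
# Per-stack capacity implied by the reported Vera Rubin memory configuration.
stacks_per_gpu = 8        # HBM4 stacks per GPU (reported)
gb_per_gpu = 288          # total HBM4 capacity per GPU in GB (reported)
gpus_per_superchip = 2    # GPUs per Vera Rubin Superchip (reported)

gb_per_stack = gb_per_gpu / stacks_per_gpu          # implied capacity per stack
gb_per_superchip = gb_per_gpu * gpus_per_superchip  # should match the reported 576 GB

print(gb_per_stack, gb_per_superchip)  # 36.0 576
```

The implied 36 GB per stack is an inference from the article's totals, not a figure the report states directly.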

Pricing Power Reflects AI Hardware Demand

The allocation marks a significant turnaround for Samsung after earlier yield struggles and reinforces SK Hynix's position as Nvidia's primary memory partner [1]. Samsung has begun mass production of its HBM4 AI memory chips and is reportedly negotiating prices of about $700 per unit, roughly 20 to 30 percent higher than the previous generation [2]. That pricing power reflects strong demand and tight supply as AI workloads continue to reshape the high-bandwidth memory landscape [2]. The decision gives South Korea's two memory chipmakers a fresh advantage in the race to supply premium AI components [3]. TrendForce noted that Nvidia may eventually add all three suppliers to its HBM4 ecosystem, though the confirmed vendor list indicates Samsung and SK Hynix will dominate the initial, most profitable ramp [1].
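The reported premium also implies a rough previous-generation price point. A back-of-envelope check, using only the article's figures (the result is an inference, not a reported price):

```python
# What a ~20-30% premium on a reported $700 HBM4 unit price implies
# for the previous generation's per-unit price. Inference only.
hbm4_price = 700.0  # reported negotiating price per unit, USD

for premium in (0.20, 0.30):
    implied_prior = hbm4_price / (1 + premium)
    print(f"{premium:.0%} premium -> implied prior-gen price ~ ${implied_prior:,.0f}")
```

That puts the implied previous-generation price somewhere in the mid-$500s to low-$600s per unit, consistent with the scale of the premium the report describes.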
