3 Sources
[1]
Micron enters high-volume production of HBM4 for Nvidia Vera Rubin - 2.3x bandwidth improvement and 20% boost in power efficiency
Micron has announced that it has entered high-volume production of its HBM4 36GB 12-Hi memory, designed for Nvidia's Vera Rubin GPU platform. Making the announcement at GTC 2026, the memory giant simultaneously confirmed high-volume production of the industry's first PCIe 6.0 data center SSD and a new SOCAMM2 module, making it the first memory supplier to bring all three products to volume shipment for the Vera Rubin ecosystem at the same time.

The HBM4 36GB 12H stack runs at over 11 Gb/s pin speeds, delivering bandwidth greater than 2.8 TB/s. Compared to Micron's HBM3E at the same 36GB 12H configuration, that represents a 2.3x bandwidth increase alongside a more than 20% improvement in power efficiency, according to Micron's internal power calculator data.

"The next era of AI will be defined by tightly integrated platforms developed through joint engineering innovations across the ecosystem. Our close collaboration with NVIDIA ensures that compute and memory are designed to scale together from day one," said Sumit Sadana, executive vice president and chief business officer at Micron Technology, in a press release. "With HBM4 36GB 12H, alongside the industry's first SOCAMM2 and Gen6 SSD now in high-volume production, Micron's memory and storage form a core foundation that unlocks the full potential of next-generation AI."

Micron has also shipped samples of a 48GB 16H HBM4 stack to customers. The additional four die layers give the 16H configuration a 33% capacity increase per HBM placement over the 36GB 12H product, a milestone that points toward denser configurations in future AI accelerator generations.

Last month, the company announced that the 9650 SSD had entered mass production, marking the first time a PCIe 6.0 SSD had reached that stage. The drive supports up to 28 GB/s sequential read throughput and 5.5 million random read IOPS, doubling PCIe 5.0 read performance at 100% higher performance per watt.
Unsurprisingly, it targets AI inference, training, and agentic workloads in liquid-cooled environments and is optimized for Nvidia's BlueField-4 STX reference architecture. Meanwhile, the 192GB SOCAMM2 module is designed for Nvidia Vera Rubin NVL72 systems and standalone Vera CPU platforms, with Micron's SOCAMM2 portfolio spanning 48GB to 256GB capacities. The Vera Rubin platform supports up to 2TB of memory and 1.2 TB/s of bandwidth per CPU using the module.
[2]
Micron Announces Production Ramp for HBM4, 192GB SOCAMM2 and 9650 Gen 6 SSD
Micron used NVIDIA GTC 2026 to outline a broad production ramp across three major product categories tied to next-generation AI infrastructure. Rather than focusing on a single component launch, the company announced that HBM4 memory, SOCAMM2 modules, and its 9650 PCIe Gen 6 data center SSD are all now moving through volume or mass production, with the overall rollout closely aligned with NVIDIA's Vera Rubin platform and related system designs.

The most important announcement is Micron's HBM4 progress. The company said its 36GB 12-high stack entered volume shipment in the first quarter of 2026 and is designed for NVIDIA Vera Rubin accelerators. Micron rates the stack at more than 11Gb/s per pin and over 2.8TB/s of bandwidth, while also claiming an efficiency gain of more than 20 percent over HBM3E. For AI hardware, that combination matters because bandwidth and power efficiency directly affect how much useful work an accelerator can sustain in dense server environments. Micron also disclosed that it has shipped early customer samples of a higher-capacity 16-high 48GB variant, which increases memory capacity per HBM placement by roughly one-third.

At the system memory level, Micron confirmed that its 192GB SOCAMM2 modules are now in high-volume production. These modules are intended for Vera Rubin NVL72 systems and standalone Vera CPU platforms, with Micron claiming up to 2TB of memory capacity and 1.2TB/s of bandwidth per CPU. The broader SOCAMM2 family is expected to span capacities from 48GB to 256GB, giving server builders flexibility depending on workload class and platform configuration. In practical terms, SOCAMM2 is meant to help keep pace with the growing memory demands of AI systems, where host-side bandwidth and capacity are becoming more important alongside the accelerator-attached HBM pool.

Micron also updated the storage side of its roadmap with the 9650 PCIe Gen 6 SSD, which the company says has entered mass production.
Designed for NVIDIA BlueField-4 STX architecture, the drive is rated for up to 28GB/s sequential reads and as much as 5.5 million random read IOPS. Micron says that translates into roughly double the read performance of its previous Gen 5 model, while also doubling performance per watt. Those gains are relevant for data center deployments where storage throughput has to scale alongside faster compute and larger memory footprints.

What makes the overall announcement noteworthy is the way Micron is presenting these products as parts of the same infrastructure generation. HBM4 targets the accelerator memory tier, SOCAMM2 addresses CPU-attached system memory, and the 9650 extends bandwidth at the storage layer. That is a more complete message than simply claiming higher speed in one segment. AI servers increasingly require balanced subsystem design, and bottlenecks can appear anywhere from on-package memory to host memory to storage.

Micron's strategy here is clear enough. As Vera Rubin-era platforms approach deployment, the company wants to establish itself across the full memory and storage stack rather than compete in a single niche. If the published specifications translate well into shipping systems, these components will form part of the core hardware foundation for AI servers entering service through 2026.
[3]
Micron confirms HBM4 memory and PCIe Gen6 SSDs are in 'high-volume' production
TL;DR: Micron has begun high-volume production of AI-optimized HBM4 memory and PCIe Gen6 SSDs for NVIDIA Vera Rubin platforms, delivering up to 2.8 TB/s bandwidth, 20% better power efficiency, and double Gen5 SSD performance. These innovations enable scalable, high-capacity, and efficient AI computing solutions.

At NVIDIA GTC 2026 in San Jose, California, Micron announced that new AI-optimized memory and storage technologies for the upcoming NVIDIA Vera Rubin platform and other AI systems are now in "high-volume" production. This includes HBM4 36GB 12H memory, which Micron says achieves over 11 Gb/s per pin, opening the door to over 2.8 TB/s of memory bandwidth. According to the company, this is a 2.3x increase in memory bandwidth and a 20% improvement in power efficiency over HBM3E, which is still one of the fastest, highest-capacity high-bandwidth memory solutions for AI available today. And with advanced packaging, Micron is already shipping HBM4 48GB 16H samples to customers.

HBM4 is built for NVIDIA Vera Rubin and the immediate future of cutting-edge AI, but Micron has also announced solutions designed for NVIDIA Vera Rubin NVL72 and standalone NVIDIA Vera CPU systems with Micron SOCAMM2, offering up to 2TB of memory and 1.2 TB/s of bandwidth per CPU, with capacities ranging from 48GB to 256GB. In addition, Micron has announced it is mass-producing PCIe Gen6 data center SSDs optimized for low latency, speed, efficiency, and "liquid-cooled environments," with 28 GB/s sequential read throughput and 5.5 million random read IOPS. This new Micron 9650 data center SSD is reportedly twice as fast as Gen5, delivering 100% higher performance per watt.

"The next era of AI will be defined by tightly integrated platforms developed through joint engineering innovations across the ecosystem. Our close collaboration with NVIDIA ensures that compute and memory are designed to scale together from day one," said Sumit Sadana, executive vice president and chief business officer at Micron Technology. "At the heart of this is Micron's HBM4, the engine of AI, delivering unprecedented bandwidth, capacity, and power efficiency. With HBM4 36GB 12H, alongside the industry's first SOCAMM2 and Gen6 SSD now in high-volume production, Micron's memory and storage form a core foundation that unlocks the full potential of next-generation AI."
Micron has entered high-volume production of HBM4 36GB 12H memory for Nvidia's Vera Rubin GPU platform, delivering over 2.8 TB/s of bandwidth, a 2.3x improvement over HBM3E. Announced at GTC 2026, the company simultaneously began shipping PCIe Gen6 SSDs and SOCAMM2 modules, becoming the first supplier to deliver all three products for the Vera Rubin ecosystem at scale.
Micron has entered high-volume production of its HBM4 36GB 12H memory, marking a significant milestone in AI infrastructure development as the industry prepares for Nvidia's Vera Rubin platform. Announced at GTC 2026 in San Jose, California, the memory giant simultaneously confirmed it has achieved mass production across three critical product categories: HBM4 memory, SOCAMM2 modules, and PCIe Gen6 SSDs, making it the first memory supplier to bring all three products to volume shipment for the Nvidia Vera Rubin platform ecosystem at the same time.[1][3]
The HBM4 36GB 12H configuration runs at over 11 Gb/s pin speeds, delivering memory bandwidth greater than 2.8 TB/s. Compared to Micron's HBM3E at the same 36GB 12H configuration, this represents a 2.3x bandwidth improvement alongside a more than 20% improvement in power efficiency, according to Micron's internal power calculator data.[1] For AI accelerators operating in dense server environments, this combination of bandwidth and power efficiency directly affects how much useful work can be sustained without hitting thermal or power constraints.[2]
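The quoted figures are internally consistent: a per-pin rate of 11 Gb/s multiplied across the stack's interface yields the headline bandwidth. A quick sanity check, assuming the 2048-bit interface width that the JEDEC HBM4 standard defines per stack (the article itself does not state the width):

```python
# Back-of-envelope check of the HBM4 bandwidth figure.
# The 2048-bit interface width is an assumption from the JEDEC
# HBM4 standard, not a number given in the article.
PIN_SPEED_GBPS = 11          # Gb/s per pin, per Micron's spec
INTERFACE_WIDTH_BITS = 2048  # assumed HBM4 interface width per stack

bandwidth_gbs = PIN_SPEED_GBPS * INTERFACE_WIDTH_BITS / 8  # GB/s
print(f"{bandwidth_gbs / 1000:.2f} TB/s")  # → 2.82 TB/s
```

The result lands just above the "greater than 2.8 TB/s" claim, which suggests Micron is simply quoting the pin rate times the full interface width.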
Micron has already shipped samples of a 48GB 16H stack to customers, adding four die layers to the standard configuration. This higher-capacity variant increases memory capacity per HBM placement by approximately 33% over the 36GB 12H product, a milestone that points toward denser configurations in future AI accelerator generations.[1][2] The 48GB 16H stack signals Micron's roadmap for scaling memory capacity as AI models continue to grow in size and complexity.

At the system memory level, Micron confirmed that its 192GB SOCAMM2 modules are now in high-volume production. These modules target Vera Rubin NVL72 systems and standalone Vera CPU platforms, supporting up to 2TB of memory capacity and 1.2 TB/s of bandwidth per CPU.[1][3] The broader SOCAMM2 family spans capacities from 48GB to 256GB, giving server builders flexibility depending on workload class and platform configuration. In AI systems where host-side bandwidth and capacity are becoming increasingly important alongside accelerator-attached HBM pools, SOCAMM2 modules help address growing memory demands.[2]
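The ~33% capacity gain quoted above for the 16H stack falls straight out of the layer count: both variants imply the same per-die density, and only the number of stacked dies changes. A minimal sketch of that arithmetic:

```python
# Stack arithmetic behind the 12H vs 16H capacity figures.
# The per-die density is inferred from the 36GB 12H spec; Micron
# does not state it explicitly in the announcement.
dies_12h, dies_16h = 12, 16
gb_per_die = 36 / dies_12h        # 3 GB per die, implied by 36GB / 12H
cap_16h = dies_16h * gb_per_die   # capacity of the 16-high stack
gain = cap_16h / 36 - 1
print(f"{cap_16h:.0f} GB, +{gain:.0%}")  # → 48 GB, +33%
```

The exact ratio is 4/3, which the articles round to "approximately 33%" or "roughly one-third."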
Micron also updated the storage side of its roadmap with the 9650 PCIe Gen6 SSD, which entered mass production last month. Designed for Nvidia's BlueField-4 STX reference architecture, the drive supports up to 28 GB/s sequential read throughput and 5.5 million random read IOPS, doubling PCIe 5.0 read performance while delivering 100% higher performance per watt.[1][3] The drive targets AI inference, training, and agentic workloads in liquid-cooled environments, addressing data center deployments where storage throughput must scale alongside faster compute and larger memory footprints.[2]
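To put the IOPS figure in the same units as the sequential number: at an assumed 4 KiB transfer size (the articles do not state the block size behind Micron's IOPS rating), 5.5 million random reads per second implies a random-read bandwidth close to the drive's sequential ceiling.

```python
# Bandwidth implied by the 9650's random-read IOPS rating.
# The 4 KiB block size is an assumption, not from the article.
iops = 5.5e6
block_bytes = 4 * 1024
random_read_gbs = iops * block_bytes / 1e9  # decimal GB/s
print(f"{random_read_gbs:.1f} GB/s")  # → 22.5 GB/s
```

If that block-size assumption holds, small-block random reads alone would use about 80% of the drive's 28 GB/s sequential throughput, which is unusually balanced for a data center SSD.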
What distinguishes this announcement is Micron's coordinated approach to subsystem design. HBM4 targets the accelerator memory tier, SOCAMM2 modules address CPU-attached system memory, and the 9650 extends bandwidth at the storage layer. This represents a more complete infrastructure strategy than competing in a single niche, as AI servers increasingly require balanced design to avoid bottlenecks that can appear anywhere from on-package memory to host memory to storage.[2]
"The next era of AI will be defined by tightly integrated platforms developed through joint engineering innovations across the ecosystem. Our close collaboration with NVIDIA ensures that compute and memory are designed to scale together from day one," said Sumit Sadana, executive vice president and chief business officer at Micron Technology
1
. As Vera Rubin-era platforms approach deployment through 2026, Micron's strategy positions the company across the full memory and storage stack, forming a core foundation for next-generation AI servers entering service this year2
.Summarized by
Navi
[1] 02 Jan 2026 · Technology
[2] 10 Feb 2026 · Technology
[3] 11 Jun 2025 · Business and Economy