SK hynix begins mass production of 192GB SOCAMM2 memory for NVIDIA Vera Rubin AI platform


SK hynix has started mass production of its 192GB SOCAMM2 memory module targeting NVIDIA's Vera Rubin AI platform. Built on LPDDR5X technology and a 1c-class DRAM process, the module delivers more than double the bandwidth and over 75% better power efficiency than conventional RDIMM modules, addressing critical memory bottlenecks in AI server deployments.

SK hynix Launches Advanced Memory Solution for AI Infrastructure

SK hynix has begun mass production of its 192GB SOCAMM2 memory module, marking a significant development in AI server deployments as the industry shifts toward more efficient hardware architectures [1]. The new module is specifically designed for NVIDIA Vera Rubin, the company's next-generation AI computing platform, and represents a fundamental shift in how memory solutions are optimized for artificial intelligence workloads [2].

Source: Wccftech


Built using LPDDR5X memory and manufactured with a 1c-class DRAM process—the sixth generation of 10nm technology—the module delivers performance gains that address critical bottlenecks in AI systems. According to SK hynix, the 192GB SOCAMM2 memory module provides more than double the bandwidth of conventional RDIMM modules while achieving over 75% improved power efficiency [3]. These improvements are particularly vital for large language model training and inference operations involving hundreds of billions of parameters, where memory throughput often becomes a limiting factor.
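Why memory throughput caps inference speed can be seen with a rough back-of-envelope calculation: during decoding, every generated token must stream the full set of model weights from memory. The sketch below is illustrative only — the model size, weight precision, and bandwidth figures are assumptions, not SOCAMM2 or Vera Rubin specifications.

```python
# Back-of-envelope estimate of why memory bandwidth bounds LLM decode speed.
# All numbers are illustrative assumptions, not product specifications.

def max_tokens_per_second(model_params_billion: float,
                          bytes_per_param: int,
                          bandwidth_gb_s: float) -> float:
    """Decode is memory-bound: each generated token streams all model
    weights from memory, so bandwidth / model size bounds tokens per second."""
    model_bytes = model_params_billion * 1e9 * bytes_per_param
    return bandwidth_gb_s * 1e9 / model_bytes

# A hypothetical 70B-parameter model stored in 8-bit weights (70 GB):
baseline = max_tokens_per_second(70, 1, 400)  # assumed 400 GB/s memory system
doubled  = max_tokens_per_second(70, 1, 800)  # "more than double" the bandwidth

print(f"{baseline:.1f} tokens/s -> {doubled:.1f} tokens/s")
```

Under these assumptions the upper bound on generation speed scales linearly with bandwidth, which is why doubling module bandwidth matters so much more for inference than raw compute gains.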

Understanding the SOCAMM2 Architecture

The Small Outline Compression Attached Memory Module 2 represents a departure from traditional server memory designs. Unlike standard RDIMM configurations, SOCAMM2 employs a compact form factor combined with a compression-attached design that enhances signal integrity [1]. This compression connector not only stabilizes signal transmission but also enables easier module replacement and upgrades in data center environments [4].

The use of LPDDR technology, traditionally associated with low-power mobile DRAM in smartphones, reflects a broader industry trend toward optimizing performance per watt rather than pursuing raw speed alone. By adapting LPDDR for server environments, SK hynix addresses both thermal and space efficiency challenges that plague next-generation AI data centers [2]. The slim profile and high-density design enable increased compute density without proportionally escalating power and cooling requirements.
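To see what the two headline figures imply together, treat bandwidth per watt as the efficiency metric. The arithmetic below is a sketch: the 10 W baseline module power is a hypothetical number chosen only to work through the stated ratios.

```python
# What "2x bandwidth at 75% better power efficiency" implies for module power.
# Assumes efficiency = bandwidth per watt; the 10 W baseline is hypothetical.

rdimm_bandwidth = 1.0        # normalized baseline bandwidth
rdimm_power_w   = 10.0       # hypothetical baseline module power

rdimm_eff    = rdimm_bandwidth / rdimm_power_w  # bandwidth per watt
socamm_eff   = rdimm_eff * 1.75                 # "over 75% improved" efficiency
socamm_bw    = rdimm_bandwidth * 2.0            # "more than double" bandwidth
socamm_power = socamm_bw / socamm_eff           # power needed at 2x bandwidth

print(f"SOCAMM2 ~{socamm_power:.1f} W vs RDIMM {rdimm_power_w:.1f} W")
# Roughly: twice the bandwidth for only ~14% more power, per these assumptions.
```

In other words, the efficiency claim means the bandwidth doubling comes at a far smaller increase in power draw, which is the property the article ties to data center cooling budgets.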

Source: TweakTown


Addressing Memory Bottlenecks in AI Workloads

Memory bottlenecks have emerged as a critical constraint in AI infrastructure, particularly as models scale to unprecedented sizes. SK hynix expects the new module to fundamentally resolve these bottlenecks during both training and inference phases of large language models [3]. The improved bandwidth reduces latency in data movement between memory and processing units, while the enhanced energy efficiency helps control overall system power consumption—a growing concern as AI operations expand globally.

In typical AI server configurations, SOCAMM2 is positioned on the system board next to logic chips such as GPUs or CPUs, complementing high-bandwidth memory (HBM) that sits within the processor package. While HBM accelerates computing operations, SOCAMM2 improves system-level power efficiency by replacing less efficient DDR-based memory modules [4]. This creates a multi-tier memory hierarchy consisting of HBM, SOCAMM2, DDR5 modules, and CXL memory serving as expanded memory.
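The tiering described above can be summarized in a small descriptive sketch, ordered from closest to the compute die outward. The roles and placements follow the article; this is an illustration, not an NVIDIA or SK hynix configuration.

```python
# Descriptive sketch of the multi-tier memory hierarchy named in the article,
# ordered closest-to-compute first. Roles paraphrase the text; no capacities
# or speeds are specified because the article gives none per tier.

from dataclasses import dataclass

@dataclass
class MemoryTier:
    name: str
    location: str
    role: str

hierarchy = [
    MemoryTier("HBM", "inside the processor package", "accelerates compute"),
    MemoryTier("SOCAMM2", "on the board, next to GPU/CPU",
               "high-bandwidth, power-efficient main memory"),
    MemoryTier("DDR5", "standard DIMM slots", "conventional main memory"),
    MemoryTier("CXL memory", "attached over a CXL link", "expanded capacity"),
]

for tier in hierarchy:
    print(f"{tier.name:10s} | {tier.location:30s} | {tier.role}")
```

The design intent, per the article, is that each tier trades some proximity to the compute die for capacity, with SOCAMM2 filling the gap between in-package HBM and conventional DIMMs.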

Source: Korea Times


Strategic Partnership and Market Positioning

The collaboration between SK hynix and NVIDIA highlights the trend toward tightly integrated hardware ecosystems where memory, compute, and interconnect components work as unified systems. NVIDIA will source SOCAMM2 from all three major memory manufacturers—SK hynix, Samsung, and Micron—to maintain a diversified supply chain capable of meeting demand as Vera Rubin takes center stage in the agentic AI era [3].

Justin Kim, President and Head of AI Infra at SK hynix, stated: "By supplying the 192GB SOCAMM2, SK hynix has established a new standard for AI memory performance. We will solidify our position as the most trusted AI memory solution provider, through close collaboration with our global AI customers" [2]. The company has stabilized mass production ahead of schedule and will begin supplying modules to NVIDIA at the end of this month, demonstrating its commitment to meeting Cloud Service Provider demands.

As the AI market continues its shift from training to inference operations, SOCAMM2 gains attention as a solution capable of running large language models with substantially reduced power consumption. This positions SK hynix to capture a significant share of the expanding AI memory market while addressing the dual challenges of performance and sustainability that define modern data center operations.
