Nvidia Collaborates with Major Memory Makers on New SOCAMM Format for AI Servers


Nvidia partners with Samsung, SK Hynix, and Micron to develop SOCAMM, a new proprietary memory format for AI servers, offering higher performance and efficiency compared to traditional memory modules.


Nvidia Introduces SOCAMM: A New Memory Standard for AI Servers

In a significant development for the AI hardware industry, Nvidia has partnered with major memory manufacturers Samsung, SK Hynix, and Micron to create a new proprietary memory format called SOCAMM (Small Outline Compression Attached Memory Module). This collaboration, unveiled at Nvidia GTC 2025, aims to enhance the performance and efficiency of AI servers [1][2].

SOCAMM: Technical Specifications and Advantages

SOCAMM is based on LPDDR5X technology and is designed specifically for Nvidia's Grace Blackwell platform. The new memory format offers several advantages over traditional memory modules like RDIMMs and MRDIMMs:

  1. Higher bandwidth: SOCAMM delivers more than 2.5 times the bandwidth of RDIMMs at the same capacity [2].
  2. Lower power consumption: It uses only one-third the power of regular DDR5 DIMMs [2].
  3. Smaller footprint: The compact 14x90mm design occupies just one-third the area of a standard RDIMM form factor [1][2].
  4. Improved efficiency: SOCAMM is optimized for efficient server layouts and thermal management [1].
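The relative figures above can be put into perspective with a quick back-of-the-envelope comparison. The RDIMM baseline numbers below are illustrative placeholders, not vendor specifications; only the ratios and the 14x90mm dimensions come from the article:

```python
# Rough comparison of SOCAMM vs. a conventional RDIMM, using only the
# relative figures cited above. The RDIMM baseline values are
# illustrative assumptions, not real product specifications.

RDIMM_BANDWIDTH_GBPS = 100.0   # hypothetical baseline
RDIMM_POWER_W = 15.0           # hypothetical baseline

socamm_bandwidth = RDIMM_BANDWIDTH_GBPS * 2.5   # ">2.5x the bandwidth"
socamm_power = RDIMM_POWER_W / 3                # "one-third of the power"

# SOCAMM footprint from its stated 14 x 90 mm dimensions
socamm_area_mm2 = 14 * 90              # 1260 mm^2
rdimm_area_mm2 = socamm_area_mm2 * 3   # SOCAMM is one-third the RDIMM size

print(f"SOCAMM bandwidth: {socamm_bandwidth:.0f} GB/s (vs {RDIMM_BANDWIDTH_GBPS:.0f} GB/s baseline)")
print(f"SOCAMM power:     {socamm_power:.1f} W   (vs {RDIMM_POWER_W:.1f} W baseline)")
print(f"SOCAMM area:      {socamm_area_mm2} mm^2 (vs ~{rdimm_area_mm2} mm^2)")
```

At these ratios, a rack that holds a fixed power and board-area budget could fit roughly three times the memory capacity, which is the layout advantage the article alludes to.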

Micron's SOCAMM Implementation

Micron has taken the lead in SOCAMM production, announcing that it will be the first to ship these products in volume. Key features of Micron's SOCAMM modules include:

  1. 128GB capacity: Achieved through 16-die stacks of LPDDR5X memory [2].
  2. Compatibility: Designed specifically for the Nvidia GB300 Grace Blackwell Ultra Superchip [1][2].
  3. AI optimization: Tailored for training large AI models and supporting more concurrent users on inference workloads [2].
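The 128GB figure can be sanity-checked against the stated 16-die stacks. The article does not say how many stacks a module carries, so the sketch below shows the implied per-die capacity for a few plausible stack counts (an assumption for illustration only):

```python
# Back-of-the-envelope check on the capacity figure: 128 GB per module,
# built from 16-die stacks of LPDDR5X. The stack counts iterated below
# are assumed for illustration; the article does not specify them.

MODULE_CAPACITY_GB = 128
DIES_PER_STACK = 16

for stacks in (1, 2, 4):
    dies = stacks * DIES_PER_STACK
    gb_per_die = MODULE_CAPACITY_GB / dies
    gbit_per_die = gb_per_die * 8  # 1 GB = 8 Gbit; DRAM dies are rated in Gbit
    print(f"{stacks} stack(s): {dies} dies, {gb_per_die:g} GB ({gbit_per_die:g} Gbit) per die")
```

Whatever the actual stack count, the per-die densities implied here fall within the range typical of current LPDDR5X parts.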

Industry Impact and Future Prospects

The introduction of SOCAMM represents a significant shift in the AI hardware landscape:

  1. Exclusive to Nvidia: SOCAMM is specific to Nvidia's AI architecture and cannot be used in AMD or Intel systems [1].
  2. Market positioning: SK Hynix is positioning SOCAMM as a key offering for future AI infrastructure [1].
  3. Production plans: While Micron has already begun volume production, SK Hynix plans to start mass production "in line with the market's emergence" [1].

Expert Opinions

Industry leaders have expressed optimism about the potential of SOCAMM:

Raj Narasimhan, SVP at Micron, stated, "AI is driving a paradigm shift in computing, and memory is at the heart of this evolution. Micron's contributions to the Nvidia Grace Blackwell platform yield performance and power-saving benefits for AI training and inference applications" [1][2].

Juseon Kim, President at SK Hynix, commented, "We are proud to present our line-up of industry-leading products at GTC 2025. With a differentiated competitiveness in the AI memory space, we are on track to bring our future as the Full Stack AI Memory Provider forward" [1].

As the AI industry continues to evolve rapidly, SOCAMM represents a significant advancement in memory technology, potentially reshaping the landscape of AI server architecture and performance.

TheOutpost.ai

© 2025 Triveous Technologies Private Limited