Samsung's HBM3E Memory Breakthrough: Nvidia Certification Boosts AI Chip Competition

Reviewed by Nidhi Govil


Samsung Electronics has successfully passed Nvidia's strict qualification tests for its 12-layer HBM3E memory chips, marking a significant comeback in the AI hardware race. This development has boosted investor confidence and set the stage for future competition in HBM4 technology.

Samsung's HBM3E Breakthrough

Samsung Electronics has achieved a major milestone in the AI hardware race by passing Nvidia's strict qualification tests for its 12-layer HBM3E (High Bandwidth Memory) chips [1][2]. This comes after 18 months of effort and multiple certification delays, marking a significant comeback for the Korean tech giant in the competitive semiconductor industry [1].

Source: TweakTown


Market Impact and Competition

The news of Samsung's certification led to an immediate 5% jump in the company's stock price, reflecting increased investor confidence [1][5]. Samsung is now poised to challenge the dominance of SK hynix and Micron, which were previously the exclusive suppliers of HBM3E chips to Nvidia [3].

Source: Analytics Insight


Supply and Implementation

Samsung has secured a contract to supply approximately 10,000 units of its qualified HBM3E 12-Hi product to Nvidia [3]. These chips are expected to be used in Nvidia's DGX B300 systems and AMD's Instinct MI350 accelerators [1][2]. However, Samsung's supply to Nvidia is expected to remain limited in the near term, as it is the third company to gain clearance, after SK hynix and Micron [4].

Technical Specifications

The HBM3E chips stack 12 DRAM layers, up from eight in standard HBM3, and deliver a bandwidth of roughly 1.2 TB/s per stack [1]. This high-performance memory is crucial for powering advanced AI models and accelerators in data centers.
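The per-stack figure follows from interface width times per-pin data rate. As a rough sanity check (assuming the JEDEC-standard 1024-bit HBM3E interface and an illustrative 9.6 Gbps per-pin rate, neither of which is stated in the article):

```python
def stack_bandwidth_tbs(bus_width_bits: int, pin_rate_gbps: float) -> float:
    """Per-stack bandwidth in TB/s: interface width (bits) times per-pin
    data rate (Gbit/s), divided by 8 bits/byte and 1000 GB/TB."""
    return bus_width_bits * pin_rate_gbps / 8 / 1000

# 1024-bit interface at an assumed 9.6 Gbps/pin reproduces the
# ~1.2 TB/s per stack cited in the article.
print(f"HBM3E: {stack_bandwidth_tbs(1024, 9.6):.2f} TB/s per stack")
# → HBM3E: 1.23 TB/s per stack
```

Faster-binned parts shift the result slightly, but the arithmetic is the same.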

Future Prospects: HBM4

While the HBM3E certification is a significant achievement, the industry is already looking ahead to HBM4, the next generation of high-bandwidth memory [1][3]. HBM4 is expected to double the bus width, potentially reaching 2 TB/s of bandwidth per stack. Nvidia is pushing for speeds of 10 to 11 Gbps per pin, significantly higher than the JEDEC standard of 8 Gbps [1].
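Doubling the bus width means a 2048-bit interface per stack (an inference from the article's "double the bus width"; HBM3E's 1024-bit width is assumed). Plugging in the JEDEC 8 Gbps baseline and the pin speeds Nvidia is reportedly pushing shows where the headline numbers come from:

```python
def stack_bw_tbs(bus_width_bits: int, pin_rate_gbps: float) -> float:
    # bits x (Gbit/s per pin) -> total Gbit/s; /8 -> GB/s; /1000 -> TB/s
    return bus_width_bits * pin_rate_gbps / 8 / 1000

HBM4_BUS_BITS = 2048  # double HBM3E's assumed 1024-bit interface
for rate in (8.0, 10.0, 11.0):  # JEDEC baseline vs. speeds Nvidia wants
    print(f"{rate:.0f} Gbps/pin -> {stack_bw_tbs(HBM4_BUS_BITS, rate):.2f} TB/s per stack")
```

At 8 Gbps/pin this gives about 2.05 TB/s per stack, matching the "2 TB/s" figure, and Nvidia's 10 to 11 Gbps target would push a stack to roughly 2.56 to 2.82 TB/s.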

Source: TechSpot


Samsung is actively preparing for the HBM4 race, with plans to deliver its first HBM4 samples to Nvidia for validation later this month [2]. The company aims to begin volume production as early as the first half of 2026 [1].

This development not only strengthens Samsung's position in the AI hardware market but also intensifies the competition among memory manufacturers, potentially driving further innovation in high-performance computing and artificial intelligence technologies.

TheOutpost.ai


© 2025 Triveous Technologies Private Limited