SK hynix Leads the Charge in HBM4 Development, Paving the Way for Next-Gen AI Accelerators

SK hynix announces completion of HBM4 development and readiness for mass production, promising significant improvements in bandwidth and efficiency for AI applications.

SK hynix Achieves HBM4 Milestone

SK hynix, the South Korean memory giant, has announced a significant breakthrough in the development of High Bandwidth Memory (HBM) technology. The company has successfully completed the development of HBM4 and is now prepared for high-volume manufacturing of the memory stacks [1][2]. This achievement marks a crucial step forward in meeting the growing demands of AI accelerators and data centers.

Source: Tom's Hardware

Technical Advancements in HBM4

The new HBM4 memory stacks from SK hynix boast impressive specifications that go beyond the JEDEC standard:

  1. A 2,048-bit I/O interface, doubling the width of the HBM interface for the first time since 2015 [1].
  2. A data transfer rate of 10 GT/s, which is 25% higher than the official JEDEC standard of 8 GT/s [1][4]; a rough per-stack bandwidth figure is worked out below.
  3. Power efficiency improved by more than 40% compared to previous generations [4].
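
As a rough check on these figures, per-stack bandwidth follows directly from the interface width and transfer rate: 2,048 bits per transfer at 10 GT/s works out to roughly 2.56 TB/s per stack, versus about 2.05 TB/s at the 8 GT/s JEDEC baseline. A minimal sketch of that arithmetic (only the interface width and transfer rates come from the reporting; the rest is simple math):

```python
# Back-of-the-envelope HBM4 per-stack bandwidth.
# Only the 2,048-bit interface width and the 8 / 10 GT/s transfer rates
# come from the article; everything else is straightforward arithmetic.

BUS_WIDTH_BITS = 2048  # HBM4 I/O interface width per stack


def stack_bandwidth_tbps(transfer_rate_gtps: float) -> float:
    """Peak per-stack bandwidth in TB/s for a given transfer rate in GT/s."""
    bytes_per_transfer = BUS_WIDTH_BITS / 8  # 256 bytes per transfer
    return transfer_rate_gtps * 1e9 * bytes_per_transfer / 1e12


print(f"JEDEC 8 GT/s    : {stack_bandwidth_tbps(8.0):.2f} TB/s per stack")   # ~2.05 TB/s
print(f"SK hynix 10 GT/s: {stack_bandwidth_tbps(10.0):.2f} TB/s per stack")  # ~2.56 TB/s
```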

These advancements are made possible by SK hynix's use of its 1b-nm (5th generation 10nm-class) process technology for the DRAM dies and the continued implementation of its proven advanced Mass Reflow Molded Underfill (MR-MUF) packaging method [1][4].

Source: The Register

Impact on AI and Data Center Performance

The development of HBM4 is expected to have a significant impact on AI and data center performance:

  1. SK hynix projects up to a 69% improvement in AI service performance when HBM4 is applied [4].
  2. The increased bandwidth and improved power efficiency are poised to address data bottlenecks and reduce data center power costs [4].
  3. Major AI chipmakers like Nvidia and AMD are planning to incorporate HBM4 into their next-generation datacenter GPUs, with Nvidia's Rubin GPUs aiming for 13 TB/s of aggregate bandwidth and AMD's MI400-series targeting up to 20 TB/s [2]; a rough sense of what those targets imply per package is sketched below.
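
Those aggregate figures can be related back to per-stack bandwidth. Purely as an illustration (the per-stack speeds and the resulting stack counts below are assumptions, not disclosed GPU configurations), the following sketch estimates how many HBM4 stacks a package would need to hit each target:

```python
import math

# Illustrative only: per-stack figures assume a 2,048-bit interface at the
# quoted transfer rates; the actual stack counts and speeds used by these
# GPUs are not disclosed in the cited reports.
PER_STACK_TBPS = {
    "8 GT/s (JEDEC)": 2.048,
    "10 GT/s (SK hynix)": 2.56,
}
AGGREGATE_TARGETS_TBPS = {
    "Nvidia Rubin": 13.0,
    "AMD MI400-series": 20.0,
}

for gpu, target in AGGREGATE_TARGETS_TBPS.items():
    for rate, per_stack in PER_STACK_TBPS.items():
        stacks = math.ceil(target / per_stack)  # minimum stacks to reach the target
        print(f"{gpu}: {target:.0f} TB/s at {rate} -> at least {stacks} stacks")
```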

Market Implications and Competition

SK hynix's announcement has already had a positive impact on its market position:

  1. The company's share price rallied by 7% following the news [2][3].
  2. Analysts project that SK hynix could maintain an HBM market share in the low-60% range in 2026, supported by early HBM4 supply to key customers [3].
  3. SK hynix is currently the main HBM supplier to Nvidia, although Samsung Electronics and Micron are also in the race to bring HBM4 to market [3].

Source: Reuters

Future Outlook and Production Timeline

While SK hynix has not disclosed specific details about the number of DRAM layers or the capacity of its HBM4 devices, industry speculation suggests they may be 12-Hi 36 GB devices [1]. The company has stated that it has established a production system for customers and is ready for mass production [3][4].
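
For context on that speculation, a 36 GB stack built from 12 DRAM layers implies 3 GB (24 Gbit) per die. A one-line check of the rumored figure (the per-die density is an inference, not an SK hynix disclosure):

```python
# Sanity check on the rumored 12-Hi 36 GB configuration: 36 GB across
# 12 DRAM layers implies 3 GB (24 Gbit) per die (an assumed density,
# not an SK hynix disclosure).
layers = 12
die_capacity_gbit = 24  # assumed per-die density
stack_capacity_gb = layers * die_capacity_gbit / 8
print(f"{layers}-Hi x {die_capacity_gbit} Gbit dies = {stack_capacity_gb:.0f} GB per stack")  # 36 GB
```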

As the AI industry continues to evolve rapidly, the availability of HBM4 from SK hynix and its competitors will play a crucial role in enabling the next generation of high-performance AI accelerators and data center GPUs.
