The Outpost is a comprehensive collection of curated artificial intelligence software tools that cater to the needs of small business owners, bloggers, artists, musicians, entrepreneurs, marketers, writers, and researchers.
© 2025 TheOutpost.AI All rights reserved
Curated by THEOUTPOST
On Mon, 28 Apr, 8:01 AM UTC
2 Sources
[1]
SK hynix showcases world's first HBM4: 16-Hi stacks, 2TB/sec memory bandwidth, TSMC logic die
SK hynix showed off its next-gen HBM4 memory at TSMC's recent North American Technology Symposium, with up to 16-Hi stacks and 2TB/sec of memory bandwidth per stack, ready for NVIDIA's next-gen Vera Rubin AI hardware. The company demonstrated both 12-Hi and 16-Hi stacks of HBM4 memory, which feature capacities of up to 48GB, up to 2TB/sec of memory bandwidth, and I/O speeds rated at 8Gbps. The South Korean memory leader announced mass production for 2H 2025, with the first HBM4-equipped AI GPUs arriving by the end of this year and HBM4-powered parts like NVIDIA's next-gen Vera Rubin filling the market in 2026. SK hynix's 16-layer HBM3E memory, meanwhile, is headed for NVIDIA's upcoming GB300 "Blackwell Ultra" AI GPUs, with NVIDIA planning to shift fully to HBM4 starting with Vera Rubin. SK hynix also pointed out that it managed the high layer counts using its Advanced MR-MUF and TSV technologies. The company additionally showed off its family of server memory modules, with RDIMM and MRDIMM high-performance server modules now being built on the latest 1c DRAM node, pushing module speeds up to an impressive 12.8Gbps. SK hynix said: "Notably, SK hynix exhibited a range of modules designed to enhance AI and data center performance while reducing power consumption. These included the MRDIMM lineup with a speed of 12.8 gigabits per second (Gbps) and capacities of 64 GB, 96 GB, and 256 GB; RDIMM modules with a speed of 8 Gbps in 64 GB and 96 GB capacities; and a 256 GB 3DS RDIMM".
[2]
SK Hynix Showcases World's First HBM4 Technology To The Public; Featuring 16-Hi Stacks, 2.0 TB/s Bandwidth & TSMC Logic Die
SK hynix decided to showcase its HBM4 implementation to the public at TSMC's NA Technology Symposium, alongside several other memory products. When it comes to HBM manufacturers in the market, SK hynix looks to be well ahead of all others, especially with its HBM4 technology. The firm is claimed to have already prepared a commercial version of the process, whilst competitors like Micron and Samsung are still in the sampling stages, which shows that, at least for now, SK hynix is winning the race. At TSMC's North America Technology Symposium, the firm showcased what it calls "AI memory" leadership by unveiling the new products discussed below. First and foremost, SK hynix gave the public a preview of its HBM4 process, along with a brief rundown of its specifications: a capacity of up to 48 GB, 2.0 TB/s of bandwidth, and an I/O speed rated at 8.0 Gbps. SK hynix has announced that it is targeting mass production by H2 2025, which means the process could see integration into products as early as the end of this year. It is important to note that the Korean giant is the only firm that has showcased HBM4 to the public. Alongside HBM4, we saw SK hynix's implementation of 16-layer HBM3E, also a first of its kind, featuring 1.2 TB/s of bandwidth. This particular standard is said to be integrated into NVIDIA's GB300 "Blackwell Ultra" AI clusters, as NVIDIA plans to transition to HBM4 with Vera Rubin. Interestingly, SK hynix says it has managed to connect so many layers through its Advanced MR-MUF and TSV technologies, of which it is likely the pioneer. Apart from HBM, SK hynix also showcased its lineup of server memory modules, notably RDIMM and MRDIMM products.
High-performance server modules are now being built on the newer 1c DRAM node, allowing the modules to reach speeds of up to 12.8 Gbps. In SK hynix's words: "Notably, SK hynix exhibited a range of modules designed to enhance AI and data center performance while reducing power consumption. These included the MRDIMM lineup with a speed of 12.8 gigabits per second (Gbps) and capacities of 64 GB, 96 GB, and 256 GB; RDIMM modules with a speed of 8 Gbps in 64 GB and 96 GB capacities; and a 256 GB 3DS RDIMM." There's no doubt that SK hynix currently has an edge in the HBM and DRAM markets, beating long-standing players like Samsung mainly by driving innovation and through partnerships with the likes of NVIDIA.
SK Hynix showcases groundbreaking HBM4 memory technology at TSMC's North American Technology Symposium, featuring 16-Hi stacks and 2TB/sec bandwidth, positioning itself as a leader in AI memory solutions.
SK hynix, a South Korean memory leader, has unveiled the world's first HBM4 (High Bandwidth Memory) technology at TSMC's North American Technology Symposium, showcasing significant advancements in memory solutions for AI and data centers [1][2]. This breakthrough positions SK hynix at the forefront of AI memory innovation, potentially reshaping the landscape of high-performance computing.
The new HBM4 technology boasts impressive specifications:
- Capacity of up to 48 GB per stack
- Memory bandwidth of up to 2 TB/s per stack
- I/O speeds rated at 8 Gbps
SK hynix demonstrated both 12-Hi and 16-Hi stacks, with the latter representing the pinnacle of current memory stacking technology [1]. The company has achieved this high number of layers through the use of Advanced MR-MUF and TSV (Through-Silicon Via) technologies [2].
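The headline bandwidth figures follow directly from multiplying the per-pin I/O speed by the stack's interface width. A minimal sketch of the arithmetic, assuming the 2,048-bit per-stack interface specified for HBM4 (double the 1,024 bits of HBM3E):

```python
# Peak per-stack bandwidth = per-pin speed (Gbps) x interface width (bits) / 8 bits per byte.
def stack_bandwidth_gbs(pin_speed_gbps: float, bus_width_bits: int) -> float:
    """Peak per-stack bandwidth in GB/s."""
    return pin_speed_gbps * bus_width_bits / 8

# 8.0 Gbps pins on HBM4's 2,048-bit interface -> 2,048 GB/s, i.e. the quoted ~2 TB/s.
print(stack_bandwidth_gbs(8.0, 2048))   # 2048.0
# 9.6 Gbps pins on HBM3E's 1,024-bit interface -> the quoted ~1.2 TB/s.
print(stack_bandwidth_gbs(9.6, 1024))   # 1228.8
```

At the same 8 Gbps pin speed, it is the doubled interface width, not faster pins, that carries a stack from the roughly 1 TB/s class to 2 TB/s.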
SK hynix has announced plans for mass production of HBM4 in the second half of 2025, with integration into AI GPUs expected by the end of this year [1]. This aggressive timeline suggests that HBM4-powered AI GPUs could flood the market by 2026, potentially revolutionizing AI computing capabilities.
NVIDIA is planning a full transition to HBM4 memory with its next-generation Vera Rubin architecture [1][2], moving on from the HBM3E used in its upcoming GB300 "Blackwell Ultra" AI GPUs and highlighting the critical role of SK hynix's technology in future AI hardware developments.
Alongside HBM4, SK hynix showcased other memory products at the symposium:
HBM3E: A 16-layer implementation with 1.2TB/s bandwidth, targeted for NVIDIA's GB300 "Blackwell Ultra" AI clusters [2].
Server Memory Modules: New RDIMM and MRDIMM products built on the latest 1c DRAM node, achieving pin speeds of up to 12.8 Gbps [1][2].
AI and Data Center Modules: A range of modules designed to enhance performance while reducing power consumption, including:
- MRDIMMs at 12.8 Gbps in 64 GB, 96 GB, and 256 GB capacities
- RDIMMs at 8 Gbps in 64 GB and 96 GB capacities
- A 256 GB 3DS RDIMM
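The module speeds relate the same way: a quoted per-pin rate becomes per-module bandwidth once multiplied by the module's data width. A rough sketch, assuming the conventional 64-bit (8-byte) DDR module data path:

```python
# Peak module bandwidth = transfer rate (MT/s) x bytes moved per transfer.
# Assumes a conventional 64-bit (8-byte) DDR data path per module.
def module_bandwidth_gbs(transfers_mts: int, data_bytes: int = 8) -> float:
    """Peak module bandwidth in GB/s (decimal units)."""
    return transfers_mts * data_bytes / 1000

print(module_bandwidth_gbs(12_800))  # MRDIMM at 12.8 Gbps per pin -> 102.4
print(module_bandwidth_gbs(8_000))   # RDIMM at 8 Gbps per pin -> 64.0
```

Under that assumption, a 12.8 Gbps MRDIMM peaks around 102 GB/s per module, versus 64 GB/s for the 8 Gbps RDIMM.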
SK hynix's advancements in HBM4 and other memory technologies demonstrate its growing dominance in the memory market. The company appears to be outpacing competitors like Micron and Samsung, particularly in HBM4 development [2]. This leadership position could have significant implications for the AI and high-performance computing industries, potentially influencing future hardware designs and capabilities.
SK hynix has begun sampling its groundbreaking 12-layer HBM4 memory, offering unprecedented capacity and bandwidth for AI acceleration. This development marks a significant leap in memory technology for AI applications.
5 Sources
SK Hynix and Micron are gearing up for the production of next-generation High Bandwidth Memory (HBM) technologies, with SK Hynix focusing on HBM3E for 2025 and Micron targeting HBM4 for 2026, driven by increasing demand in AI GPU components.
3 Sources
Rambus has announced details of its HBM4 memory controller, promising significant improvements in speed, bandwidth, and capacity. This new technology could revolutionize high-performance computing and AI applications.
2 Sources
SK Hynix strengthens its position in the AI chip market by advancing HBM4 production and introducing new HBM3E technology, responding to Nvidia's request for faster delivery amid growing competition with Samsung.
12 Sources
SK Hynix has started mass production of its cutting-edge 12-layer HBM3E memory modules, offering 36GB capacity per module and speeds up to 9.6 Gbps. This breakthrough is set to revolutionize high-performance computing and AI applications.
9 Sources