On Wed, 2 Oct, 4:04 PM UTC
2 Sources
[1]
Rambus Outlines HBM4 Memory Controller: Up To 10 Gb/s Speeds, 2.56 TB/s Bandwidth & 64 GB Capacities Per Stack
Rambus has detailed its next-gen HBM4 memory controller, which will allow significant uplifts over existing HBM3 and HBM3E solutions.

HBM4 will begin the next chapter of AI and data center evolution, delivering faster memory speeds and higher capacities per stack. As JEDEC moves toward finalizing the HBM4 memory spec, we have our first details of what the next-generation solution will offer. Aimed primarily at the AI and data center market, HBM4 is set to continue expanding the capabilities of existing HBM DRAM designs.

Starting with the details, Rambus announced that its HBM4 memory controller will offer over 6.4 Gb/s per pin, faster than first-generation HBM3, while delivering more bandwidth than HBM3E solutions using the same 16-Hi stack and 64 GB max capacity design. The starting bandwidth for HBM4 is rated at 1638 GB/s, which is 33% higher than HBM3E and 2x higher than HBM3. Currently, HBM3E solutions operate at up to 9.6 Gb/s with up to 1.229 TB/s of bandwidth per stack. HBM4 will offer up to 10 Gb/s and up to 2.56 TB/s of bandwidth per HBM interface, more than a 2x increase over HBM3E, though the full capabilities of HBM4 won't be seen for a while and will only become accessible once yields improve. Other features of the HBM4 memory solution include ECC, RMW (Read-Modify-Write), and error scrubbing.

As of right now, SK hynix has reportedly begun mass production of its 12-layer HBM3E memory with up to 36 GB capacities and 9.6 Gbps speeds, while its next-gen HBM4 memory is expected to tape out this month. Meanwhile, Samsung is expected to enter mass production of its HBM4 memory by the end of 2025, with tape-out expected this quarter. NVIDIA's Rubin GPUs, expected to arrive in 2026, are set to be the first AI platform to feature HBM4 memory support, and AMD's Instinct MI400 is also expected to utilize the next-gen design, though AMD hasn't confirmed that yet.
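Those bandwidth figures are easy to verify: per-stack bandwidth is simply per-pin speed times interface width. The 2048-bit width used below is an assumption (double HBM3's 1024-bit interface); the quoted numbers are consistent with it, but the source does not state it outright. A minimal sketch in Python:

    # Sanity check of the quoted HBM4 bandwidth figures.
    # Assumption (not stated in the source): a 2048-bit interface per stack,
    # i.e. double the 1024-bit width of HBM3/HBM3E.

    def stack_bandwidth_gb_s(pin_speed_gbps, pins=2048):
        # Per-stack bandwidth in GB/s: pin speed (Gb/s) * pin count / 8 bits per byte.
        return pin_speed_gbps * pins / 8

    print(stack_bandwidth_gb_s(6.4))             # 1638.4 -> the quoted 1638 GB/s starting point
    print(stack_bandwidth_gb_s(10.0))            # 2560.0 -> the quoted 2.56 TB/s peak
    print(stack_bandwidth_gb_s(9.6, pins=1024))  # 1228.8 -> HBM3E's quoted 1.229 TB/s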
[2]
Rambus details HBM4 memory controller: up to 10Gb/s, 2.56TB/sec bandwidth, 64GB per stack
Rambus has provided more details on its upcoming HBM4 memory controller, which offers some huge upgrades over current HBM3 and HBM3E memory controllers.

JEDEC is still finalizing the HBM4 memory specification, with Rambus teasing a next-gen HBM4 memory controller prepared for next-gen AI and data center markets, continuing to expand the capabilities of existing HBM DRAM designs. Rambus' new HBM4 controller will pump over 6.4Gb/s per pin, which is faster than first-gen HBM3 and offers more bandwidth than faster HBM3E memory using the same 16-Hi stack and 64GB max capacity design. HBM4 starting bandwidth is 1638GB/sec (1.63TB/sec), which is 33% faster than HBM3E and 2x faster than HBM3. HBM3E memory operates at 9.6Gb/s with up to 1.229TB/sec of memory bandwidth per stack, while HBM4 memory will offer up to 10Gb/s and a much bigger 2.56TB/sec of bandwidth per HBM interface. This is a 2x increase over the just-launched HBM3E, but the full capabilities of HBM4 memory won't be realized for a while yet (NVIDIA's next-gen Rubin R100 will use HBM4 in 2026). Rambus also highlighted other features of HBM4, including ECC, RMW (Read-Modify-Write), error scrubbing, and more.

South Korean memory giant SK hynix is the only company mass-producing new 12-layer HBM3E memory with up to 36GB capacities and 9.6Gbps speeds. Next-gen HBM4 memory from SK hynix is expected to tape out next month, while Samsung is gearing up for HBM4 mass production before the end of 2025, with tape-out expected in Q4 2024.

We're expecting the next-gen NVIDIA Rubin R100 AI GPUs to use a 4x reticle design (compared to Blackwell's 3.3x reticle design), made on TSMC's bleeding-edge CoWoS-L packaging technology on the new N3 process node. TSMC recently talked about up to 5.5x reticle-size chips arriving in 2026, featuring a 100 x 100mm substrate that would handle 12 HBM sites, versus 8 HBM sites on current-gen 80 x 80mm packages. TSMC will shift to a new SoIC design that will allow larger than 8x reticle sizes on a bigger 120 x 120mm package configuration, but as Wccftech points out, these are still being planned, so we can probably expect somewhere around the 4x reticle size for Rubin R100 AI GPUs.
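At the package level, those per-stack numbers compound quickly. A rough illustration of what 8 versus 12 HBM sites would yield at HBM4's quoted peaks (the stack counts below simply mirror the site counts mentioned above; they are not confirmed Rubin specifications):

    # Illustrative package-level totals at HBM4's quoted per-stack peaks.
    # Stack counts mirror the 8- and 12-site packages mentioned above;
    # they are assumptions, not confirmed GPU specifications.

    PER_STACK_BW_TB_S = 2.56   # quoted peak bandwidth per stack (TB/s)
    PER_STACK_CAP_GB = 64      # quoted max capacity per stack (GB)

    for sites in (8, 12):
        print(f"{sites} HBM sites: {sites * PER_STACK_BW_TB_S:.2f} TB/s aggregate, "
              f"{sites * PER_STACK_CAP_GB} GB total")
    # 8 HBM sites: 20.48 TB/s aggregate, 512 GB total
    # 12 HBM sites: 30.72 TB/s aggregate, 768 GB total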
Rambus has announced details of its HBM4 memory controller, promising significant improvements in speed, bandwidth, and capacity. This new technology could revolutionize high-performance computing and AI applications.
Rambus, a pioneer in high-speed interface technology, has unveiled its latest innovation: a memory controller for HBM4 (High Bandwidth Memory 4) [1]. This next-generation memory technology promises to deliver unprecedented performance and capacity, potentially reshaping the landscape of high-performance computing and artificial intelligence applications.
The HBM4 memory controller boasts impressive specifications, with speeds reaching up to 10 Gbps per pin. This represents a significant leap from its predecessor, HBM3, which operates at 6.4 Gbps [2]. The increased speed translates to a remarkable bandwidth of 2.56 TB/s per stack, setting a new standard for data transfer rates in memory technologies.
In addition to speed enhancements, HBM4 also addresses the growing demand for larger memory capacities. Each HBM4 stack can accommodate up to 64 GB of memory [1]. This substantial increase in capacity per stack allows for more efficient memory allocation and management in complex computing tasks.
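The 64 GB figure follows from the stack geometry: a 16-Hi stack implies 32 Gb (4 GB) per DRAM die. The per-die density is an inference from the two quoted numbers, not something either source states:

    # Decomposing the quoted 64 GB per-stack capacity.
    # The 16-Hi stack height is quoted; the 32 Gb per-die density is inferred
    # (64 GB / 16 dies = 4 GB = 32 Gb per die), not stated in the sources.

    stack_height = 16        # DRAM dies per stack (16-Hi)
    die_density_gbit = 32    # Gb per die (inferred)

    capacity_gb = stack_height * die_density_gbit / 8  # Gb -> GB
    print(capacity_gb)  # 64.0 GB per stack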
The advancements brought by HBM4 are expected to have far-reaching implications across various sectors. High-performance computing, artificial intelligence, machine learning, and data analytics are among the fields that stand to benefit significantly from this technology [2]. The increased bandwidth and capacity could enable more complex simulations, faster data processing, and improved AI model training.
Rambus has provided detailed technical specifications for the HBM4 memory controller. HBM4 doubles the interface to 2048 bits per stack, twice the width of HBM3's 1024-bit interface, organized as 32 independent channels [1]. The controller is designed to work with both HBM3 and HBM4 DRAMs, ensuring backward compatibility and flexibility in implementation.
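Dividing that interface evenly across the channels gives a per-channel view (a simplification: real HBM channels are further split into pseudo-channels, which this sketch ignores):

    # Per-channel view of the assumed 2048-bit, 32-channel HBM4 interface.
    # Even division is a simplification; pseudo-channel organization is ignored.

    interface_bits = 2048
    channels = 32
    pin_speed_gbps = 10.0

    bits_per_channel = interface_bits // channels                 # 64 bits
    bw_per_channel_gb_s = bits_per_channel * pin_speed_gbps / 8   # 80.0 GB/s
    print(bits_per_channel, bw_per_channel_gb_s)
    print(channels * bw_per_channel_gb_s)  # 2560.0 GB/s, matching the 2.56 TB/s peak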
The introduction of HBM4 is likely to spark a new wave of innovation in the semiconductor industry. As companies strive to incorporate this technology into their products, we may see a significant boost in the capabilities of next-generation GPUs, AI accelerators, and high-performance computing systems [2].
While Rambus has announced the specifications for HBM4, the timeline for commercial availability remains unclear. The technology is still in its early stages, and it may take some time before we see HBM4-equipped devices in the market [1]. However, the announcement has set the stage for future developments in high-bandwidth memory technologies.
SK hynix has begun sampling its groundbreaking 12-layer HBM4 memory, offering unprecedented capacity and bandwidth for AI acceleration. This development marks a significant leap in memory technology for AI applications.
5 Sources
SK Hynix and Micron are gearing up for the production of next-generation High Bandwidth Memory (HBM) technologies, with SK Hynix focusing on HBM3E for 2025 and Micron targeting HBM4 for 2026, driven by increasing demand for AI GPU components.
3 Sources
Micron Technology has introduced its latest High Bandwidth Memory (HBM) solution, the HBM3E, featuring unprecedented capacity and bandwidth. This advancement promises significant improvements for AI and high-performance computing applications.
3 Sources
SK Hynix strengthens its position in the AI chip market by advancing HBM4 production and introducing new HBM3E technology, responding to Nvidia's request for faster delivery amid growing competition with Samsung.
12 Sources
SK Hynix has started mass production of its cutting-edge 12-layer HBM3E memory modules, offering 36GB capacity per module and speeds up to 9.6 Gbps. This breakthrough is set to revolutionize high-performance computing and AI applications.
9 Sources