2 Sources
[1]
Marvell develops custom HBM solution that offers higher performance in a smaller physical space
Marvell has announced a custom high-bandwidth memory (CHBM) solution for its custom XPUs designed for AI applications at its Analyst Day 2024. Developed in partnership with leading memory makers, CHBM promises to optimize performance, power, memory capacity, die size, and cost for specific XPU designs. CHBM will be compatible with Marvell's custom XPUs and will not be part of a JEDEC-defined HBM standard, at least initially. Marvell's custom HBM solution allows tailoring interfaces and stacks for a particular application, though the company has not disclosed any details.

One of Marvell's goals is to reduce the real estate that industry-standard HBM interfaces occupy inside processors, freeing up that area for compute and features. The company asserts that with its proprietary die-to-die I/O, it will not only be able to pack up to 25% more logic into its custom XPUs, but also potentially install up to 33% more CHBM memory packages next to compute chiplets, increasing the amount of DRAM available to the processor. In addition, the company expects to cut memory interface power consumption by up to 70%.

Because Marvell's CHBM does not rely on a JEDEC-specified standard, on the hardware side it will require a new controller with a customizable physical interface, new die-to-die interfaces, and overhauled HBM base dies. The new Marvell die-to-die HBM interface will have a bandwidth of 20 Tbps/mm (2.5 TB/s per mm), a significant increase over the 5 Tbps/mm (625 GB/s per mm) that HBM offers today, according to a slide from the company's Analyst Day published by ServeTheHome. Over time, Marvell envisions bufferless memory at 50 Tbps/mm (6.25 TB/s per mm). Marvell does not specify how wide its CHBM interface will be.
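The bandwidth-density figures quoted above are straightforward unit conversions (1 byte = 8 bits, so X Tbps/mm equals X/8 TB/s per mm). A quick sketch makes the arithmetic behind the article's numbers explicit; the function name here is illustrative, not Marvell's terminology:

```python
def tbps_per_mm_to_tb_s_per_mm(tbps: float) -> float:
    """Convert shoreline bandwidth density from Tbps/mm to TB/s per mm
    (divide terabits by 8 to get terabytes)."""
    return tbps / 8

standard_hbm = tbps_per_mm_to_tb_s_per_mm(5)    # 0.625 TB/s per mm, i.e. 625 GB/s
marvell_chbm = tbps_per_mm_to_tb_s_per_mm(20)   # 2.5 TB/s per mm
bufferless   = tbps_per_mm_to_tb_s_per_mm(50)   # 6.25 TB/s per mm (future vision)

# CHBM's die-to-die interface is 4x denser than today's standard HBM interface.
print(f"CHBM vs. standard HBM: {marvell_chbm / standard_hbm:.0f}x")  # prints "4x"
```

The 4x jump in per-millimeter bandwidth density is what lets Marvell shrink the interface area while keeping (or growing) total memory bandwidth, which is where the claimed 25% logic and 33% memory-package gains come from.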
Marvell does not disclose many details about its custom HBM solution beyond saying that it 'enhances XPUs by serializing and speeding up the I/O interfaces between its internal AI compute accelerator silicon dies and the HBM base dies,' which somewhat implies a narrower interface width compared to industry-standard HBM3E or HBM4 solutions. Still, it appears CHBM solutions will be customizable.

"Enhancing XPUs by tailoring HBM for specific performance, power, and total cost of ownership is the latest step in a new paradigm in the way AI accelerators are designed and delivered," said Will Chu, Senior Vice President and General Manager of the Custom, Compute and Storage Group at Marvell. "We are very grateful to work with leading memory designers to accelerate this revolution and help cloud data center operators continue to scale their XPUs and infrastructure for the AI era."

Working with Micron, Samsung, and SK hynix is crucial for the successful implementation of Marvell's CHBM, as it sets the stage for relatively widespread availability of custom high-bandwidth memory.
[2]
Marvell unveils its new custom HBM compute architecture to optimize cloud AI accelerators
Marvell has just unveiled its new custom HBM compute architecture, enabling XPUs to reach even higher levels of compute performance and memory density. The new HBM compute architecture is available to all of its custom silicon partners, with Marvell collaborating with its cloud customers and leading HBM manufacturers SK hynix, Samsung, and Micron to develop custom HBM solutions for next-generation XPUs.

Will Chu, Senior Vice President and General Manager of the Custom, Compute and Storage Group at Marvell, said: "The leading cloud data center operators have scaled with custom infrastructure. Enhancing XPUs by tailoring HBM for specific performance, power, and total cost of ownership is the latest step in a new paradigm in the way AI accelerators are designed and delivered. We're very grateful to work with leading memory designers to accelerate this revolution and help cloud data center operators continue to scale their XPUs and infrastructure for the AI era".

Raj Narasimhan, Senior Vice President and General Manager of Micron's Compute and Networking Business Unit, added: "Increased memory capacity and bandwidth will help cloud operators efficiently scale their infrastructure for the AI era. Strategic collaborations focused on power efficiency, such as the one we have with Marvell, will build on Micron's industry-leading HBM power specs, and provide hyperscalers with a robust platform to deliver the capabilities and optimal performance required to scale AI".

Harry Yoon, Corporate Executive Vice President of Samsung Electronics and head of Americas products and solutions planning, said: "Optimizing HBM for specific XPUs and software environments will greatly improve the performance of cloud operators' infrastructure and ensure efficient power use. The advancement of AI depends on such focused efforts. We look forward to collaborating with Marvell, a leader in custom compute silicon innovation".
Sunny Kang, VP of DRAM Technology at SK hynix America, said: "By collaborating with Marvell, we can help our customers produce a more optimized solution for their workloads and infrastructure. As one of the leading pioneers of HBM, we look forward to shaping this next evolutionary stage for the technology".

Patrick Moorhead, CEO and Founder of Moor Insights & Strategy, added: "Custom XPUs deliver superior performance and performance per watt compared to merchant, general-purpose solutions for specific, cloud-unique workloads. Marvell, already a player in custom compute silicon, is already delivering tailored solutions to leading cloud companies. Their latest custom compute HBM architecture platform provides an additional lever to enhance the TCO for custom silicon. Through strategic collaboration with leading memory makers, Marvell is poised to empower cloud operators in scaling their XPUs and accelerated infrastructure, thereby paving the way for them to enable the future of AI".
Marvell has unveiled a custom high-bandwidth memory (CHBM) solution at its Analyst Day 2024. Developed in collaboration with major memory manufacturers, the technology is designed to optimize performance, power consumption, and memory capacity for specific XPU designs, enhancing AI accelerators and cloud infrastructure [1].

The CHBM solution offers several significant improvements over standard high-bandwidth memory [1]: up to 25% more logic packed into custom XPUs, up to 33% more CHBM packages alongside compute chiplets, up to 70% lower memory interface power consumption, and a die-to-die interface bandwidth of 20 Tbps/mm versus the 5 Tbps/mm of today's HBM.

Marvell's CHBM initiative involves partnerships with leading memory manufacturers, including Micron, Samsung, and SK hynix. This collaboration is crucial for ensuring the widespread availability and adoption of custom high-bandwidth memory solutions [2].
Several industry leaders, including executives from Micron, Samsung, SK hynix, and Moor Insights & Strategy, have expressed their support and excitement for Marvell's CHBM solution [2].

Marvell's vision extends beyond the current CHBM solution. The company is already looking ahead to bufferless memory with a bandwidth of 50 Tbps/mm, further pushing the boundaries of memory performance [1].

As AI continues to drive technological advancements, innovations like Marvell's CHBM are poised to play a crucial role in shaping the future of cloud computing and AI acceleration. The collaboration between Marvell and leading memory manufacturers sets the stage for a new era of customized, high-performance memory solutions tailored to the specific needs of AI workloads.
Summarized by Navi