SK hynix and SanDisk unite to standardize High Bandwidth Flash for AI inference servers


SK hynix and SanDisk have launched a joint effort to standardize High Bandwidth Flash (HBF), a next-generation flash memory architecture designed to fill the performance gap between HBM and SSDs. The new memory standard targets AI inference workloads and could reach commercialization by late 2027 or early 2028.

SK hynix SanDisk partnership targets new memory standard for AI infrastructure

SK hynix and SanDisk have kicked off a formal collaboration to establish High Bandwidth Flash (HBF) as an industry standard, addressing critical bottlenecks in AI inference servers. The two memory giants hosted an inaugural event at SanDisk's Milpitas, California headquarters to launch the HBF Spec Standardization Consortium under the Open Compute Project, the world's largest organization for sharing data center product designs [2]. This next-generation flash memory architecture aims to bridge the gap between HBM and SSDs, creating a critical middle layer for AI accelerators that demand both speed and capacity [1].

High Bandwidth Flash fills critical performance void in AI inference

The importance of NAND flash has surged as AI services increasingly shift toward inference, which requires fast and frequent access to large volumes of stored data to deliver real-time results [2]. Current server-grade NAND chips can reach speeds of 28 GB/s per unit, but this falls short of what AI inference demands [1]. HBF is positioned as a supporting layer between HBM DRAM, which delivers around 1.2 TB/s per stack with current-generation HBM3E, and traditional flash SSDs. Industry observers speculate that HBF chips could target speeds of at least 10 GB/s each, potentially achieving combined speeds in the hundreds of GB/s [1]. This architecture could function like an expanded on-SSD cache or a fast block storage device comparable to Optane, though applications and operating systems may need optimization to use it efficiently.
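A quick back-of-envelope sketch of the bandwidth hierarchy the article describes. The per-device figures (28 GB/s NAND, ~1.2 TB/s HBM3E, 10 GB/s speculated per HBF chip) come from the article; the HBF chip count of 16 is a hypothetical assumption chosen only to illustrate how "combined speeds in the hundreds of GB/s" could arise.

```python
# Bandwidth tiers as reported in the article (GB/s).
nand_ssd_gbps = 28        # server-grade NAND, per unit
hbm3e_gbps = 1200         # current-gen HBM3E, per stack (~1.2 TB/s)
hbf_chip_gbps = 10        # speculated minimum per HBF chip

# Hypothetical assumption: 16 HBF chips aggregated per accelerator.
hbf_chips = 16
hbf_aggregate_gbps = hbf_chip_gbps * hbf_chips

# HBF's aggregate bandwidth would sit between NAND SSDs and HBM.
assert nand_ssd_gbps < hbf_aggregate_gbps < hbm3e_gbps
print(f"HBF aggregate: {hbf_aggregate_gbps} GB/s "
      f"(NAND {nand_ssd_gbps} GB/s, HBM3E {hbm3e_gbps} GB/s)")
```

With these assumed numbers, 16 chips land at 160 GB/s, i.e. an order of magnitude above a single NAND unit but still well below an HBM3E stack, which is exactly the middle-layer role described above.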

Power-efficient storage solutions address datacenter energy challenges

Power efficiency stands as a core design principle for HBF standardization, reflecting urgent concerns about datacenter energy consumption. High-end SSDs such as the Micron 9650 consume 25 W at full load, a figure that scales alarmingly in exabyte-scale deployments with tens of thousands of drives [1]. By securing both the scalability and the power efficiency required for AI inference, HBF aims to help data centers reduce total cost of ownership [2]. SK hynix Chief Development Officer Ahn Hyun emphasized that "through HBF standardization, we will establish a cooperative framework and create new value by presenting an optimized memory architecture for customers and partners in the AI era" [2].
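To make the scale of that power concern concrete, here is a rough estimate of an exabyte-scale deployment. The 25 W per-drive figure is from the article; the 30 TB per-drive capacity is a hypothetical assumption for illustration only (the article does not state a capacity).

```python
# Rough power estimate for an exabyte of SSD storage.
drive_power_w = 25          # Micron 9650 at full load (from the article)
drive_capacity_tb = 30      # assumed per-drive capacity (hypothetical)
exabyte_tb = 1_000_000      # 1 EB = 1,000,000 TB (decimal units)

# Drives needed to hold one exabyte, rounded up.
drives_needed = -(-exabyte_tb // drive_capacity_tb)

# Aggregate power draw at full load, in kilowatts.
total_power_kw = drives_needed * drive_power_w / 1000
print(f"{drives_needed:,} drives -> {total_power_kw:,.0f} kW at full load")
```

Under these assumptions the deployment needs on the order of 33,000 drives drawing over 800 kW at full load, which matches the article's "tens of thousands of drives" framing and motivates a more power-efficient tier.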

Timeline and market implications for memory solutions

Industry officials anticipate that HBF could be commercialized in late 2027 or early 2028, while demand for comprehensive memory solutions that include both HBM and HBF is expected to fully expand around 2030 [2]. The companies are establishing a dedicated team to work on HBF standardization, with SK hynix stating it will "establish HBF as an industry standard and build a foundation for the entire AI ecosystem to grow together" [2]. The standard specifically targets AI inference servers, where user-generated outputs must be stored and that storage demand is projected to grow exponentially [1]. This positions HBF to give AI accelerators the balance of storage capacity and bandwidth that current memory architectures struggle to deliver efficiently.

TheOutpost.ai

© 2026 Triveous Technologies Private Limited