Nvidia and SK Hynix develop AI SSD targeting 100 million IOPS to solve inference bottlenecks


Nvidia and SK Hynix are jointly developing an AI SSD called Storage Next, promising 100 million IOPS by 2027, roughly ten times faster than current enterprise drives. The collaboration aims to eliminate the data storage bottlenecks that throttle AI inference workloads as the industry shifts its focus from training to inference, where storage speed becomes critical.

Nvidia-SK Hynix Partnership Targets Revolutionary AI SSD Performance

Nvidia and SK Hynix are collaborating on an ambitious AI SSD project that could fundamentally reshape how artificial intelligence systems handle data during inference operations. The South Korean chipmaker announced at the 2025 Artificial Intelligence Semiconductor Future Technology Conference that it's developing advanced solid-state drives capable of reaching 100 million IOPS by 2027, representing a tenfold performance increase over current enterprise storage solutions [2][3]. SK Hynix Vice President Kim Cheon-seong revealed the initiative, internally dubbed "Storage Next" at Nvidia and "AI-NP" (AI NAND Performance) at SK Hynix, with a prototype expected by the end of 2026 [3]. This development extends the companies' existing partnership beyond high-bandwidth memory into NAND flash storage, addressing a critical infrastructure gap as AI computing evolves.

Source: Wccftech

Eliminating Data Storage Bottlenecks in AI Inference Workloads

The Storage Next project directly tackles one of the most pressing challenges facing AI inference: data storage bottlenecks that leave powerful GPUs idle while waiting for information. Current AI workload structures require continuous access to massive model parameters that cannot be accommodated by HBM or general-purpose DRAM alone [2]. The AI SSD would create a pseudo-memory layer specifically optimized for AI inference workloads, enabling the low-latency, high-throughput environments essential for applications like million-token coding and generative video [3]. Storage media performance has become a critical bottleneck during AI inference, with low-performing drives throttling the amount of data fed to GPUs and significantly impacting overall system efficiency. By comparison, the fastest current SSD on the market, Nextorage's NN4ME 2-TB model, achieves approximately 50,915 IOPS, making the 100 million IOPS target roughly 2,000 times faster [3].

Source: CRN
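
To put these figures in perspective, random-read IOPS can be converted into effective bandwidth with a back-of-envelope calculation. The sketch below is illustrative only: the 4 KB block size is an assumed benchmark unit (neither company has disclosed Storage Next I/O sizes), and the drive figures are the ones quoted above.

```python
# Back-of-envelope conversion from random-read IOPS to the bandwidth a
# drive can feed a GPU. The 4 KB block size is an assumption, not a
# disclosed Storage Next specification.

BLOCK_SIZE_BYTES = 4 * 1024  # assumed 4 KB per random read


def iops_to_bandwidth_gbps(iops: float, block_size: int = BLOCK_SIZE_BYTES) -> float:
    """Convert random-read IOPS into sustained bandwidth in GB/s."""
    return iops * block_size / 1e9


current_drive_iops = 50_915      # Nextorage NN4ME figure cited above
storage_next_iops = 100_000_000  # 2027 target

print(f"Current drive: {iops_to_bandwidth_gbps(current_drive_iops):.2f} GB/s")
print(f"Storage Next:  {iops_to_bandwidth_gbps(storage_next_iops):.1f} GB/s")
print(f"Speedup:       {storage_next_iops / current_drive_iops:,.0f}x")
```

At that assumed block size, the 100 million IOPS target corresponds to roughly 410 GB/s of sustained random-read bandwidth, versus about 0.2 GB/s for the cited drive, which reproduces the article's roughly 2,000x ratio.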

Comprehensive NAND Strategy Includes High-Bandwidth Flash Development

SK Hynix has structured its NAND development strategy into three core areas to address the intensifying demands of AI applications. Beyond the Storage Next collaboration with Nvidia, the company is working with SanDisk to establish standards for high-bandwidth flash (HBF), a layered NAND architecture designed to combine higher speeds, greater capacity, and improved power efficiency [1]. The third pillar focuses on developing large-capacity storage that balances performance with cost efficiency. These initiatives aim to reduce data processing bottlenecks, increase speeds, and support AI inference at scale, positioning SK Hynix as a growing force in the emerging AI storage market [1]. The company's customer-centric design approach, previously successful in high-bandwidth memory, is now being applied to NAND flash storage as the industry shifts from training to inference, where storage speed, capacity, and energy efficiency become increasingly critical [1].
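
To illustrate why a fast flash tier matters once model parameters outgrow HBM and DRAM, the toy model below walks a parameter fetch down a three-level hierarchy. Everything in it is hypothetical: the tier capacities and latencies are generic order-of-magnitude placeholders, not published HBM, DRAM, HBF, or Storage Next specifications.

```python
# Toy model of a tiered parameter store: each fetch is served by the fastest
# tier whose capacity reaches the requested offset. All capacities and
# latencies are hypothetical order-of-magnitude values for illustration.

from dataclasses import dataclass


@dataclass
class Tier:
    name: str
    capacity_gb: int   # cumulative reach of this tier in the hierarchy
    latency_us: float  # approximate access latency per fetch


# Hypothetical hierarchy: HBM on-package, DRAM behind it, and an
# AI-optimized SSD acting as the "pseudo-memory" layer described above.
TIERS = [
    Tier("HBM", 144, 0.1),
    Tier("DRAM", 1_024, 0.2),
    Tier("AI SSD", 16_384, 10.0),
]


def fetch_latency_us(param_offset_gb: float) -> float:
    """Latency of the first tier large enough to reach the given offset."""
    for tier in TIERS:
        if param_offset_gb < tier.capacity_gb:
            return tier.latency_us
    raise ValueError("offset beyond total hierarchy capacity")


# Working sets that spill past HBM and DRAM pay the SSD-tier latency on
# every miss, which is why raising flash IOPS matters for inference.
for offset_gb in (50, 800, 4_000):
    print(f"offset {offset_gb:5d} GB -> {fetch_latency_us(offset_gb):5.1f} us")
```

In this toy setup, any fetch beyond the first two tiers pays the flash-tier latency, so cutting that latency (or, equivalently, raising IOPS) directly lowers the average cost of serving a large parameter set.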

HBM4 Supplies and Nvidia Rubin GPU Integration Strengthen Ties

The storage collaboration builds on an already robust relationship between the two companies. SK Hynix has begun supplying its sixth-generation high-bandwidth memory, HBM4, to Nvidia, delivering between 20,000 and 30,000 final sample units that meet all specifications [1]. This memory is now integrated into the Nvidia Rubin GPU for final customer testing, with mass production expected to significantly ramp up HBM4 shipments [1]. The Rubin CPX, unveiled in September and slated to debut in the second half of 2026, features 128 GB of GDDR7 memory and targets high-value inference workloads [3]. The AI SSD development aligns with this timeline, as the proof of concept is currently under development with a prototype expected by late 2026 [3]. SK Hynix has already finalized overall supply volumes and pricing terms with Nvidia for HBM4, further solidifying the partnership across multiple product lines [1].

NAND Supply Chain Concerns Echo DRAM Market Pressures

While the technological promise appears substantial, industry observers are raising concerns about potential supply chain disruptions. The NAND supply chain is already under heavy pressure from cloud service providers and AI giants, and widespread adoption of AI memory solutions optimized for inference could create a situation similar to the current DRAM shortage [2]. DRAM contract pricing has worsened steadily, and experts warn that NAND might follow the same trajectory. The challenge lies in how accommodating AI workloads disrupts existing supply chains, giving neither consumers nor suppliers adequate time to react to rapidly shifting supply-demand dynamics [2]. As AI infrastructure requirements continue to expand, the industry must balance innovation with supply stability to prevent bottlenecks that could slow AI deployment across enterprises and research institutions.
