3 Sources
[1]
Nvidia Partnership Fuels SK Hynix's Next Big AI Storage Bet
SK Hynix is deepening its strategic relationship with Nvidia Corp. (NASDAQ:NVDA) as rapidly growing artificial intelligence workloads strain the limits of existing memory and storage technologies. Building on its role as a key supplier of high-bandwidth memory (HBM) to Nvidia, the South Korean chipmaker is now extending the collaboration into next-generation solid-state drives designed to accelerate data processing for large-scale AI systems. The companies are working together to develop advanced SSDs tailored for AI workloads, marking SK Hynix's push to apply its customer-centric design approach to NAND flash storage. The move reflects a broader industry shift as AI computing expands from training to inference, where storage speed, capacity, and energy efficiency become increasingly critical.

NAND And High-Bandwidth Flash Take Center Stage

Alongside Nvidia, SK Hynix is also collaborating with SanDisk to help establish standards for high-bandwidth flash (HBF), a layered NAND architecture aimed at meeting the intensifying demands of AI applications, according to a report by Chosun Biz on Tuesday. By combining higher speeds, greater capacity, and improved power efficiency, SK Hynix aims to overcome the limitations of current AI memory solutions. The company has divided its NAND development strategy into three core areas: boosting SSD performance through its partnership with Nvidia, advancing HBF technology with SanDisk, and developing large-capacity storage that balances performance with cost efficiency. These initiatives are focused on reducing data bottlenecks, increasing processing speeds, and supporting AI inference at scale, positioning SK Hynix as a growing force in the emerging AI storage market.

HBM4 Supplies Strengthen Nvidia Ties

Separately, Chosun Daily reported on Monday that SK Hynix has begun supplying its sixth-generation high-bandwidth memory, HBM4, to Nvidia. The company has delivered between 20,000 and 30,000 final sample units that reportedly meet all of Nvidia's specifications. The memory is now integrated into Nvidia's next-generation Rubin GPU for final customer testing. Once Rubin moves into mass production, SK Hynix is expected to significantly ramp up HBM4 shipments. The company has already finalized overall supply volumes and pricing terms with Nvidia, further solidifying the partnership.

NVDA Price Action: Nvidia shares were up 0.71% at $177.55 at the time of publication on Tuesday, according to Benzinga Pro data.
[2]
After Gobbling Up DRAM, NVIDIA & SK hynix Plan to Introduce an "AI SSD" With 10× Higher Performance, Ringing Alarms Over NAND Supply
The AI industry is now focusing on creating an inference-optimized AI SSD built on NAND chips, with SK hynix planning to introduce a solution by 2027.

NVIDIA & SK hynix Are Co-Developing an "AI SSD" That Could Bring Massive Benefits to Inference Workloads

As traditional AI workloads shift from training to inference, the tech stack needs reworking to ensure a low-latency, high-throughput environment. One sign of this is NVIDIA's decision to integrate general-purpose GDDR7 memory into the Rubin CPX GPU for prefill. Now, according to a report by Chosun Biz, a similar approach is expected with NAND chips as well. NVIDIA and SK hynix are reportedly developing a new SSD solution named "Storage Next" as an internal project, and it could revolutionize the NAND segment. SK hynix is said to be planning a prototype by the end of next year, and the AI SSD could scale up to a massive 100 million IOPS, significantly more than traditional enterprise SSDs offer.

When examining the need for such a solution, it becomes clear that the current AI workload structure requires continuous access to massive model parameters, which cannot be accommodated by HBM or general-purpose DRAM products alone. The AI SSD would effectively provide a pseudo-memory layer optimized for AI workloads. The Korean giant is reportedly collaborating with NVIDIA on the Storage Next project, with the primary goal of enhancing throughput and energy efficiency through advanced NAND and controller architectures.

While the goal certainly seems optimistic, NAND supply lines are already under massive pressure from storage demand by CSPs and AI giants, and if the AI SSD solution being discussed becomes mainstream, a DRAM-like squeeze could hit NAND flash chips as well. Facilitating AI workloads comes at a high cost, disrupting existing supply chains and giving neither consumers nor suppliers time to react to the changing supply-demand situation. DRAM contract pricing is worsening by the day, and by the looks of it, NAND might be next.
[3]
SK hynix, Nvidia Jointly Developing SSDs For AI Inference: Report
A proof of concept for the new SSDs, with promised performance of 100 million IOPS for AI inference workloads on Nvidia Rubin CPX GPUs, is under development, with a prototype slated to be available in the second half of 2026.

Korean memory and flash storage technology developer SK hynix and AI-focused GPU leader Nvidia are working on a joint project to develop next-generation SSDs focused on AI inference. Korea-based online publication ChosunBiz reported Tuesday that SK hynix Vice President Kim Cheon-seong said at the 2025 Artificial Intelligence Semiconductor Future Technology Conference, Korea's largest AI semiconductor event, that his company is developing a new SSD that he said will offer 10 times the performance of existing SSDs. That performance is slated to reach up to 100 million IOPS by 2027, ChosunBiz reported. According to tests in 2025 by Tom's Hardware, the fastest current SSD on the market, the NN4ME 2-TB model from Japan-based Nextorage, had a random performance of 50,915 IOPS.

The planned SSDs are apparently being developed under the "Storage Next" name for Nvidia and "AI-NP" (AI NAND Performance) for SK hynix. A proof of concept of the new technology is currently being developed, with a prototype slated to be available by the end of 2026, ChosunBiz reported. Neither SK hynix nor Nvidia responded to a CRN request for further information by publication time.

SSD performance is a critical issue when it comes to AI inference. Data storage media can be a bottleneck during AI inference, with low performance throttling the amount of data that can be fed to GPUs and leaving them idle for too long waiting for data. For Nvidia, that performance has become a priority. Nvidia in September unveiled its Rubin CPX AI GPU. The Rubin CPX, slated to debut in the second half of 2026, features 128 GB of GDDR7 memory and is targeted at high-value inference workloads with the ability to handle million-token coding and generative video applications.
Nvidia and SK Hynix are jointly developing an AI SSD called Storage Next, promising 100 million IOPS by 2027—ten times faster than current enterprise drives. The collaboration aims to eliminate data storage bottlenecks that throttle AI inference workloads, as the industry shifts focus from training to inference where storage speed becomes critical.
Nvidia and SK Hynix are collaborating on an ambitious AI SSD project that could fundamentally reshape how artificial intelligence systems handle data during inference operations. The South Korean chipmaker announced at the 2025 Artificial Intelligence Semiconductor Future Technology Conference that it is developing advanced solid-state drives capable of reaching 100 million IOPS by 2027, representing a tenfold performance increase over current enterprise storage solutions [2][3]. SK Hynix Vice President Kim Cheon-seong revealed the initiative, internally dubbed "Storage Next" for Nvidia and "AI-NP" (AI NAND Performance) for SK Hynix, with a prototype expected by the end of 2026 [3]. This development extends the companies' existing partnership beyond high-bandwidth memory into NAND flash storage, addressing a critical infrastructure gap as AI computing evolves.
The Storage Next project directly tackles one of the most pressing challenges facing AI inference: data storage bottlenecks that leave powerful GPUs idle while waiting for information. Current AI workload structures require continuous access to massive model parameters that cannot be accommodated by HBM or general-purpose DRAM products alone [2]. The AI SSD would create a pseudo-memory layer specifically optimized for AI inference workloads, enabling the low-latency, high-throughput environment essential for applications like million-token coding and generative video [3]. Storage media performance has become a critical bottleneck during AI inference, with low-performing drives throttling the amount of data fed to GPUs and significantly impacting overall system efficiency. By comparison, the fastest current SSD on the market, Nextorage's NN4ME 2-TB model, achieves approximately 50,915 IOPS, making the 100 million IOPS target roughly 2,000 times faster [3].
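To put those figures in perspective, here is a back-of-the-envelope sketch in Python. It converts the reported IOPS numbers into effective random-read bandwidth and computes the speedup ratios cited above. The 4 KiB transfer size is an assumption (none of the reports state an I/O size), and the "existing enterprise SSD" figure is simply what the articles' "10x" claim implies, not a measured value.

```python
# Back-of-the-envelope look at the IOPS figures cited above.
# Assumption: 4 KiB random reads; the sources do not state an I/O size,
# so the bandwidth numbers here are illustrative only.

IO_SIZE_BYTES = 4 * 1024  # assumed transfer size per random read

figures = {
    "Nextorage NN4ME (Tom's Hardware, 2025)": 50_915,        # IOPS, per CRN
    "Existing SSDs implied by the '10x' claim": 10_000_000,  # 100M / 10, inferred
    "Storage Next / AI-NP target (2027)": 100_000_000,       # IOPS, per ChosunBiz
}

baseline = figures["Nextorage NN4ME (Tom's Hardware, 2025)"]
for name, iops in figures.items():
    bandwidth_gbs = iops * IO_SIZE_BYTES / 1e9  # random-read throughput in GB/s
    print(f"{name}: {iops:,} IOPS "
          f"~= {bandwidth_gbs:,.1f} GB/s at 4 KiB, "
          f"{iops / baseline:,.0f}x the tested drive")

# 100M IOPS x 4 KiB ~= 410 GB/s of random reads, which is why the articles
# describe such a drive as a "pseudo-memory layer": at that rate, storage
# starts to behave like a slow memory tier rather than a classic block device.
```

Run as-is, the sketch shows the 100 million IOPS target landing at roughly 1,964 times the tested consumer drive, consistent with the "roughly 2,000 times" comparison above, while the "10x over existing SSDs" framing refers to high-end enterprise drives rather than the consumer baseline.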
SK Hynix has structured its NAND development strategy into three core areas to address the intensifying demands of AI applications. Beyond the Storage Next collaboration with Nvidia, the company is working with SanDisk to establish standards for high-bandwidth flash (HBF), a layered NAND architecture designed to combine higher speeds, greater capacity, and improved power efficiency [1]. The third pillar focuses on developing large-capacity storage that balances performance with cost efficiency. These initiatives aim to reduce data processing bottlenecks, increase speeds, and support AI inference at scale, positioning SK Hynix as a growing force in the emerging AI storage market [1]. The company's customer-centric design approach, previously successful in high-bandwidth memory, is now being applied to NAND flash storage as the industry shifts from training to inference, where storage speed, capacity, and energy efficiency become increasingly critical [1].
The storage collaboration builds on an already robust relationship between the two companies. SK Hynix has begun supplying its sixth-generation high-bandwidth memory, HBM4, to Nvidia, delivering between 20,000 and 30,000 final sample units that meet all specifications [1]. This memory is now integrated into the Nvidia Rubin GPU for final customer testing, with mass production expected to significantly ramp up HBM4 shipments [1]. The Rubin CPX, unveiled in September and slated to debut in the second half of 2026, features 128 GB of GDDR7 memory and targets high-value inference workloads [3]. The AI SSD development aligns closely with this timeline, as the proof of concept is currently under development with a prototype expected by late 2026 [3]. SK Hynix has already finalized overall supply volumes and pricing terms with Nvidia for HBM4, further solidifying the partnership across multiple product lines [1].

While the technological promise appears substantial, industry observers are raising concerns about potential supply chain disruptions. The NAND supply chain is already under heavy pressure from demand by cloud service providers and AI giants, and widespread adoption of AI storage solutions optimized for inference could create a situation similar to the current DRAM shortage [2]. DRAM contract pricing has worsened steadily, and observers warn that NAND might follow the same trajectory. The challenge lies in how facilitating AI workloads disrupts existing supply chains, giving neither consumers nor suppliers adequate time to react to rapidly changing supply-demand dynamics [2]. As AI infrastructure requirements continue to expand, the industry must balance innovation with supply stability to prevent bottlenecks that could slow AI deployment across enterprises and research institutions.