8 Sources
[1]
Samsung Nears Nvidia's Approval for Key HBM4 AI Memory Chips
Samsung trails SK Hynix and Micron Technology Inc. at the forefront of AI memory, but investor hopes are rising that Samsung may be able to join its rivals in supplying components for Nvidia's upcoming flagship Rubin processors. Samsung Electronics Co. is getting close to securing certification from Nvidia Corp. for the latest version of its AI memory chip, called HBM4, making progress in narrowing the gap with rival SK Hynix Inc. The Suwon, South Korea-based company has entered the final qualification phase with Nvidia, after supplying its initial samples to the US chipmaker in September, according to people familiar with the matter. Nvidia uses vast quantities of high-bandwidth memory, or HBM, to enable its AI accelerators. Samsung is preparing for mass production of HBM4 in February, said the people, who asked not to be named as the information is private. The company will be ready to ship soon, though exact timing is not yet clear. Its shares gained as much as 3.2% in Seoul on Monday before paring gains, while SK Hynix's stock slid by about the same amount. A Samsung representative declined to comment. Samsung trails SK Hynix and Micron Technology Inc. at the forefront of AI memory, but all three companies have seen their shares surge dramatically in recent weeks as the AI rush has produced a shortage of memory for the broader electronics industry. Between them, the three leading memory chip manufacturers have gained roughly $900 billion in market value since the start of September. Investor hopes are now rising that Samsung may be able to join its rivals in supplying components for Nvidia's upcoming flagship Rubin processors. So far, Nvidia has leaned most heavily on SK Hynix for the most sophisticated memory chips it pairs with its top-of-the-line AI accelerators. The Korea Economic Daily reported earlier that Samsung is slated to begin HBM4 shipments to Nvidia and Advanced Micro Devices Inc. next month.
Both Samsung and SK Hynix are slated to hold their earnings calls on Thursday, and they are expected to discuss progress on their HBM4 chips.
[2]
HBM4 adoption shows a shift toward system-level AI performance priorities
Samsung regains its position with early HBM4 placement in Nvidia systems
* Samsung HBM4 is already integrated into Nvidia's Rubin demonstration platforms
* Production synchronization reduces scheduling risk for large AI accelerator deployments
* Memory bandwidth is becoming a primary constraint for next-generation AI systems

Samsung Electronics and Nvidia are reportedly working closely to integrate Samsung's next-generation HBM4 memory modules into Nvidia's Vera Rubin AI accelerators. Reports say the collaboration follows synchronized production timelines, with Samsung completing verification for both Nvidia and AMD and preparing for mass shipments in February 2026. These HBM4 modules are set for immediate use in Rubin performance demonstrations ahead of the official GTC 2026 unveiling.

Technical integration and joint innovation
Samsung's HBM4 operates at 11.7Gb/s, exceeding Nvidia's stated requirements and supporting the sustained memory bandwidth needed for advanced AI workloads. The modules incorporate a logic base die produced using Samsung's 4nm process, which gives it greater control over manufacturing and delivery schedules compared to suppliers that rely on external foundries. Nvidia has integrated the memory into Rubin with close attention to interface width and bandwidth efficiency, which allows the accelerators to support large-scale parallel computation. Beyond component compatibility, the partnership emphasizes system-level integration, as Samsung and Nvidia are coordinating memory supply with chip production, which allows HBM4 shipments to be adjusted in line with Rubin manufacturing schedules. This approach reduces timing uncertainty and contrasts with competing supply chains that depend on third-party fabrication and less flexible logistics. Within Rubin-based servers, HBM4 is paired with high-speed SSD storage to handle large datasets and limit data movement bottlenecks.
This configuration reflects a broader focus on end-to-end performance, rather than optimizing individual components in isolation. Memory bandwidth, storage throughput, and accelerator design function as interdependent elements of the overall system. The collaboration also signals a shift in Samsung's position within the high-bandwidth memory market. HBM4 is now set for early adoption in Nvidia's Rubin systems, following earlier challenges in securing major AI customers. Reports indicate that Samsung's modules are first in line for Rubin deployments, marking a reversal from previous hesitations around its HBM offerings. The collaboration reflects growing attention on memory performance as a key enabler for next-generation AI tools and data-intensive applications. Demonstrations planned for Nvidia GTC 2026 in March are expected to pair Rubin accelerators with HBM4 memory in live system tests. The focus will remain on integrated performance rather than standalone specifications. Early customer shipments are expected from August. This timing suggests close alignment between memory production and accelerator rollout as AI infrastructure demand continues to rise. Via WCCF Tech
[3]
SK hynix makes 'significant' progress in NVIDIA's extensive HBM4 tests, close to mass supply
TL;DR: SK hynix has made significant progress in NVIDIA's HBM4 qualification tests, delivering optimized 10Gbps memory chips nearing mass production for Rubin AI GPUs. Despite a later start than Samsung, SK hynix maintains NVIDIA's trust and aims to secure a dominant market share in next-generation HBM4 memory supply. SK hynix has reportedly made "significant progress" in NVIDIA's extensive HBM4 qualification tests; the chips will end up inside Rubin AI GPUs coming soon. In a January 30 report from South Korean media outlet Hankyung, picked up by analyst @Jukan on X, industry sources say SK hynix achieved "meaningful results" in NVIDIA's HBM4 System-in-Package (SiP) testing earlier this month. SK hynix started the Customer Sample (CS) certification process with NVIDIA in October 2025, during which defects were found in some circuits. SK hynix modified the circuits and adjusted the process, delivering improved HBM4 memory chips to NVIDIA earlier this month. It's been confirmed that these optimized products are very close to being ready for mass production: the new HBM4 memory chips are good to go at 10Gbps under general environments and hit 9-10Gbps under NVIDIA's rigorous test conditions for temperature, humidity, and impact. SK hynix's new HBM4 memory chips are ready later than those of its South Korean DRAM rival -- Samsung -- but SK hynix will continue to send NVIDIA its latest HBM4 prototype chips over the next few months according to NVIDIA's final specifications for Rubin. We should expect full-scale mass production shortly after, ready to better compete with Samsung in HBM4. An industry official said: "SK hynix still enjoys NVIDIA's trust, and I understand that NVIDIA has allocated a large volume of HBM4 supply to them. SK hynix likely met the requirements in various evaluation categories beyond just bandwidth".
SK hynix held its Q4 2025 earnings call this week, with the company explaining: "For HBM4, just as with HBM3 (4th generation) and HBM3E (5th generation), we aim for an overwhelming market share".
[4]
Samsung HBM4 to debut alongside NVIDIA's Rubin AI platform at GTC 2026
Samsung Electronics' next-generation high-bandwidth memory HBM4 will debut alongside NVIDIA's AI accelerator Rubin at the GTC 2026 conference in March, following final quality tests passed with NVIDIA and AMD. An exclusive article from biz.sbs.co.kr states that Samsung has cleared the final quality evaluations for HBM4 from both NVIDIA and AMD. Mass production of HBM4 begins next month. Units mass-produced and shipped from Samsung in February will reach NVIDIA for use in demonstrating Rubin's performance at the March GTC event. HBM4 from Samsung operates at 11.7 gigabits per second, the highest specification in the industry. This exceeds the 10 gigabits per second required by NVIDIA and AMD. Last year, the memory passed verification without any redesign, even after customers requested performance enhancements. This outcome demonstrates the technological completeness of Samsung's HBM4 design. The semiconductor industry reported these developments on the 25th. Evaluations within the sector indicate that Samsung's memory technology has stabilized with this HBM4 shipment. Previous technological gaps with competitors, evident during the HBM3 and HBM3E phases, have been addressed in HBM4. Samsung now enters a recovery phase for its prior product leadership position. Full-scale supply of HBM4 in large volumes is projected for around June. HBM4 integrates directly into AI accelerators such as NVIDIA's Rubin, linking its availability to customers' schedules for final product mass production. Major customers currently produce next-generation chips through foundries. Consequently, Samsung adjusts HBM4 shipment volumes to align with these customers' actual mass production timelines and specified quantities.
[5]
Samsung should be first with HBM4 powering NVIDIA's new Vera Rubin AI chips, passed all tests
TL;DR: Samsung will unveil its next-generation HBM4 memory at NVIDIA GTC 2026, featuring speeds up to 11Gbps, surpassing JEDEC standards. Integrated into NVIDIA's Vera Rubin AI platform, Samsung's advanced HBM4 offers high bandwidth and energy efficiency, accelerating AI development and strengthening semiconductor innovation globally. Samsung will debut its next-gen HBM4 memory at NVIDIA GTC 2026 in March, reportedly passing all of NVIDIA's strict verification stages, and will arrive on NVIDIA's next-gen Vera Rubin AI platform. Samsung has spent the last couple of years struggling with its HBM memory division, leaving its South Korean rival -- SK hynix -- to enjoy providing NVIDIA with all of its HBM3 and HBM3E needs. Samsung completely overhauled its HBM and semiconductor division in the last few years, with the fruits of that labor now showing. NVIDIA will reportedly source its first allotments of HBM4 memory for Vera Rubin from Samsung, as Samsung's new HBM4 memory is said to lead the HBM4 offerings from its rivals SK hynix and US-based Micron. Samsung's new HBM4 memory is rated for above 11Gbps, much higher than the JEDEC standard for HBM4, and was pushed to those higher pin speeds at NVIDIA's direct request. Samsung and NVIDIA are working together on HBM4, with Samsung explaining in its press release that, with incredibly high bandwidth and energy efficiency, Samsung's advanced HBM solutions are expected to help accelerate the development of future AI applications, and form a critical foundation for manufacturing infrastructure driven by these technologies. The company is using its 6th-generation 10nm-class DRAM and a 4nm logic base die, with Samsung's upcoming HBM4 processing speeds reaching up to 11Gbps, far exceeding the JEDEC standard for HBM4 at 8Gbps.
Samsung will also continue to deliver next-generation memory solutions, including HBM, GDDR, and SOCAMM memory, as well as foundry services, driving innovation and scalability across the global AI value chain.
[6]
Nvidia Supply Deal In Crosshairs, Samsung Set To Start HBM4 Chip Production Next Month As Korean Giant Takes On SK Hynix: Report - Advanced Micro Devices (NASDAQ:AMD), NVIDIA (NASDAQ:NVDA)
Samsung Electronics Co. (OTC:SSNLF) is reportedly preparing to begin production of its next-generation high-bandwidth memory chips as it seeks to supply Nvidia Corp (NASDAQ:NVDA) and narrow the gap with SK Hynix. Samsung Pushes Into Next-Gen AI Memory Samsung plans to start manufacturing HBM4 as early as next month and is expected to supply it to Nvidia, reported Reuters, citing a person familiar with the matter. The move marks a key step in Samsung's efforts to regain momentum in the high-bandwidth memory market after production delays weighed on its earnings and share price last year. HBM chips are essential components for advanced AI accelerators, an area where demand has surged alongside the rapid expansion of generative AI. While the source declined to disclose shipment volumes or contract specifics, South Korea's Korea Economic Daily reported that Samsung has passed HBM4 qualification tests for both Nvidia and Advanced Micro Devices, Inc. (NASDAQ:AMD). The company is preparing to begin shipments to Nvidia next month, the report said. SK Hynix Defends Its Lead In HBM Samsung's main rival, SK Hynix, currently dominates the HBM market and has been the primary supplier of advanced memory chips for Nvidia's AI processors. The company said in October that it had completed supply negotiations with major customers for next year. SK Hynix is also expanding production capacity. An executive told the publication earlier this month that the company will begin deploying silicon wafers into its new M15X fabrication plant in Cheongju, South Korea, next month to produce HBM chips. However, it remains unclear whether HBM4 will be included in the initial output. Nvidia's Next-Gen AI Chips Raise Stakes The timing is critical as Nvidia prepares to launch its next-generation AI platform, Vera Rubin, later this year. Nvidia CEO Jensen Huang said earlier this month that the platform is already in full production and will be paired with HBM4 memory. 
Samsung, SK Hynix Earnings In Focus Amid Memory Chip Price Hikes
Both Samsung and SK Hynix are scheduled to report fourth-quarter earnings on Thursday. Samsung has raised prices on its major memory chips by as much as 60% since September 2025. The company also acted swiftly this week after rumors circulated of an unprecedented 80% price increase across its entire memory product lineup. Taiwan's United Daily News reported that Samsung and several memory module manufacturing partners have stated that the 80% figure is completely false.
[7]
Samsung Set to Be Among the First to Feature HBM4 in NVIDIA's Vera Rubin AI Lineup, Having Reportedly Passed All Verification Stages
Samsung's HBM4 modules are expected to be featured in NVIDIA's Vera Rubin AI lineup as soon as June, as the company sees a massive breakthrough with its memory business.
With Industry's Fastest Pin Speeds, Samsung's HBM4 Modules Have Taken a Lead Over Counterparts
HBM4 is known as a 'revolutionary' offering from memory giants, thanks to the innovations it brings to the module. We'll discuss HBM4 in detail ahead, but Samsung has actually managed to 'turn the tables' with its HBM business, as the company was struggling to acquire customers a few quarters ago and even faced rejection from NVIDIA. However, according to recent reports from Korean media, Samsung's HBM4 modules are now first in line for adoption by NVIDIA for its Vera Rubin AI lineup, with supply expected as soon as next month. There are several reasons why Samsung's HBM4 process stands out from the rest, but one of the biggest differentiators is the Korean giant's offer of the highest pin speeds. Samsung's HBM4 is rated at 11 Gbps+, which is much higher than the JEDEC standard, mainly because it was a key requirement from NVIDIA. With agentic AI being the next big avenue, Vera Rubin has seen a massive upgrade in memory specifications, and this is mainly driven by the integration of Samsung's HBM4 modules, which feature superior speeds and interface width. Another interesting point with Samsung's HBM4 is that the firm employs a logic base die (4nm) sourced from the company's internal foundry, which gives it room to guarantee NVIDIA supply with adequate delivery timings relative to SK hynix and Micron, which plan to source their logic dies from TSMC. Considering how quickly NVIDIA has brought Vera Rubin into "full production", it is important for suppliers to keep up the pace, and Samsung has apparently done just that.
It is reported that customer shipments around Vera Rubin start from August, and that Rubin AI chips will be displayed in full at GTC 2026, where Samsung's HBM4 module will also see the spotlight.
[8]
Samsung to start HBM4 deliveries to Nvidia next month, reports show By Investing.com
Investing.com-- Samsung Electronics (KS:005930) is set to begin shipping its next-generation HBM4 high-bandwidth memory chips to Nvidia (NASDAQ:NVDA) in February, ahead of several rivals, South Korean media reported on Monday, citing industry sources. The Korea Economic Daily said Samsung has passed qualification tests for its sixth-generation HBM4 with Nvidia and Advanced Micro Devices and will start shipments next month, marking an early foray into the supply of advanced AI memory. Samsung, long trying to catch up with market leader SK Hynix (KS:000660) in high-performance memory, plans production of HBM4 from next month, reports showed. HBM4 chips are critical components for high-performance artificial intelligence accelerators, offering significant gains in data bandwidth and efficiency. Seoul-listed Samsung shares traded flat at the time of writing, after rising as much as 2.8% to 156,400 won earlier in the day, just below record highs of 157,000 won.
Samsung is close to securing Nvidia certification for its HBM4 AI memory chips, entering final qualification phases after supplying initial samples in September. The company prepares for mass production in February, aiming to debut alongside Nvidia's Rubin AI platform at GTC 2026. Meanwhile, SK Hynix reports significant progress in its own HBM4 tests, setting up an intense competition between South Korean rivals.
Samsung Electronics is nearing a critical milestone in its effort to reclaim leadership in AI memory, entering the final qualification phase with Nvidia for its HBM4 chips after supplying initial samples in September [1]. The Suwon, South Korea-based company is preparing for mass production in February 2026, with shipments expected to follow shortly after, though exact timing remains undisclosed [1]. Samsung's HBM4 operates at 11.7Gb/s, significantly exceeding Nvidia's stated requirement of 10Gb/s and surpassing the JEDEC standard of 8Gb/s [4][5]. The high-bandwidth memory passed verification without requiring redesign, even after customers requested performance enhancements, demonstrating the technological completeness of Samsung's design [4].
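To put these pin speeds in context, per-stack bandwidth is just pin speed multiplied by interface width. The sketch below assumes the JEDEC HBM4 interface width of 2,048 bits per stack (the 8, 10, and 11.7 Gb/s figures come from the sources above; the helper name is illustrative, not from any vendor API):

```python
def hbm4_stack_bandwidth_gbps(pin_speed_gbps: float, width_bits: int = 2048) -> float:
    """Peak per-stack bandwidth in GB/s: (pin speed in Gb/s * bus width in bits) / 8 bits per byte."""
    return pin_speed_gbps * width_bits / 8

# JEDEC HBM4 baseline of 8 Gb/s per pin -> 2,048 GB/s (~2 TB/s) per stack
print(hbm4_stack_bandwidth_gbps(8.0))
# Nvidia's reported 10 Gb/s requirement -> 2,560 GB/s per stack
print(hbm4_stack_bandwidth_gbps(10.0))
# Samsung's reported 11.7 Gb/s -> ~2,995 GB/s (~3 TB/s) per stack
print(hbm4_stack_bandwidth_gbps(11.7))
```

At that interface width, the jump from the 8Gb/s baseline to 11.7Gb/s pins is worth roughly an extra terabyte per second of peak bandwidth per stack, which is why pin speed is the headline figure in these qualification reports.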
Source: TechRadar
Samsung shares gained as much as 3.2% in Seoul following reports of the progress, while SK Hynix stock declined by approximately the same amount [1]. The company has reportedly cleared final quality evaluations for HBM4 from both Nvidia and AMD, positioning itself to supply components for Nvidia's upcoming flagship Rubin processors [4].

Samsung's HBM4 is already integrated into Nvidia's Rubin demonstration platforms, with units mass-produced in February set to reach Nvidia for performance demonstrations at the March GTC 2026 conference [2][4]. The collaboration emphasizes system-level AI performance rather than optimizing individual components in isolation, with Samsung and Nvidia coordinating memory supply with chip production to reduce timing uncertainty [2].

The modules incorporate a logic base die produced using Samsung's 4nm process, giving the company greater control over manufacturing and delivery schedules compared to suppliers relying on external foundries [2]. Within Rubin-based servers, HBM4 is paired with high-speed SSD storage to handle large datasets and limit data movement bottlenecks, reflecting attention on memory bandwidth as a primary constraint for advanced AI accelerators [2]. Full-scale supply of HBM4 in large volumes is projected for around June, with early customer shipments expected from August [2][4].

While Samsung advances toward certification, SK Hynix has also made significant progress in Nvidia's extensive HBM4 qualification tests for Rubin AI GPUs [3]. Industry sources report that SK Hynix achieved meaningful results in Nvidia's HBM4 System-in-Package testing earlier this month, after starting the Customer Sample certification process in October 2025 [3]. When defects were found in some circuits, SK Hynix made modifications and delivered improved HBM4 memory chips to Nvidia, and these optimized products are now very close to being ready for mass production [3].
Source: TweakTown
The new HBM4 memory chips from SK Hynix achieve 10Gbps under general environments and hit 9-10Gbps under Nvidia's rigorous test conditions for temperature, humidity, and impact [3]. An industry official stated that SK Hynix still enjoys Nvidia's trust, with Nvidia allocating a large volume of HBM4 supply to the company, noting that SK Hynix likely met requirements in various evaluation categories beyond just bandwidth [3]. During its Q4 2025 earnings call, SK Hynix stated it aims for an overwhelming market share in HBM4, just as with previous HBM3 and HBM3E generations [3].

Samsung trails SK Hynix and Micron Technology at the forefront of AI memory, but all three memory chip manufacturers have seen shares surge dramatically in recent weeks as the AI rush produces a shortage of memory for the broader electronics industry [1]. Between them, the three leading companies have gained roughly $900 billion in market value since the start of September [1]. Nvidia has leaned most heavily on SK Hynix for the most sophisticated memory chips it pairs with its top-of-the-line AI accelerators, making Samsung's potential certification a notable shift in the supply chain [1].
Source: TweakTown
The semiconductor industry indicates that Samsung's memory technology has stabilized with this HBM4 shipment, addressing previous technological gaps with competitors evident during the HBM3 and HBM3E phases [4]. Samsung completely overhauled its HBM and semiconductor division in recent years, with results now materializing [5]. The company uses 6th-generation 10nm-class DRAM and a 4nm logic base die in its HBM4 design, with plans to continue delivering next-generation memory solutions including HBM, GDDR, and SOCAMM memory, as well as foundry services [5]. HBM4 integrates directly into AI accelerators such as Nvidia's Rubin, linking its availability to customers' schedules for next-generation chip mass production, with Samsung adjusting shipment volumes to align with these timelines and specified quantities [4]. With incredibly high bandwidth and energy efficiency, Samsung's advanced HBM solutions are expected to help accelerate the development of future AI applications and form a critical foundation for manufacturing infrastructure driven by these technologies [5].

Summarized by Navi