3 Sources
[1]
Micron teams up with TSMC to deliver HBM4E, targeted for 2027 -- collaboration could enable further customization
Micron has confirmed it will partner with TSMC to manufacture the base logic die for its next-generation HBM4E memory, with production targeted for 2027. The announcement, made during the company's fiscal Q4 earnings call on September 23, adds yet more detail to an already busy roadmap. Micron is shipping early HBM4 samples at speeds above 11 Gbps per pin, providing up to 2.8 TB/s of bandwidth, and it has already locked down most of its 2026 HBM3E supply agreements. But the big takeaway is that Micron will hand TSMC the task of fabricating both standard and custom HBM4E logic dies, opening the door to tailored memory solutions for AI workloads.

The decision also places Micron squarely in the middle of the next wave of AI system design, aligning with previous reporting on HBM roadmaps across Micron, SK hynix, and Samsung, and with earlier analysis of how Micron views HBM4E as a platform for customization. The industry is already familiar with the HBM cadence: HBM3E today, HBM4 in 2025-2026, and HBM4E around 2027, with each new generation bringing higher per-pin data rates and taller stacks. SK hynix has already confirmed 12-Hi HBM4 with a full 2048-bit interface running at 10 GT/s, while Samsung is plotting similar capacities with its own logic processes. Micron is shipping its own HBM4 stacks and claims more than 20% better efficiency than HBM3E.

HBM4E is the extension of that roadmap, but Micron is treating it as something more. The company highlighted that the base die will be fabricated at TSMC, not in-house, and that custom logic-die designs will be offered to customers willing to pay a premium. By opening the base die to customization, Micron is effectively turning HBM into a semi-configurable subsystem. Instead of a one-size-fits-all interface layer, GPU vendors could request additional SRAM, dedicated compression engines, or tuned signal paths. That approach mirrors what we have seen from SK hynix, which has already described customizable base dies as part of its HBM4 strategy. Given that customized memory is stickier, more profitable, and more important for customers trying to squeeze every watt and every cycle out of an AI accelerator, this is likely to become a lucrative segment of the market.

The timing of Micron's plans for HBM4E looks to be no accident. Nvidia and AMD both have next-generation data center GPUs slated for 2026 that will introduce HBM4, and HBM4E looks perfectly aligned with their successors. Nvidia's Rubin architecture, expected to follow Blackwell in 2026, is built around HBM4. Rubin-class GPUs are projected to deliver around 13 TB/s of memory bandwidth and up to 288GB of capacity, a jump from the 8 TB/s ceiling of Blackwell with HBM3E. A follow-on platform, Rubin Ultra, is already on Nvidia's roadmap for 2027. That platform specifically calls for HBM4E, with each GPU supporting up to a terabyte of memory and aggregate rack-level bandwidth measured in petabytes per second.

AMD's trajectory is just as aggressive. Its Instinct MI400 family, expected around the same time as Rubin, is also moving to HBM4. Leaks suggest as much as 432 GB of HBM4 and 19.6 TB/s of bandwidth, more than double what AMD's MI350 delivers today. Like Rubin, the MI400 uses a chiplet design bound together by ultra-wide memory buses, making HBM4 a necessity. After that comes HBM4E, set for 2027 or 2028, depending on yields and ecosystem readiness. This cadence makes Micron's partnership with TSMC particularly important.
By shifting the base die to a leading-edge logic process and offering customization, Micron can synchronize its roadmap with the needs of Rubin Ultra, MI400 successors, and whatever comes next in the accelerator space.

Looking at the bigger picture, Micron's partnership with TSMC raises questions about how widely HBM4E might proliferate into AI data centers. Right now, only the highest-end GPUs and TPUs use HBM, with the majority of servers still relying on DDR5 or LPDDR. That could change dramatically as workloads keep ballooning in size. Micron has already said that its HBM customer base has grown to six, with Nvidia among them. The company is also working with Nvidia on deploying LPDDR in servers. The partnership with TSMC suggests that Micron intends to make HBM4E a broadly adopted piece of AI infrastructure, potentially making it the standard tier of memory for AI nodes in the second half of the decade.
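As a back-of-the-envelope check on the bandwidth figures quoted above, here is a minimal sketch. It assumes the standard 2048-bit HBM4 interface mentioned earlier; the per-pin rates are the figures from this article, and the function name is ours:

```python
# Rough per-stack HBM bandwidth: per-pin data rate x interface width.
# Assumes the standard 2048-bit HBM4 interface; pin rates are the
# figures quoted in the article.
def stack_bandwidth_tbs(pin_gbps: float, bus_bits: int = 2048) -> float:
    """Peak per-stack bandwidth in TB/s: Gb/s per pin * pins / 8 bits per byte / 1000."""
    return pin_gbps * bus_bits / 8 / 1000

print(stack_bandwidth_tbs(11.0))  # ~2.82 TB/s -- Micron's sampled HBM4
print(stack_bandwidth_tbs(10.0))  # ~2.56 TB/s -- SK hynix's 10 GT/s figure
```

Multiplying by the number of stacks per package approximates GPU-level bandwidth, which is roughly how the projected Rubin and MI400 totals come together.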
[2]
Micron ships 11Gbps HBM4, working on HBM4E, and is ready with GDDR7 memory at 40Gbps+
TL;DR: Micron leads in ultra-fast GDDR7 memory with pin speeds exceeding 40Gbps, surpassing the 28-30Gbps speeds NVIDIA currently ships in its GeForce RTX 50 series GPUs. As the sole LPDRAM supplier for NVIDIA's AI servers, Micron is also advancing HBM4 memory and partnering with TSMC for next-gen HBM4E, enhancing AI and gaming performance.

Micron has teased that it's well-positioned with its GDDR7 memory, which is designed to deliver ultra-fast performance with pin speeds exceeding 40Gbps. NVIDIA's current GeForce RTX 50 series "Blackwell" gaming GPUs ship with GDDR7 memory at speeds of 28Gbps to 30Gbps, but in its recent earnings call, Micron confirmed it has 40Gbps+ GDDR7 memory ready. The company has been working in close collaboration with NVIDIA to pioneer the adoption of LPDRAM for servers, and since NVIDIA launched LPDRAM in its GB (Grace Blackwell) AI server family, Micron has been the sole supplier of LPDRAM in the data center.

Micron President & CEO Sanjay Mehrotra said: "In close collaboration with NVIDIA, Micron has pioneered the adoption of LPDRAM for servers, and since NVIDIA's launch of LPDRAM in their GB product family, Micron has been the sole supplier of LPDRAM in the data center. In addition to our leadership in HBM and LP5, Micron is also well-positioned with our GDDR7 products, which are designed to deliver ultra-fast performance with pin speeds exceeding 40 Gbps, along with best-in-class power efficiency to address needs of certain future AI systems".

Micron also announced it is shipping the industry's fastest HBM4 memory, with pin speeds of 11Gbps, and that it will team up with TSMC for next-gen HBM4E memory arriving down the road.
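To see what those pin speeds mean at the card level, here is a minimal illustrative sketch. The 256-bit bus width is a hypothetical value chosen only for comparison, since actual RTX 50 series bus widths vary by model:

```python
# Illustrative GDDR7 card-level bandwidth at different pin speeds.
# BUS_BITS = 256 is a hypothetical bus width for comparison only;
# real GPUs ship with a range of bus widths.
BUS_BITS = 256

for pin_gbps in (28, 30, 40):
    gb_per_s = pin_gbps * BUS_BITS / 8  # Gb/s per pin * pins / 8 bits per byte
    print(f"{pin_gbps} Gbps pins -> {gb_per_s:.0f} GB/s on a {BUS_BITS}-bit bus")
# 28 -> 896 GB/s, 30 -> 960 GB/s, 40 -> 1280 GB/s
```

On that assumed bus, moving from 28Gbps to 40Gbps pins lifts card-level bandwidth from 896 GB/s to 1.28 TB/s, which is the kind of headroom Micron is pitching for future AI and gaming systems.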
[3]
Micron Ships Industry's Fastest 11 Gbps HBM4 Modules, Talks TSMC Partnership For HBM4E & Over 40 Gbps GDDR7 Memory
Micron has confirmed shipping the industry's fastest 11 Gbps HBM4 DRAM to customers and announced a TSMC partnership for next-gen HBM4E.

Micron Says It Has Outperformed The Competition With Its 11 Gbps HBM4 Memory, Will Partner With TSMC For HBM4E DRAM

During its latest earnings call for Q4 & FY2025, Micron announced some key developments in its DRAM and NAND flash segments. On the earnings front, the company posted revenue of $11.32 billion versus $9.30 billion in the previous quarter, while full-year revenue grew from $25.11 billion to $37.38 billion. The company is now looking to expand its performance with next-gen solutions.

Starting with HBM, Micron said that its HBM4 12-Hi DRAM solution remains on track, and to address the recent increase in performance requirements, the company has produced and shipped the first samples of its fastest HBM4 solution to date, offering over 11 Gbps pin speeds and 2.8 TB/s of bandwidth. The company states that the new HBM4 products should outperform all competition in terms of performance and efficiency.

"We are pleased to note that our HBM share is on track to grow again and be in line with our overall DRAM share in this calendar Q3, delivering on our target that we have discussed for several quarters now. Micron Technology's HBM4 12-Hi remains on track to support customer platform ramps, even as the performance requirements for HBM4 bandwidth and pin speeds have increased. We have recently shipped customer samples of our HBM4 with industry-leading bandwidth exceeding 2.8 TB/s and pin speeds over 11 Gbps. We believe Micron Technology's HBM4 outperforms all competing HBM4 products, delivering industry-leading performance as well as best-in-class power efficiency. Our proven 1-gamma DRAM, innovative and power-efficient HBM4 design, in-house advanced CMOS base die, and advanced packaging innovations are key differentiators enabling this best-in-class product."

Sanjay Mehrotra - Micron President & CEO

Besides HBM4, the company also talked about next-gen HBM4E memory. Unlike HBM4, which is built entirely on an in-house advanced CMOS base die, Micron will partner with TSMC to manufacture the base logic die for HBM4E. This will be true for both standard and custom variants. Micron has also highlighted that it expects HBM4E to be a 2027 product.

"For HBM4E, Micron Technology will offer standard products as well as the option for customization of the base logic die. We are partnering with TSMC for manufacturing the HBM4E base logic die for both standard and customized products. Customization requires close collaboration with customers, and we expect HBM4E with customized base logic dies to deliver higher gross margins than standard HBM4E. Our HBM customer base has expanded and now includes six customers. We have pricing agreements with almost all customers for a vast majority of our HBM3E supply in calendar 2026. We are in active discussions with customers on the specifications and volumes for HBM4, and we expect to conclude agreements to sell out the remainder of our total HBM calendar 2026 supply in the coming months."

Sanjay Mehrotra - Micron President & CEO

Moving on, Micron says it has closely collaborated with NVIDIA on the adoption of LPDDR memory for servers, making it the sole supplier of LPDDR DRAM in the data center segment. Micron also discussed GDDR7 memory for AI and client products, which is expected to exceed 40 Gbps pin speeds with future iterations and products.
Currently, NVIDIA is the only GPU maker that leverages GDDR7 for its GPUs. Initially, Micron announced GDDR7 with 32 Gbps pin speeds, so 40 Gbps will mark a 25% speedup over that.

"In close collaboration with NVIDIA, Micron has pioneered the adoption of LPDRAM for servers, and since NVIDIA's launch of LPDRAM in their GB product family, Micron has been the sole supplier of LPDRAM in the data center. In addition to our leadership in HBM and LP5, Micron is also well-positioned with our GDDR7 products, which are designed to deliver ultra-fast performance with pin speeds exceeding 40 Gbps, along with best-in-class power efficiency to address needs of certain future AI systems."

Sanjay Mehrotra - Micron President & CEO

And lastly, there are developments on the 1γ DRAM node, which has reached mature yields in record time, 50% faster than the prior generation. The G9 NAND production ramp is also progressing nicely, covering both TLC and QLC NAND flash solutions. The company is first to market with PCIe Gen6 SSDs for data centers and should continue to offer more solutions built on 16Gb 1γ DRAM.
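The 25% figure is simply the ratio of the new pin speed to the originally announced one, as a quick check shows:

```python
# Quick check: 40 Gbps GDDR7 over the originally announced 32 Gbps parts.
old_gbps, new_gbps = 32, 40
print(f"{new_gbps / old_gbps - 1:.0%}")  # prints 25%
```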
Micron announces industry-leading 11 Gbps HBM4 memory, partners with TSMC for HBM4E development, and reveals 40+ Gbps GDDR7 memory, signaling major advancements in AI and high-performance computing.
Micron has made significant strides in the high-bandwidth memory (HBM) market, announcing the shipment of early HBM4 samples that operate at speeds exceeding 11 Gbps per pin [1][2]. This achievement translates to an impressive bandwidth of up to 2.8 TB/s, positioning Micron at the forefront of memory technology for AI and high-performance computing applications [1][3].

In a move that could reshape the memory landscape, Micron has partnered with TSMC to manufacture the base logic die for its next-generation HBM4E memory [1]. This collaboration, targeting production in 2027, opens up new possibilities for customization in AI workloads [1][3]. By offering both standard and custom HBM4E logic dies, Micron aims to provide tailored memory solutions that could significantly enhance AI system design [1].

Micron is not only focusing on HBM but is also making waves in the graphics memory sector. The company has announced its readiness with GDDR7 memory, boasting pin speeds exceeding 40 Gbps [2][3]. This represents a substantial leap from the current 28-30 Gbps speeds found in NVIDIA's GeForce RTX 50 series GPUs, promising significant improvements in gaming and AI performance [2].

The advancements in HBM4 and HBM4E align closely with the roadmaps of major GPU manufacturers like NVIDIA and AMD [1]. NVIDIA's upcoming Rubin architecture and AMD's Instinct MI400 family are both expected to leverage HBM4, with plans for HBM4E integration in their subsequent iterations [1]. This synchronization between memory and GPU development could lead to unprecedented performance gains in AI and data center applications.

Micron's expanded customer base for HBM, now including six customers with NVIDIA among them, underscores the company's growing influence in the AI memory market [1][3]. As the sole supplier of LPDRAM for NVIDIA's AI servers, Micron is strategically positioned to shape the future of memory in AI infrastructure [2][3].

The introduction of HBM4E and advanced GDDR7 memory could broaden the adoption of high-bandwidth memory solutions in AI data centers [1]. As workloads continue to grow in size and complexity, the shift from traditional DDR5 or LPDDR to more advanced memory technologies like HBM4E could become increasingly common [1].

Micron's innovations, coupled with its strategic partnerships and market positioning, signal a new era in memory technology for AI and high-performance computing. As these advancements materialize over the coming years, they are likely to play a crucial role in shaping the capabilities and efficiency of next-generation AI systems and data centers.