6 Sources
[1]
Micron starts to ship samples of HBM4 memory to clients -- 36 GB capacity and bandwidth of 2 TB/s
Micron has started shipping samples of its next-generation HBM4 memory to key customers, the company announced this week. The new memory assemblies for next-generation AI and HPC processors feature a 36 GB capacity and offer bandwidth of 2 TB/s.

Micron's first HBM4 samples are 12-high devices packing 36 GB of memory, featuring a 2048-bit wide interface and a data transfer rate of around 7.85 GT/s. The samples rely on 24 Gb DRAM dies made on the company's 1β (1-beta) DRAM process technology, as well as logic base dies produced by TSMC using its 12FFC+ (12nm-class) or N5 (5nm-class) logic process technology. Micron's current-generation HBM3E memory also offers capacities of up to 36 GB, but it features a 1024-bit interface and a data transfer rate of up to 9.2 GT/s, providing peak bandwidth of up to 1.2 TB/s. As a result, Micron's new HBM4 boasts over 60% higher bandwidth as well as up to 20% better energy efficiency. In addition, Micron's HBM4 includes a built-in memory test feature to simplify integration for partners.

Micron is the industry's first DRAM maker to officially start sampling HBM4 memory with partners, though other manufacturers such as Samsung and SK hynix are expected to catch up shortly. Micron and other memory makers intend to start volume production of HBM4 sometime in 2026, when leading developers of AI processors begin volume production of their next-generation chips. Nvidia's codenamed Vera Rubin GPUs for datacenters are expected to be among the first products to adopt HBM4 in late 2026, though they will certainly not be the only AI and HPC processors to use it.

"Micron HBM4's performance, higher bandwidth and industry-leading power efficiency are a testament to our memory technology and product leadership," said Raj Narasimhan, senior vice president and general manager of Micron's Cloud Memory Business Unit.
"Building on the remarkable milestones achieved with our HBM3E deployment, we continue to drive innovation with HBM4 and our robust portfolio of AI memory and storage solutions. Our HBM4 production milestones are aligned with our customers' next-generation AI platform readiness to ensure seamless integration and volume ramp."
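The headline figures are internally consistent: peak per-stack bandwidth is simply interface width times per-pin transfer rate, and stack capacity is dies per stack times die density. A quick sanity check in Python, using the numbers reported above (a sketch, not Micron's own methodology):

```python
def peak_bandwidth_gbs(width_bits: int, rate_gts: float) -> float:
    """Peak per-stack bandwidth in GB/s: width (bits) * rate (GT/s) / 8 bits per byte."""
    return width_bits * rate_gts / 8

hbm4 = peak_bandwidth_gbs(2048, 7.85)   # ~2009.6 GB/s, i.e. ~2 TB/s per stack
hbm3e = peak_bandwidth_gbs(1024, 9.2)   # ~1177.6 GB/s, i.e. ~1.2 TB/s per stack
print(f"HBM4 vs HBM3E uplift: {hbm4 / hbm3e - 1:.0%}")  # ~71%, matching "over 60%"

# Capacity: a 12-high stack of 24 Gb dies (8 Gb per GB)
capacity_gb = 12 * 24 / 8
print(f"Stack capacity: {capacity_gb:.0f} GB")  # 36 GB
```

Note that HBM4 reaches higher bandwidth despite a lower per-pin rate than HBM3E, because the interface width doubles.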
[2]
HBM4 Memory by Micron: 36GB Capacity and 2TB/s Bandwidth Samples Shipping Soon
Micron is preparing to ship early samples of its new HBM4 memory modules, which stack 12 DRAM dies to offer 36 GB of capacity per module. These engineering samples will be sent to select partners soon, with mass production expected to start in early 2026.

The HBM4 modules are built on Micron's 1β process, a DRAM manufacturing technology that has been in use since 2022. Later this year, Micron plans to introduce an EUV-enhanced 1γ process for DDR5 memory, but HBM4 currently relies on the established 1β node.

One of the main improvements in HBM4 is the wider memory interface: its width doubles from 1,024 bits per stack in previous generations to 2,048 bits, allowing the modules to deliver a sustained bandwidth of 2 terabytes per second. Micron also claims roughly a 20% power-efficiency improvement over the current HBM3E generation. The increased number of stacked dies combined with the wider interface makes data movement more efficient, which is especially important for multi-chip configurations and setups requiring memory-coherent interconnects.

NVIDIA and AMD are expected to be the first companies to adopt HBM4 memory in their products. NVIDIA plans to use the modules in its Vera Rubin AI accelerators, which are set to launch in the second half of 2026. Meanwhile, AMD will likely integrate HBM4 into its next-generation Instinct MI400 series, with more information anticipated at its Advancing AI 2025 conference.

The added memory capacity and bandwidth of HBM4 address the increasing performance needs of generative AI, high-performance computing, and other data-heavy applications. As Micron moves towards large-scale production, the company will face challenges around thermal management and proving real-world performance. Thermal performance is critical since higher stack counts and increased bandwidth can generate more heat, and real-world benchmarks will be essential to demonstrate how effectively HBM4 supports demanding AI workloads and HPC tasks.
Source: Micron
[3]
Micron begins shipping HBM4 memory to key partners for next-gen AI systems
Micron has announced that it has started shipping its new HBM4 36GB 12-Hi memory to "multiple key customers," with the first likely to be NVIDIA and its next-gen Rubin R100 AI GPU. In a press release, the US-based memory maker said it is now shipping HBM4 36GB 12-Hi memory to select customers, built on its well-established 1β (1-beta) DRAM process node, 12-high advanced packaging technology, and highly capable memory built-in self-test (MBIST) feature. Micron says its new HBM4 provides seamless integration for customers and partners developing next-generation AI platforms.

Micron's new HBM4 has a 2048-bit memory interface that pushes more than 2.0 TB/s per memory stack, over 60% more performance than its previous-generation HBM3E memory. These gains in speed are well suited to AI, which is where HBM4 will live: inside AI GPUs in servers. Micron says its new HBM4 is over 20% more power efficient than its HBM3E memory, an improvement that provides maximum throughput at the lowest power consumption to maximize data center efficiency.

Raj Narasimhan, senior vice president and general manager of Micron's Cloud Memory Business Unit, explains: "Micron HBM4's performance, higher bandwidth and industry-leading power efficiency are a testament to our memory technology and product leadership. Building on the remarkable milestones achieved with our HBM3E deployment, we continue to drive innovation with HBM4 and our robust portfolio of AI memory and storage solutions. Our HBM4 production milestones are aligned with our customers' next-generation AI platform readiness to ensure seamless integration and volume ramp."
[4]
Micron Ships Next-Gen HBM4 Memory To Key Customers: Offering 36 GB Capacity & Over 2 TB/s Bandwidth
Micron is sampling next-gen HBM4 memory to key customers, bringing high-performance speeds and massive capacities to AI platforms.

Micron HBM4 Memory Sampling Commences To Key Customers: 12-Hi Solution First With 36 GB Capacity & Over 2 TB/s Speeds

Press Release: Micron Technology announced the shipment of HBM4 36GB 12-high samples to multiple key customers today. This milestone extends Micron's leadership in memory performance and power efficiency for AI applications. Built on its well-established 1β (1-beta) DRAM process, proven 12-high advanced packaging technology, and highly capable memory built-in self-test (MBIST) feature, Micron HBM4 provides seamless integration for customers and partners developing next-generation AI platforms.

A leap forward

As the use of generative AI continues to grow, the ability to effectively manage inference becomes more important. Micron HBM4 features a 2048-bit interface, achieving speeds greater than 2.0 TB/s per memory stack and more than 60% better performance over the previous generation. This expanded interface facilitates rapid communication and a high-throughput design that accelerates the inference performance of large language models and chain-of-thought reasoning systems. Simply put, HBM4 will help AI accelerators respond faster and reason more effectively.

Additionally, Micron HBM4 features over 20% better power efficiency compared to Micron's previous-generation HBM3E products, which first established new, unrivaled benchmarks in HBM power efficiency in the industry. This improvement provides maximum throughput with the lowest power consumption to maximize data center efficiency. Generative AI use cases continue to multiply, and this transformative technology is poised to deliver significant benefits to society. HBM4 is a crucial enabler, driving quicker insights and discoveries that will foster innovation in diverse fields such as healthcare, finance, and transportation.
Intelligence Accelerated: Micron's role in the AI revolution For nearly five decades, Micron has pushed the boundaries of memory and storage innovation. Today, Micron continues to accelerate AI by delivering a broad portfolio of solutions that turn data into intelligence, fueling breakthroughs from the data center to the edge. With HBM4, Micron reinforces its position as a critical catalyst for AI innovation and a reliable partner for our customers' most demanding solutions. Micron plans to ramp HBM4 in calendar year 2026, aligned to the ramp of customers' next-generation AI platforms.
[5]
Micron Ships Advanced HBM4 Memory To Fuel Next-Generation AI - Micron Technology (NASDAQ:MU)
Micron Technology MU on Tuesday announced the shipment of HBM4 36GB 12-high samples to multiple key customers. Built on its 1β (1-beta) DRAM process, 12-high advanced packaging technology, and memory built-in self-test (MBIST) feature, Micron HBM4 provides seamless integration for customers and partners developing next-generation AI platforms. Micron HBM4 features a 2048-bit interface, achieving speeds greater than 2.0 TB/s per memory stack and more than 60% better performance than the previous generation.

This expanded interface facilitates rapid communication and a high-throughput design that accelerates the inference performance of large language models and chain-of-thought reasoning systems. Additionally, Micron HBM4 features over 20% better power efficiency compared to Micron's previous-generation HBM3E products to maximize data center efficiency. HBM4 is a crucial enabler, driving quicker insights and discoveries to foster innovation in diverse fields such as healthcare, finance, and transportation, the company said in a press release. Micron plans to ramp HBM4 in 2026, aligned with the ramp of customers' next-generation AI platforms.

Last week, Micron announced that it had started shipping samples of its new 1-gamma LPDDR5X memory, built for AI smartphone use. Micron Technology stock has been down by over 15% in the last 12 months, as the Trump administration's tariff policies have affected the company. In May, CNBC's Jim Cramer said Micron Technology stock was overpriced, making it unsuitable for investment. Price Action: MU stock was trading higher by 2.10% at $113.25 at last check Tuesday.
[6]
Micron Ships HBM4 to Key Customers to Power Next-Gen AI Platforms
Micron HBM4 36GB 12-high products lead the industry in power efficiency for data center and cloud AI acceleration

BOISE, Idaho, June 10, 2025 (GLOBE NEWSWIRE) -- The importance of high-performance memory has never been greater, fueled by its crucial role in supporting the growing demands of AI training and inference workloads in data centers. Micron Technology, Inc. (Nasdaq: MU), today announced the shipment of HBM4 36GB 12-high samples to multiple key customers. This milestone extends Micron's leadership in memory performance and power efficiency for AI applications. Built on its well-established 1β (1-beta) DRAM process, proven 12-high advanced packaging technology and highly capable memory built-in self-test (MBIST) feature, Micron HBM4 provides seamless integration for customers and partners developing next-generation AI platforms.

A leap forward

As use of generative AI continues to grow, the ability to effectively manage inference becomes more important. Micron HBM4 features a 2048-bit interface, achieving speeds greater than 2.0 TB/s per memory stack and more than 60% better performance over the previous generation. This expanded interface facilitates rapid communication and a high-throughput design that accelerates the inference performance of large language models and chain-of-thought reasoning systems. Simply put, HBM4 will help AI accelerators respond faster and reason more effectively.

Additionally, Micron HBM4 features over 20% better power efficiency compared to Micron's previous-generation HBM3E products, which first established new, unrivaled benchmarks in HBM power efficiency in the industry. This improvement provides maximum throughput with the lowest power consumption to maximize data center efficiency. Generative AI use cases continue to multiply, and this transformative technology is poised to deliver significant benefits to society.
HBM4 is a crucial enabler, driving quicker insights and discoveries that will foster innovation in diverse fields such as healthcare, finance and transportation.

"Micron HBM4's performance, higher bandwidth and industry-leading power efficiency are a testament to our memory technology and product leadership," said Raj Narasimhan, senior vice president and general manager of Micron's Cloud Memory Business Unit. "Building on the remarkable milestones achieved with our HBM3E deployment, we continue to drive innovation with HBM4 and our robust portfolio of AI memory and storage solutions. Our HBM4 production milestones are aligned with our customers' next-generation AI platform readiness to ensure seamless integration and volume ramp."

Intelligence Accelerated: Micron's role in the AI revolution

For nearly five decades, Micron has pushed the boundaries of memory and storage innovation. Today, Micron continues to accelerate AI by delivering a broad portfolio of solutions that turn data into intelligence, fueling breakthroughs from the data center to the edge. With HBM4, Micron reinforces its position as a critical catalyst for AI innovation and a reliable partner for our customers' most demanding solutions. Micron plans to ramp HBM4 in calendar year 2026, aligned to the ramp of customers' next-generation AI platforms. For more information on Micron HBM4, visit https://www.micron.com/products/memory/hbm.

About Micron Technology, Inc.

Micron Technology, Inc. is an industry leader in innovative memory and storage solutions, transforming how the world uses information to enrich life for all. With a relentless focus on our customers, technology leadership, and manufacturing and operational excellence, Micron delivers a rich portfolio of high-performance DRAM, NAND, and NOR memory and storage products through our Micron and Crucial brands. Every day, the innovations that our people create fuel the data economy, enabling advances in artificial intelligence (AI) and compute-intensive applications that unleash opportunities -- from the data center to the intelligent edge and across the client and mobile user experience. To learn more about Micron Technology, Inc. (Nasdaq: MU), visit micron.com.

© 2025 Micron Technology, Inc. All rights reserved. Information, products, and/or specifications are subject to change without notice. Micron, the Micron logo, and all other Micron trademarks are the property of Micron Technology, Inc. All other trademarks are the property of their respective owners.

Based on internal Micron HBM4 testing and published HBM3E specifications (2.0 TB/s vs. 1.2 TB/s). Based on internal Micron simulation projections in comparison to Micron HBM3E 36GB 12-high and similar competitive products.
Micron has begun shipping samples of its next-generation HBM4 memory to key customers, offering 36 GB capacity and 2 TB/s bandwidth, setting new standards for AI and HPC applications.
Micron Technology has begun shipping samples of its next-generation High Bandwidth Memory 4 (HBM4) to key customers, a development that marks a substantial leap forward for artificial intelligence (AI) and high-performance computing (HPC) [1].
Source: Guru3D.com
The new HBM4 memory modules boast impressive specifications:

- 36 GB capacity per 12-high stack
- 2048-bit memory interface per stack
- Data transfer rate of around 7.85 GT/s
- Peak bandwidth of more than 2 TB/s per stack
These specifications represent a significant improvement over the current-generation HBM3E memory, offering over 60% higher bandwidth and up to 20% better energy efficiency [2].
Micron's HBM4 is built on the company's 1β (1-beta) DRAM process technology, which has been in use since 2022. The logic base dies are produced by TSMC using its 12FFC+ (12nm-class) or N5 (5nm-class) logic process technology [3].
A notable feature of HBM4 is its built-in memory self-test (MBIST) capability, which simplifies integration for partners. This, coupled with Micron's advanced packaging technology, ensures seamless integration for customers developing next-generation AI platforms [4].
The expanded interface and high-throughput design of HBM4 are expected to accelerate the inference performance of large language models and chain-of-thought reasoning systems. This advancement is crucial for the growing field of generative AI, where effective management of inference is becoming increasingly important [5].
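To see why per-stack bandwidth matters so much for inference, consider a rough memory-bound model of token generation: each decoded token streams the model's weights from HBM once, so the token rate is bounded by aggregate bandwidth divided by model size. The sketch below uses purely illustrative figures (a 70 GB model and an accelerator with eight HBM4 stacks); neither number comes from the articles above.

```python
# Memory-bound decode ceiling: tokens/s <= aggregate HBM bandwidth / bytes read per token.
# Hypothetical setup: 8 HBM4 stacks at ~2 TB/s each, 70 GB of model weights (e.g. 70B
# parameters at 8-bit precision), batch size 1, ignoring KV-cache and activation traffic.
stacks = 8
bw_per_stack_gbs = 2000          # ~2 TB/s per HBM4 stack, in GB/s
model_size_gb = 70

aggregate_gbs = stacks * bw_per_stack_gbs   # 16,000 GB/s aggregate
tokens_per_s = aggregate_gbs / model_size_gb
print(f"Decode ceiling: ~{tokens_per_s:.0f} tokens/s")  # ~229
```

Real systems batch many requests to amortize the weight traffic, but the point stands: for this class of workload, memory bandwidth, not compute, is often the binding constraint, which is why a wider interface translates directly into faster responses.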
Source: Benzinga
While Micron is the first to officially start sampling HBM4 memory modules, other manufacturers like Samsung and SK hynix are expected to follow suit. The industry anticipates volume production of HBM4 to commence in 2026, aligning with the production schedules of next-generation AI processors [1].

Nvidia's Vera Rubin datacenter GPUs are expected to be among the first products to adopt HBM4 in late 2026. AMD is also likely to integrate HBM4 into its next-generation Instinct MI400 series [2].

As Micron moves towards large-scale production, the company will face challenges related to thermal management and proving real-world performance. The increased stack count and bandwidth can generate more heat, making thermal performance a critical factor [2].

Micron's development of HBM4 is part of its broader strategy to accelerate AI innovation. The company plans to introduce an EUV-enhanced 1γ process for DDR5 memory later this year, further expanding its portfolio of AI memory and storage solutions [4].