Micron Begins Shipping HBM4 Memory: A Leap Forward for AI and High-Performance Computing


Micron has started shipping samples of its new HBM4 memory to key customers, offering 36GB capacity and over 2TB/s bandwidth. This next-generation memory technology promises significant improvements for AI and high-performance computing applications.

Micron's HBM4 Memory: A New Frontier in AI and High-Performance Computing

Micron Technology has announced a significant milestone in memory technology with the shipment of its new High Bandwidth Memory 4 (HBM4) samples to key customers [1]. This next-generation memory solution promises to revolutionize AI and high-performance computing applications with its impressive specifications and performance improvements.

Technical Specifications and Improvements

Source: TweakTown

The HBM4 modules boast a capacity of 36GB per stack, achieved by stacking 12 DRAM dies [1]. One of the most significant advancements is the wider memory interface, which has doubled from 1,024 bits per stack in previous generations to 2,048 bits [1]. This expansion allows the modules to deliver a sustained bandwidth exceeding 2 terabytes per second, marking a substantial leap in performance [2].
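As a rough sanity check on those figures, peak stack bandwidth can be estimated as interface width times per-pin data rate. The per-pin speeds below are illustrative assumptions, not numbers from the article; with a 2,048-bit interface, a pin rate of around 8 Gb/s is enough to clear the quoted 2 TB/s:

```python
def peak_bandwidth_tbps(interface_bits: int, pin_rate_gbps: float) -> float:
    """Back-of-the-envelope peak bandwidth for one HBM stack, in TB/s.

    bytes/s = (interface_bits / 8) * pin_rate_gbps * 1e9
    """
    return interface_bits / 8 * pin_rate_gbps * 1e9 / 1e12

# HBM4 sample: 2,048-bit interface; ~8 Gb/s per pin is an assumed
# rate that reproduces the ">2 TB/s" figure in the article.
hbm4 = peak_bandwidth_tbps(2048, 8.0)   # 2.048 TB/s

# Previous-generation 1,024-bit stack at an assumed ~9 Gb/s per pin
# lands near half that, showing the effect of doubling the interface.
prev_gen = peak_bandwidth_tbps(1024, 9.0)
```

The takeaway of the sketch: even at a lower per-pin rate, doubling the interface width roughly doubles peak bandwidth, which is how the 2,048-bit interface delivers the headline number.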

Micron reports that HBM4 delivers more than 60% better performance than the previous generation [3], along with over 20% better power efficiency than its HBM3E products [5]. These improvements are crucial for maximizing data center efficiency and supporting the growing demands of AI workloads.

Applications and Industry Impact

The primary applications for HBM4 are in AI acceleration and high-performance computing. The increased memory capacity and bandwidth address the escalating performance needs of generative AI, large language models, and other data-intensive applications [1]. Raj Narasimhan, senior vice president at Micron, emphasized that HBM4 will help AI accelerators respond faster and reason more effectively [5].

Industry Adoption and Future Outlook

Source: Benzinga

NVIDIA and AMD are expected to be among the first companies to adopt HBM4 memory in their products [1]. NVIDIA plans to use the modules in its Vera Rubin AI accelerators, set to launch in the second half of 2026 [1]. AMD is likely to integrate HBM4 into its next-generation Instinct MI400 series [1].

Micron plans to ramp up HBM4 production in 2026, aligning with the launch of customers' next-generation AI platforms [5]. As the company moves toward large-scale production, it will face challenges in thermal management and in demonstrating real-world performance, particularly for demanding AI workloads and HPC tasks [1].
