John Carmack proposes fiber optic cache to replace DRAM for streaming AI model weights

Reviewed by Nidhi Govil


John Carmack, cofounder of id Software, has proposed using 200km fiber optic loops as a cache alternative to traditional DRAM for AI workloads. The concept leverages 256 Tb/s data rates to keep 32GB of data continuously streaming to AI accelerators, potentially offering significant power savings and addressing the ongoing memory crisis affecting AI development and consumer hardware costs.

John Carmack Envisions Radical Shift in AI Memory Architecture

John Carmack, the legendary cofounder of id Software and creator of Doom, has sparked intense discussion in the tech community with a provocative proposal: using long fiber optic lines as a cache system for streaming AI model weights, effectively replacing traditional DRAM in certain AI workloads [1]. The concept addresses a critical challenge as AI accelerators increasingly hit data movement bottlenecks before reaching their computational limits.

Source: Tom's Hardware

Carmack's fiber optic cache idea centers on "data in flight": leveraging the propagation delay in optical fiber as functional storage capacity [3]. With single-mode fiber achieving 256 Tb/s data rates over 200km distances, his calculations reveal that approximately 32GB of data exists within the cable at any given moment, with a staggering 32TB/s of bandwidth [1]. This creates what he describes as a recycling fiber loop that could continuously stream data into an L2 cache, keeping AI accelerators constantly fed without relying on power-hungry DRAM.
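The quoted figures can be checked with simple arithmetic. The sketch below assumes a typical group index of about 1.47 for silica single-mode fiber; the exact fiber and modulation details are not specified in the proposal:

```python
# Back-of-the-envelope check of the "data in flight" numbers.
# Assumptions: group index ~1.47 for silica single-mode fiber,
# the quoted 256 Tb/s line rate, and a 200 km loop.

C_VACUUM = 299_792_458          # speed of light in vacuum, m/s
N_FIBER = 1.47                  # assumed group index of silica fiber
LINE_RATE_BPS = 256e12          # 256 Tb/s, in bits per second
LOOP_LENGTH_M = 200e3           # 200 km

propagation_delay_s = LOOP_LENGTH_M / (C_VACUUM / N_FIBER)  # ~1 ms
bits_in_flight = LINE_RATE_BPS * propagation_delay_s
gigabytes_in_flight = bits_in_flight / 8 / 1e9              # ~31 GB
bandwidth_tbytes_per_s = LINE_RATE_BPS / 8 / 1e12           # 32 TB/s

print(f"one-way delay : {propagation_delay_s * 1e3:.2f} ms")
print(f"data in flight: {gigabytes_in_flight:.1f} GB")
print(f"bandwidth     : {bandwidth_tbytes_per_s:.0f} TB/s")
```

Roughly 31 GB emerges from the delay alone, consistent with the ~32GB figure, and 256 Tb/s divided by eight bits per byte gives the 32 TB/s number directly.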

Source: TechRadar

Delay-Line Memory Concept Returns for Modern AI Challenges

The proposal represents a modern revival of delay-line memory, a technology dating back to mid-20th-century computing, when sound waves carried data through a mercury medium [1]. Alan Turing himself once proposed a gin mixture as an alternative medium. The concept works for AI because model weights are accessed sequentially for inference, and nearly so for training, making weight reference patterns deterministic [2].
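The recirculating principle behind delay-line memory can be illustrated with a toy model. The `DelayLoop` class below is purely illustrative and not from Carmack's proposal: the medium is modeled as a fixed-length pipe, and data is "stored" by re-injecting whatever emerges at the far end.

```python
from collections import deque

class DelayLoop:
    """Toy recirculating delay-line store: a fixed number of symbols
    is always 'in flight'; reads tap the output as it recirculates."""

    def __init__(self, contents):
        # The loop holds exactly len(contents) symbols at all times.
        self.line = deque(contents)

    def tick(self):
        """Advance one symbol time: read the emerging symbol,
        re-inject it at the input, and hand it to the consumer."""
        symbol = self.line.popleft()
        self.line.append(symbol)      # recirculate
        return symbol

weights = list(range(8))              # stand-in for model weights
loop = DelayLoop(weights)

# One full revolution streams the weights in order...
first_pass = [loop.tick() for _ in range(8)]
# ...and because every symbol was re-injected, so does the next.
second_pass = [loop.tick() for _ in range(8)]
print(first_pass, second_pass)
```

The same sequential, deterministic access pattern is what makes the scheme plausible for inference: the consumer must want the data in exactly the order it comes around.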

The primary advantage lies in power savings. DRAM requires substantial energy for constant refresh cycles, while keeping light circulating through fiber demands far less power [1]. Carmack suggests that "fiber transmission may have a better growth trajectory than DRAM," a particularly relevant observation given the ongoing memory crisis [2]. However, the optical amplifiers and digital signal processors (DSPs) needed to sustain the signal could partially offset these energy gains.

Practical Hurdles and Near-Term Alternatives

While theoretically compelling, the fiber optic cache faces significant implementation challenges. The sheer logistics of managing 200km of fiber, maintaining signal strength throughout the loop, and the considerable cost present immediate obstacles [2]. More fundamentally, the system only functions when workloads match the stream timing: real AI deployments involve variability in batching, kernel scheduling, and model architecture that complicates purely sequential access patterns [3].

Recognizing these limitations, Carmack also highlighted a more practical near-term solution: ganging flash memory chips together to provide massive read bandwidth, as long as operations are done a page at a time and carefully pipelined [1]. This approach would require flash and AI accelerator vendors to agree on a high-speed interface, but given the massive investment in AI infrastructure, such standardization appears feasible [2].
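The scaling logic behind ganged flash can be sketched with assumed figures. The page size, read latency, and die count below are hypothetical illustrations, not vendor specifications:

```python
# Illustrative throughput model for ganged flash dies, assuming
# purely sequential, page-at-a-time reads striped round-robin so
# that every die is always busy. All figures are assumed.

PAGE_BYTES = 16 * 1024        # 16 KiB flash page (assumed)
T_READ_S = 50e-6              # per-die page read latency (assumed)
DIES = 1024                   # number of dies ganged together

# With reads fully pipelined, each die streams one page per read
# time, and aggregate bandwidth scales linearly with die count:
per_die_bps = PAGE_BYTES / T_READ_S
aggregate_gbps = DIES * per_die_bps / 1e9

print(f"per die   : {per_die_bps / 1e6:.0f} MB/s")
print(f"aggregate : {aggregate_gbps:.0f} GB/s")
```

Under these assumptions a single slow die contributes only a few hundred MB/s, but a thousand of them in aggregate reach hundreds of GB/s, which is the bandwidth-over-latency trade the page-at-a-time constraint buys.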

Research Already Exploring Similar Memory Architectures

Variations on these concepts have already been explored in academic research. Projects including Behemoth from 2021, FlashGNN and FlashNeuron from the same year, and more recently the Augmented Memory Grid have investigated alternative memory architectures for AI workloads

1

. These approaches align with industry trends toward rethinking memory hierarchies specifically for AI at scale, moving away from traditional PC-like architectures toward bandwidth-first pipelines

3

.

The timing of Carmack's proposal matters. The memory crisis, driven by AI's insatiable appetite for RAM, has created supply shortages that inflate costs for consumers purchasing everything from graphics cards to complete systems [2]. If AI workloads could be shifted to alternative memory solutions, it would relieve pressure on traditional DRAM markets. The crisis is forecast to persist through this year and potentially beyond, making alternative approaches increasingly attractive.

Whether fiber loops become practical datacenter components remains uncertain. But Carmack's thought experiment underscores a critical point: as AI accelerators grow more powerful, feeding them data efficiently becomes the limiting factor. Future systems may prioritize predictable, high-throughput streaming over random access flexibility, fundamentally changing how we architect AI memory systems. The conversation his proposal sparked — drawing responses from industry leaders and researchers — suggests the tech community recognizes that conventional memory hierarchies may not scale indefinitely for AI workloads. Watch for continued experimentation with flash-based solutions and new interface standards that could materialize sooner than exotic fiber configurations.

TheOutpost.ai

© 2026 Triveous Technologies Private Limited