Thermodynamic computing could slash AI image generation energy use by a factor of ten billion

Reviewed by Nidhi Govil

Scientists at Lawrence Berkeley National Laboratory have demonstrated that thermodynamic computing could generate AI images using one ten-billionth the energy of current tools like DALL-E and Midjourney. The breakthrough research shows promise for addressing the high energy consumption of generative AI, though significant hardware development challenges remain before the technology can rival existing models.

Thermodynamic Computing Tackles the High Energy Consumption of Generative AI

Generative AI image tools like DALL-E, Midjourney, and Stable Diffusion have transformed how we create visual content, but they demand enormous amounts of power. Now, groundbreaking research suggests thermodynamic computing could generate images using one ten-billionth the energy of current digital systems [1]. Stephen Whitelam, a staff scientist at Lawrence Berkeley National Laboratory, and his colleague Corneel Casert published findings in Nature Communications on January 10 demonstrating that it is possible to create a thermodynamic version of neural networks [2]. This lays the foundation for energy-efficient AI image generation that could dramatically reduce the energy consumption tied to machine learning tasks.

Source: TechRadar

How Thermodynamic Computing Performs Low-Energy Computations

Unlike traditional digital computers that rely on fixed circuits and precise calculations, thermodynamic computing employs physical circuits that respond to noise from thermal fluctuations in the environment [1]. A prototype chip from New York-based startup Normal Computing illustrates this approach: eight resonators connected through special couplers form a customizable calculator. Programmers pluck the resonators to introduce noise into the network, and as the system reaches equilibrium, the solution emerges in its new configuration [1]. This method harnesses nature's randomness rather than fighting it, enabling massive energy savings compared to energy-intensive digital neural networks.
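To build intuition for this settle-to-equilibrium style of computation, here is a minimal digital sketch, assuming an overdamped Langevin model of eight noisy, coupled "resonators". It is not based on Normal Computing's actual chip design; the coupling matrix, time step, and temperature are all illustrative. For a symmetric positive-definite coupling matrix A, the thermal fluctuations at equilibrium sample a Gaussian whose covariance is proportional to the inverse of A, so simply letting the noisy system settle performs a linear-algebra computation.

```python
import numpy as np

# Illustrative simulation only: overdamped Langevin dynamics of eight
# coupled "resonators" driven by thermal noise. At equilibrium the
# states sample a Gaussian with covariance T * A^-1, so relaxation
# itself computes a matrix inverse. Not Normal Computing's design.

rng = np.random.default_rng(0)

n = 8                                  # eight coupled resonators
A = rng.normal(size=(n, n))
A = A @ A.T + n * np.eye(n)            # symmetric positive-definite couplings

dt, steps, T = 1e-3, 200_000, 1.0      # time step, step count, temperature
x = np.zeros(n)                        # resonator states
samples = []

for step in range(steps):
    # deterministic drift toward equilibrium plus injected thermal noise
    noise = rng.normal(size=n) * np.sqrt(2 * T * dt)
    x = x - A @ x * dt + noise
    if step > steps // 2:              # discard the transient, then record
        samples.append(x.copy())

emp_cov = np.cov(np.array(samples).T)  # empirical equilibrium covariance
print(np.abs(emp_cov - T * np.linalg.inv(A)).max())  # small (~1e-2)
```

Running this prints a small residual: the fluctuations of the settled system really do encode the inverse of A, which is the sense in which the solution "emerges in the new configuration".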

AI Image Generation Through Natural Decay and Reconstruction

Whitelam's approach to AI image generation involves giving a thermodynamic computer a set of images, then allowing them to degrade naturally as random interactions between components run until equilibrium is achieved [3]. The system then calculates the probability of reversing this decay process and adjusts coupling values to maximize that likelihood. In simulations published January 20 in Physical Review Letters, this training process successfully generated images of handwritten digits without requiring energy-intensive digital neural networks or pseudorandom number generators [1]. The process resembles how diffusion models work, gradually adding noise until images resemble static on an analog television and then reversing the process, but it uses physical energy flows instead of digital computations [1].
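To make the diffusion analogy concrete, here is a toy, purely digital sketch, not Whitelam and Casert's thermodynamic protocol. A one-dimensional Gaussian stands in for the image dataset, so the "score" that guides the reversal is known in closed form; real diffusion models learn it with a neural network, and the thermodynamic proposal would let physical thermal fluctuations supply the randomness instead. All schedules and parameters are illustrative.

```python
import numpy as np

# Toy diffusion: noise a simple 1-D "dataset" into static, then run
# the reverse process (ancestral sampling) with the exact closed-form score.

rng = np.random.default_rng(1)
mu, sigma = 3.0, 0.5                      # toy data distribution N(3, 0.5^2)
n_steps = 200
betas = np.linspace(1e-4, 0.05, n_steps)  # forward noising schedule
alphas = 1.0 - betas
abar = np.cumprod(alphas)

def score(x, t):
    # exact score of the noised marginal N(sqrt(abar)*mu, abar*sigma^2 + 1 - abar)
    m = np.sqrt(abar[t]) * mu
    v = abar[t] * sigma**2 + (1.0 - abar[t])
    return -(x - m) / v

# start from pure noise ("analog-TV static") and integrate backwards
x = rng.normal(size=10_000)
for t in reversed(range(n_steps)):
    z = rng.normal(size=x.shape) if t > 0 else 0.0
    # standard DDPM reverse update, with the learned network replaced
    # by the exact score of this toy Gaussian problem
    x = (x + betas[t] * score(x, t)) / np.sqrt(alphas[t]) + np.sqrt(betas[t]) * z

print(x.mean(), x.std())                  # ~3.0 and ~0.5: the data comes back
```

The forward half of this picture is what a thermodynamic computer would get for free: letting images decay toward equilibrium is physically just thermal noise doing its work, with no pseudorandom number generator required.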

Source: IEEE

Hardware Development Challenges and Future Implications

While the energy cost advantages appear substantial, Whitelam cautions that current prototypes remain rudimentary compared to existing systems. "We don't yet know how to design a thermodynamic computer that would be as good at image generation as, say, DALL-E," he told IEEE. "It will still be necessary to work out how to build the hardware to do this" [1]. The world's first thermodynamic computing chip reached tape-out last year, but scaling from simple handwritten digits to the complex outputs of tools like Google Gemini's image generator requires entirely new hardware designs [2]. Whitelam acknowledges that near-term designs will likely fall somewhere between the theoretical ideal and current digital power levels [1].

Why This Matters for Data Center Growth and Energy Efficiency

As AI buildouts and data center growth place unprecedented strain on global energy supplies, the potential to reduce the energy consumption of machine learning by such dramatic factors becomes critical [2]. "This research suggests that it's possible to make hardware to do certain types of machine learning—here, image generation—with considerably lower energy cost than we do at present," Whitelam explains [3]. If successful at scale, energy-efficient neural networks based on thermodynamic principles could transform how we approach AI infrastructure, making advanced capabilities accessible without the current environmental and economic costs. The research shows that physical systems can perform basic machine learning tasks in fundamentally new ways, opening pathways for future innovation even as significant technical hurdles remain.
