PNNL Researchers Develop New Method to Measure Uncertainty in AI Model Training

Scientists at Pacific Northwest National Laboratory have created a novel approach to quantify uncertainty in AI model training, particularly for neural network potentials. This method aims to increase trust in AI predictions for materials science and chemistry applications.

PNNL Researchers Tackle AI Uncertainty in Materials Science

Researchers at the Department of Energy's Pacific Northwest National Laboratory (PNNL) have developed a groundbreaking method to measure uncertainty in artificial intelligence (AI) model training, specifically for neural network potentials. This advancement aims to bridge the gap between the speed of AI predictions and the trust scientists place in their accuracy, particularly in the fields of materials science and chemistry 1 2.

The Challenge of AI Reliability

AI models trained on experimental and theoretical data are increasingly being used to predict material properties before physical creation and testing. This approach has the potential to revolutionize the development of medicines and industrial chemicals, significantly reducing the time and cost associated with traditional trial-and-error methods 1.

However, the reliability of these AI predictions has been a major concern. As PNNL data scientist Jenna Bilbrey Pope notes, "We noticed that some uncertainty models tend to be overconfident, even when the actual error in prediction is high" 1. This overconfidence is a common issue with deep neural networks and can lead to misplaced trust in AI predictions.
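The article does not describe how such overconfidence is detected, but a simple calibration check conveys the idea: compare the model's own uncertainty estimate against its actual error on held-out data. The sketch below is a minimal illustration with invented numbers, not PNNL's method; the arrays and the ensemble-style standard deviation are assumptions.

```python
import numpy as np

# Hypothetical held-out data: reference energies, model predictions, and the
# model's own uncertainty estimate (e.g. an ensemble standard deviation).
rng = np.random.default_rng(0)
reference_energy = rng.normal(size=200)
predicted_energy = reference_energy + rng.normal(scale=0.05, size=200)
predicted_std = np.full(200, 0.01)  # deliberately too small -> overconfident

abs_error = np.abs(predicted_energy - reference_energy)

# For a well-calibrated Gaussian model, mean(|error| / sigma) is about 0.8 and
# roughly 68% of errors fall within one sigma. Much larger ratios and much
# lower coverage are the signature of overconfidence.
calibration_ratio = np.mean(abs_error / predicted_std)
coverage_1sigma = np.mean(abs_error <= predicted_std)

print(f"mean |error| / sigma: {calibration_ratio:.2f}")
print(f"errors within 1 sigma: {coverage_1sigma:.1%}")
```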

SNAP: A New Approach to Uncertainty Quantification

The PNNL team, led by Bilbrey Pope and Sutanay Choudhury, has introduced a new uncertainty quantification method as part of their Scalable Neural network Atomic Potentials (SNAP) framework. This method provides a metric that mitigates the overconfidence issue common in AI models 2.

Key features of the SNAP framework include (see the sketch after this list):

  1. Ability to determine how well neural network potentials have been trained
  2. Identification of predictions outside the model's training boundaries
  3. Guidance for active learning to improve the model's performance 1
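SNAP's actual implementation is not reproduced in the article. The following sketch only illustrates, with invented uncertainty scores and a hypothetical threshold, how flagging structures outside the training boundaries and prioritizing them for active learning could look in principle.

```python
import numpy as np

# Illustrative only: per-structure uncertainty scores from some trained
# neural network potential (invented numbers, not SNAP output).
structure_ids = np.array(["Si-bulk", "Si-slab", "Si-defect", "Ge-bulk", "Ge-alloy", "Si-liquid"])
uncertainty = np.array([0.02, 0.03, 0.45, 0.04, 0.60, 0.05])

# Hypothetical threshold separating "inside training boundaries" from
# "outside"; in practice it would be derived from the training-set
# uncertainty distribution.
threshold = 0.10

out_of_domain = uncertainty > threshold
print("Outside training boundaries:", structure_ids[out_of_domain])

# Active-learning guidance: label the most uncertain structures first and
# fold them back into the training set.
query_order = structure_ids[np.argsort(uncertainty)[::-1]]
print("Suggested labeling order:", list(query_order[:3]))
```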

Benchmarking with MACE

To validate their method, the researchers benchmarked it against MACE, one of the most advanced foundation models for atomistic materials chemistry. They quantified how accurately the model determines the energies of specific material families, providing insight into which simulations can be confidently performed with AI approximations instead of time-intensive supercomputer calculations 1 2.
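The article gives no code for this benchmark; a generic per-family error comparison of the kind it describes might look like the sketch below. All records are invented, and the workflow is an assumption rather than the actual SNAP or MACE pipeline.

```python
import numpy as np
from collections import defaultdict

# Hypothetical benchmark records: (material family, reference energy from an
# expensive calculation such as DFT, energy predicted by a foundation model).
records = [
    ("perovskites", -5.12, -5.10),
    ("perovskites", -4.98, -4.93),
    ("zeolites",    -7.40, -7.05),
    ("zeolites",    -7.21, -6.90),
    ("alloys",      -3.55, -3.54),
]

errors = defaultdict(list)
for family, e_ref, e_pred in records:
    errors[family].append(abs(e_pred - e_ref))

# Families with low mean absolute error are candidates for replacing
# supercomputer calculations with the AI approximation.
for family, errs in errors.items():
    mae = np.mean(errs)
    verdict = "trust AI approximation" if mae < 0.05 else "keep using DFT"
    print(f"{family:12s} MAE = {mae:.3f} eV -> {verdict}")
```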

Implications for AI in Scientific Research

The development of this uncertainty quantification method has significant implications for the integration of AI into scientific workflows:

  1. Increased trust in AI predictions for materials science and chemistry
  2. Potential for creating autonomous laboratories with AI as a trusted assistant
  3. Ability to provide confidence guarantees for AI predictions, such as "85% confidence that catalyst A is better than catalyst B" 2
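As a worked illustration of that kind of statement, the sketch below turns two hypothetical barrier predictions with Gaussian uncertainties into a single probability that catalyst A beats catalyst B. The numbers, the "lower barrier is better" criterion, and the independence assumption are all invented for the example and are not drawn from the PNNL work.

```python
import math

# Hypothetical example of turning uncertain predictions into a statement like
# "85% confidence that catalyst A is better than catalyst B". Here "better"
# means a lower predicted activation barrier, and both predictions are treated
# as independent Gaussians; both are assumptions made only for this sketch.
mu_a, sigma_a = 0.62, 0.05   # catalyst A: barrier (eV) and uncertainty
mu_b, sigma_b = 0.70, 0.06   # catalyst B: barrier (eV) and uncertainty

# P(E_A < E_B) = Phi((mu_B - mu_A) / sqrt(sigma_A^2 + sigma_B^2))
z = (mu_b - mu_a) / math.sqrt(sigma_a**2 + sigma_b**2)
confidence = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

print(f"Confidence that catalyst A outperforms catalyst B: {confidence:.0%}")
```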

Open-Source Availability

In a move to promote wider adoption and further development, the PNNL team has made their method publicly available on GitHub as part of the SNAP repository. This allows other researchers to apply the uncertainty quantification method to their own work, potentially accelerating advancements across various scientific disciplines 1 2.

As AI continues to play an increasingly important role in scientific discovery, methods like SNAP that provide a measure of uncertainty and reliability will be crucial in building trust and confidence in AI-driven research outcomes.
