PNNL Researchers Develop New Method to Measure Uncertainty in AI Model Training

Scientists at Pacific Northwest National Laboratory have created a novel approach to quantify uncertainty in AI model training, particularly for neural network potentials. This method aims to increase trust in AI predictions for materials science and chemistry applications.

PNNL Researchers Tackle AI Uncertainty in Materials Science

Researchers at the Department of Energy's Pacific Northwest National Laboratory (PNNL) have developed a groundbreaking method to measure uncertainty in artificial intelligence (AI) model training, specifically for neural network potentials. This advancement aims to bridge the gap between the speed of AI predictions and the trust scientists place in their accuracy, particularly in the fields of materials science and chemistry.[1][2]

The Challenge of AI Reliability

AI models trained on experimental and theoretical data are increasingly being used to predict material properties before physical creation and testing. This approach has the potential to revolutionize the development of medicines and industrial chemicals, significantly reducing the time and cost associated with traditional trial-and-error methods.[1]

However, the reliability of these AI predictions has been a major concern. As PNNL data scientist Jenna Bilbrey Pope notes, "We noticed that some uncertainty models tend to be overconfident, even when the actual error in prediction is high."[1] This overconfidence is a common issue with deep neural networks and can lead to misplaced trust in AI predictions.
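To make the overconfidence problem concrete, here is a minimal calibration check, assuming a hypothetical model that reports both a prediction and an uncertainty estimate for each structure. The array names and values are illustrative and are not part of the PNNL code; the idea is simply to count how often the actual error exceeds the uncertainty the model claims.

```python
import numpy as np

# Hypothetical per-structure predictions, model-reported uncertainties
# (e.g. ensemble standard deviations), and trusted reference values.
predicted = np.array([-3.21, -5.02, -1.87, -4.40])   # eV
uncertainty = np.array([0.02, 0.01, 0.05, 0.03])     # eV, reported by the model
reference = np.array([-3.20, -5.30, -1.90, -4.41])   # eV, ground truth

actual_error = np.abs(predicted - reference)

# An "overconfident" prediction is one whose actual error is far larger
# than the uncertainty the model assigns to it.
overconfident = actual_error > 3.0 * uncertainty
print("Overconfident predictions at indices:", np.flatnonzero(overconfident))
print("Overconfidence rate:", overconfident.mean())
```

A well-calibrated uncertainty estimate would keep this rate close to the nominal level implied by the chosen multiple of the reported uncertainty.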

SNAP: A New Approach to Uncertainty Quantification

The PNNL team, led by Bilbrey Pope and Sutanay Choudhury, has introduced a new uncertainty quantification method as part of their Scalable Neural network Atomic Potentials (SNAP) framework. This method provides a metric that mitigates the overconfidence issue common in AI models.[2]

Key features of the SNAP framework include:

  1. Ability to determine how well neural network potentials have been trained
  2. Identification of predictions outside the model's training boundaries (see the sketch after this list)
  3. Guidance for active learning to improve the model's performance[1]
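The second point, flagging predictions that fall outside the training boundaries, can be illustrated with a generic nearest-neighbour distance check in descriptor space. This is a sketch of the general idea, not the actual SNAP metric; the featurization, threshold, and data below are all hypothetical.

```python
import numpy as np

def distance_to_training_set(query_descriptor, training_descriptors):
    """Euclidean distance from a query structure's feature vector
    to its nearest neighbour in the training set."""
    diffs = training_descriptors - query_descriptor
    return np.sqrt((diffs ** 2).sum(axis=1)).min()

# Hypothetical descriptors: each row is a per-structure feature vector
# from some featurizer (not the actual SNAP representation).
rng = np.random.default_rng(0)
training_descriptors = rng.normal(size=(500, 32))
query_descriptor = rng.normal(loc=3.0, size=32)  # deliberately shifted structure

# Calibrate a threshold from leave-one-out nearest-neighbour distances
# within the training set, then test the query against it.
loo = [
    np.sqrt(((np.delete(training_descriptors, i, axis=0) - d) ** 2).sum(axis=1)).min()
    for i, d in enumerate(training_descriptors)
]
threshold = np.percentile(loo, 99)

d = distance_to_training_set(query_descriptor, training_descriptors)
print(f"distance={d:.2f}, threshold={threshold:.2f}, "
      f"outside training boundary: {d > threshold}")
```

Structures flagged in this way are natural candidates for active learning: adding them to the training set extends the region where the potential can be trusted.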

Benchmarking with MACE

To validate their method, the researchers benchmarked it against MACE, one of the most advanced foundation models for atomistic materials chemistry. They calculated the model's proficiency in determining the energy of specific material families, providing insights into which simulations can be confidently performed using AI approximations instead of time-intensive supercomputer calculations.[1][2]
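For readers who want to try this kind of benchmark themselves, the sketch below queries a pretrained MACE foundation model for the energy of a simple structure and compares it with a reference value. It assumes the publicly available `mace-torch` and `ase` packages; the keyword arguments may differ between versions, and the reference energy here is a placeholder rather than real data.

```python
from ase.build import bulk
from mace.calculators import mace_mp

# Pretrained MACE-MP foundation model as an ASE calculator.
calc = mace_mp(model="medium", device="cpu")

atoms = bulk("Cu", "fcc", a=3.6)  # simple test structure
atoms.calc = calc
e_mace = atoms.get_potential_energy()

e_reference = -3.70  # placeholder reference energy in eV, not a real DFT value
print(f"MACE energy: {e_mace:.3f} eV, "
      f"|error| vs reference: {abs(e_mace - e_reference):.3f} eV")
```

Repeating such comparisons across a family of structures, and pairing each error with the model's reported uncertainty, is the kind of exercise that shows where AI approximations can safely stand in for supercomputer calculations.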

Implications for AI in Scientific Research

The development of this uncertainty quantification method has significant implications for the integration of AI into scientific workflows:

  1. Increased trust in AI predictions for materials science and chemistry
  2. Potential for creating autonomous laboratories with AI as a trusted assistant
  3. Ability to provide confidence guarantees for AI predictions, such as "85% confidence that catalyst A is better than catalyst B" (see the sketch after this list)[2]
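One way a statement like "85% confidence that catalyst A is better than catalyst B" could be derived is by treating each prediction's reported uncertainty as the standard deviation of a Gaussian error, then computing the probability that A truly outperforms B. This is a generic illustration under that Gaussian assumption, not necessarily the procedure used by the PNNL team, and the numbers are made up.

```python
import math

# Hypothetical model outputs: predicted activity (higher is better) and the
# model-reported uncertainty (standard deviation) for two candidate catalysts.
mu_a, sigma_a = 1.42, 0.10   # catalyst A
mu_b, sigma_b = 1.25, 0.12   # catalyst B

# With independent Gaussian errors, the probability that A outperforms B is
# the probability that the difference (A - B) is positive.
z = (mu_a - mu_b) / math.sqrt(sigma_a**2 + sigma_b**2)
p_a_better = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

print(f"Confidence that catalyst A is better than catalyst B: {p_a_better:.0%}")
```

With these illustrative numbers the result comes out near 86%, showing how calibrated uncertainties translate directly into the kind of confidence guarantee quoted above.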

Open-Source Availability

In a move to promote wider adoption and further development, the PNNL team has made their method publicly available on GitHub as part of the SNAP repository. This allows other researchers to apply the uncertainty quantification method to their own work, potentially accelerating advancements across various scientific disciplines.[1][2]

As AI continues to play an increasingly important role in scientific discovery, methods like SNAP that provide a measure of uncertainty and reliability will be crucial in building trust and confidence in AI-driven research outcomes.
