AI Competition Aims to Optimize Data Center Operations for Scientific Research

Researchers at Jefferson Lab are using AI models in a daily competition to improve data center efficiency and reduce costs for large-scale scientific experiments.

AI Models Compete to Optimize Data Center Operations

Researchers at the U.S. Department of Energy's Thomas Jefferson National Accelerator Facility are pioneering an innovative approach to data center management using artificial intelligence (AI). The project, dubbed DIDACT (Digital Data Center Twin), aims to enhance the reliability and cost-effectiveness of high-performance computing systems crucial for scientific research.

The Challenge of Big Science Data

At Jefferson Lab, the Continuous Electron Beam Accelerator Facility (CEBAF) generates massive amounts of data, tens of petabytes annually, from particle physics experiments. This data deluge, equivalent to filling a laptop's hard drive every minute, requires robust computing infrastructure for processing and analysis.

AI-Driven Solution: DIDACT

DIDACT employs machine learning models, specifically artificial neural networks, to monitor and predict the behavior of scientific computing clusters. These models compete in a daily contest to detect anomalies and optimize system performance.

Bryan Hess, Jefferson Lab's scientific computing operations manager, explains, "We're trying to understand characteristics of our computing clusters that we haven't seen before. It's looking at the data center in a more holistic way."

The Competition Framework

Unlike traditional AI training methods, DIDACT uses a continual learning approach. Multiple models, including variations of unsupervised neural networks called autoencoders, are trained on incrementally arriving data. Each day, a new "champion model" is crowned based on its ability to learn from fresh data and detect anomalies with the lowest error rate.

Diana McSpadden, a Jefferson Lab data scientist, describes the process: "They compete using known data to determine which had lower error. Whichever won that day would be the 'daily champion.'"

Sandbox: The AI Runway

To avoid disrupting ongoing scientific computations, the team developed a testbed cluster called the "sandbox." This environment serves as a runway where AI models can be trained and evaluated without impacting day-to-day operations.

Potential Impact on Scientific Research

The DIDACT system has the potential to significantly reduce downtime in data centers and optimize critical resources. By automating the detection of anomalies and potential issues, it allows system administrators to take proactive measures, ultimately lowering costs and improving scientific productivity.

Ahmed Hossam Mohammed, a postdoctoral researcher at Jefferson Lab, highlights the importance of this automation: "When compute clusters get bigger, it becomes tough for system administrators to keep track of all the components that might go bad. We wanted to automate this process with a model that flashes a red light whenever something weird happens."

Recognition and Future Prospects

The project has gained recognition in the scientific community, recently featured in IEEE Software as part of a special edition on machine learning in data center operations (MLOps). As large-scale scientific instruments continue to generate ever-increasing volumes of data, AI-driven management systems like DIDACT may become essential tools for maintaining efficient and cost-effective research infrastructure.
