Curated by THEOUTPOST
On Fri, 7 Feb, 4:01 PM UTC
2 Sources
[1]
AI Designed for Speech Recognition Deciphers Earthquake Signals
Nvidia GPUs enable rapid processing of vast seismic datasets. Artificial intelligence (AI) built for speech is now decoding the language of earthquakes, Nvidia said in a blog post, noting that researchers have repurposed an AI model built for speech recognition to analyse seismic activity, offering new insights into how faults behave before earthquakes.

A team at Los Alamos National Laboratory used Meta's Wav2Vec-2.0, a deep-learning AI model originally designed to process human speech, to study seismic signals from Hawaii's 2018 Kilauea volcano collapse. Their research, published in Nature Communications, reveals that faults produce distinct, trackable signals as they shift -- similar to how speech consists of recognisable patterns.

"Seismic records are acoustic measurements of waves passing through the solid Earth," said Christopher Johnson, one of the study's lead researchers. "From a signal processing perspective, many similar techniques are applied for both audio and seismic waveform analysis."

By training the AI on continuous seismic waveforms and fine-tuning it with real-world earthquake data, the model decoded complex fault movements in real time -- a task where traditional methods, like gradient-boosted trees, often fall short. The project leveraged Nvidia's GPUs to process vast seismic datasets efficiently. "The AI analysed seismic waveforms and mapped them to real-time ground movement, revealing that faults might 'speak' in patterns resembling human speech," Nvidia said in the post.

While the AI showed promise in tracking real-time fault shifts, it was less effective at forecasting future displacement. Attempts to train the model for near-future predictions -- essentially, asking it to anticipate a slip event before it happens -- yielded inconclusive results.
Johnson emphasised that improving prediction would require more diverse training data and physics-based constraints. "We need to expand the training data to include continuous data from other seismic networks that contain more variations in naturally occurring and anthropogenic signals," he explained.

"So, no, speech-based AI models aren't predicting earthquakes yet. But this research suggests they could one day -- if scientists can teach it to listen more carefully," Nvidia concluded.

Meta's Wav2Vec-2.0, the successor to Wav2Vec, was released in September 2020. It uses self-supervision, learning from unlabeled training data to enhance speech recognition across numerous languages, dialects and domains. According to Meta, the model learns basic speech units to tackle self-supervised tasks: it is trained to predict the correct speech unit for masked portions of audio. "With just one hour of labeled training data, wav2vec 2.0 outperforms the previous state of the art on the 100-hour subset of the LibriSpeech benchmark -- using 100 times less labeled data," Meta said at the time of its announcement.
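Meta's description of masked speech-unit prediction can be made concrete. The fragment below is a minimal, self-contained illustration, not the production Wav2Vec-2.0 code: it frames a synthetic 1-D waveform and applies span masking of the kind the model pretrains with. The frame length, hop, span width, and masking probability are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def frame_signal(x, frame_len=320, hop=160):
    """Slice a 1-D waveform (audio or seismic) into overlapping frames."""
    n = 1 + (len(x) - frame_len) // hop
    return np.stack([x[i * hop : i * hop + frame_len] for i in range(n)])

def mask_spans(n_frames, span=10, p_start=0.065):
    """Span masking: each frame position starts a masked span of
    `span` consecutive frames with probability `p_start`."""
    mask = np.zeros(n_frames, dtype=bool)
    starts = rng.random(n_frames) < p_start
    for s in np.flatnonzero(starts):
        mask[s : s + span] = True
    return mask

x = rng.standard_normal(16000)   # one second of synthetic "waveform"
frames = frame_signal(x)
mask = mask_spans(len(frames))
# During pretraining, the model sees the frames with masked positions
# hidden and must identify the quantized unit of each masked frame.
```

During self-supervised pretraining, the loss is computed only on the masked positions, so the model never needs human-provided labels -- which is what let the Los Alamos team reuse the same recipe on unlabeled seismic records.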
[2]
When the Earth Talks, AI Listens
AI built for speech is now decoding the language of earthquakes. A team of researchers from the Earth and environmental sciences division at Los Alamos National Laboratory repurposed Meta's Wav2Vec-2.0, an AI model designed for speech recognition, to analyze seismic signals from Hawaii's 2018 Kīlauea volcano collapse. Their findings, published in Nature Communications, suggest that faults emit distinct signals as they shift -- patterns that AI can now track in real time. While this doesn't mean AI can predict earthquakes, the study marks an important step toward understanding how faults behave before a slip event.

"Seismic records are acoustic measurements of waves passing through the solid Earth," said Christopher Johnson, one of the study's lead researchers. "From a signal processing perspective, many similar techniques are applied for both audio and seismic waveform analysis."

Big earthquakes don't just shake the ground -- they upend economies. In the past five years, quakes in Japan, Turkey and California have caused tens of billions of dollars in damage and displaced millions of people. That's where AI comes in.

Led by Johnson, along with Kun Wang and Paul Johnson, the Los Alamos team tested whether speech-recognition AI could make sense of fault movements -- deciphering the tremors like words in a sentence. To test their approach, the team used data from the dramatic 2018 collapse of Hawaii's Kīlauea caldera, which triggered a series of earthquakes over three months. The AI analyzed seismic waveforms and mapped them to real-time ground movement, revealing that faults might "speak" in patterns resembling human speech.

Speech recognition models like Wav2Vec-2.0 are well suited for this task because they excel at identifying complex patterns in time-series data -- whether involving human speech or the Earth's tremors. The AI model outperformed traditional methods, such as gradient-boosted trees, which struggle with the unpredictable nature of seismic signals.
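Gradient boosting, the baseline mentioned above, is easy to illustrate: simple models are fit one after another, each trained on the residual errors of the ensemble so far. The toy sketch below uses depth-1 regression "stumps" on synthetic data; it is not the study's actual baseline, and every name and parameter in it is illustrative.

```python
import numpy as np

def fit_stump(x, residual):
    """Find the threshold on x that best reduces the squared error
    of the current residuals (a depth-1 regression tree)."""
    best_err, best = np.inf, None
    for t in x:
        left, right = residual[x <= t], residual[x > t]
        if len(left) == 0 or len(right) == 0:
            continue
        err = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if err < best_err:
            best_err, best = err, (t, left.mean(), right.mean())
    return best

def gradient_boost(x, y, n_rounds=100, lr=0.3):
    """Fit stumps sequentially, each one correcting the ensemble's residuals."""
    pred = np.full_like(y, y.mean())
    for _ in range(n_rounds):
        t, left_val, right_val = fit_stump(x, y - pred)
        pred = pred + lr * np.where(x <= t, left_val, right_val)
    return pred

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 10, 200))
y = np.sin(x) + 0.1 * rng.standard_normal(200)   # oscillating signal + noise
pred = gradient_boost(x, y)
mse = float(np.mean((pred - y) ** 2))
```

Because each stump contributes a single step function, the ensemble approximates a smooth, continuously varying signal only piece by piece -- one intuition for why such models lag behind deep networks on raw waveforms.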
Gradient-boosted trees build multiple decision trees in sequence, refining predictions by correcting previous errors at each step. However, these models struggle with highly variable, continuous signals like seismic waveforms. In contrast, deep learning models like Wav2Vec-2.0 excel at identifying underlying patterns.

How AI Was Trained to Listen to the Earth

Unlike previous machine learning models that required manually labeled training data, the researchers used a self-supervised learning approach to train Wav2Vec-2.0. The model was pretrained on continuous seismic waveforms and then fine-tuned using real-world data from Kīlauea's collapse sequence.

NVIDIA accelerated computing played a crucial role in processing vast amounts of seismic waveform data in parallel. High-performance NVIDIA GPUs accelerated training, enabling the AI to efficiently extract meaningful patterns from continuous seismic signals.

What's Still Missing: Can AI Predict Earthquakes?

While the AI showed promise in tracking real-time fault shifts, it was less effective at forecasting future displacement. Attempts to train the model for near-future predictions -- essentially, asking it to anticipate a slip event before it happens -- yielded inconclusive results.

"We need to expand the training data to include continuous data from other seismic networks that contain more variations in naturally occurring and anthropogenic signals," Johnson explained.

A Step Toward Smarter Seismic Monitoring

Despite the challenges in forecasting, the results mark an intriguing advancement in earthquake research. The study suggests that AI models designed for speech recognition may be uniquely suited to interpreting the intricate, shifting signals faults generate over time.

"This research, as applied to tectonic fault systems, is still in its infancy," Johnson said. "The study is more analogous to data from laboratory experiments than large earthquake fault zones, which have much longer recurrence intervals. Extending these efforts to real-world forecasting will require further model development with physics-based constraints."

So, no, speech-based AI models aren't predicting earthquakes yet. But this research suggests they could one day -- if scientists can teach it to listen more carefully.
Researchers at Los Alamos National Laboratory have adapted Meta's Wav2Vec-2.0, an AI model for speech recognition, to analyze seismic activity, potentially revolutionizing our understanding of fault behavior before earthquakes.
Researchers at Los Alamos National Laboratory have made a groundbreaking discovery in the field of seismology by repurposing an AI model originally designed for speech recognition to analyze earthquake signals. The team utilized Meta's Wav2Vec-2.0, a deep-learning AI model, to study seismic activity from Hawaii's 2018 Kilauea volcano collapse, revealing that faults produce distinct, trackable signals as they shift -- similar to recognizable patterns in human speech [1][2].
Christopher Johnson, one of the study's lead researchers, explained the rationale behind using a speech recognition model for seismic analysis: "Seismic records are acoustic measurements of waves passing through the solid Earth. From a signal processing perspective, many similar techniques are applied for both audio and seismic waveform analysis" [1]. This innovative approach demonstrates the versatility of AI models and their potential applications across different scientific domains.
The AI model outperformed traditional methods like gradient-boosted trees in analyzing complex, continuous seismic signals. By training the AI on continuous seismic waveforms and fine-tuning it with real-world earthquake data, the model was able to decode complex fault movements in real time [2]. The research team employed a self-supervised learning approach, pretraining the model on continuous seismic waveforms before fine-tuning it with data from the Kilauea collapse sequence [2].
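The fine-tuning step described above amounts to attaching a regression head that maps the pretrained encoder's outputs to measured ground displacement. The sketch below strips that idea to its simplest form: random vectors stand in for encoder features, and a plain least-squares head plays the role of the task-specific layer. Everything here is an illustrative assumption; the study fine-tunes the full network, not just a linear head.

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-ins for the pretrained encoder's outputs: one feature vector
# per waveform window (in the real pipeline these would come from
# Wav2Vec-2.0's transformer layers).
features = rng.standard_normal((500, 32))
true_weights = rng.standard_normal(32)
displacement = features @ true_weights + 0.05 * rng.standard_normal(500)

# "Fine-tuning" reduced to its simplest form: fit a linear regression
# head mapping encoder features to measured ground displacement.
head, *_ = np.linalg.lstsq(features, displacement, rcond=None)
pred = features @ head
mse = float(np.mean((pred - displacement) ** 2))
```

The appeal of this two-stage recipe is that the expensive, label-free pretraining is done once, while the supervised head needs only a comparatively small set of labeled (waveform, displacement) pairs.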
The project leveraged NVIDIA's GPUs to process vast seismic datasets efficiently. High-performance NVIDIA GPUs accelerated the training process, enabling the AI to extract meaningful patterns from continuous seismic signals effectively [1][2]. This technological backbone was crucial in handling the enormous amount of data involved in seismic analysis.
While the AI showed promise in tracking real-time fault shifts, it was less effective at forecasting future displacement. Attempts to train the model for near-future predictions yielded inconclusive results [1][2]. Johnson emphasized the need for more diverse training data and physics-based constraints to improve prediction capabilities: "We need to expand the training data to include continuous data from other seismic networks that contain more variations in naturally occurring and anthropogenic signals" [1].
This study marks a significant advancement in earthquake research, suggesting that AI models designed for speech recognition may be uniquely suited to interpreting the intricate signals generated by faults over time. While the technology is not yet capable of predicting earthquakes, it represents a step towards more sophisticated seismic monitoring systems [2].
The development of more accurate seismic analysis tools could have significant economic implications. In the past five years, earthquakes in Japan, Turkey, and California have caused tens of billions of dollars in damage and displaced millions of people [2]. Improved understanding and monitoring of seismic activity could potentially mitigate some of these impacts in the future.
The Outpost is a comprehensive collection of curated artificial intelligence software tools that cater to the needs of small business owners, bloggers, artists, musicians, entrepreneurs, marketers, writers, and researchers.
© 2025 TheOutpost.AI All rights reserved