Artificial Intelligence (AI) built for speech is now decoding the language of earthquakes, Nvidia said in a blog post. Researchers have repurposed an AI model built for speech recognition to analyse seismic activity, offering new insights into how faults behave before earthquakes. A team at Los Alamos National Laboratory used Meta's Wav2Vec-2.0, a deep-learning model originally designed to process human speech, to study seismic signals from the 2018 collapse of Hawaii's Kilauea volcano. Their research, published in Nature Communications, shows that faults produce distinct, trackable signals as they shift, much as speech consists of recognisable patterns.
AI Listening to the Earth
"Seismic records are acoustic measurements of waves passing through the solid Earth," said Christopher Johnson, one of the study's lead researchers. "From a signal processing perspective, many similar techniques are applied for both audio and seismic waveform analysis."
By training the AI on continuous seismic waveforms and fine-tuning it with real-world earthquake data, the model decoded complex fault movements in real time—a task where traditional methods, like gradient-boosted trees, often fall short. The project leveraged Nvidia's GPUs to process vast seismic datasets efficiently.
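As a rough illustration of this kind of setup (not the lab's actual pipeline), a pretrained wav2vec 2.0 encoder can be topped with a small regression head that maps a window of waveform samples to a ground-displacement estimate. The checkpoint name, window length, and head sizes below are assumptions for the sketch, which uses the Hugging Face transformers and PyTorch libraries.

```python
import torch
import torch.nn as nn
from transformers import Wav2Vec2Model

class SeismicDisplacementRegressor(nn.Module):
    """Hypothetical sketch: a wav2vec 2.0 encoder plus a small regression
    head mapping a window of waveform samples to a single displacement
    value. All sizes and the checkpoint name are illustrative."""

    def __init__(self, pretrained="facebook/wav2vec2-base"):
        super().__init__()
        self.encoder = Wav2Vec2Model.from_pretrained(pretrained)
        hidden = self.encoder.config.hidden_size
        self.head = nn.Sequential(
            nn.Linear(hidden, 128),
            nn.ReLU(),
            nn.Linear(128, 1),  # one displacement estimate per window
        )

    def forward(self, waveform):
        # waveform: (batch, samples), already resampled and normalised
        features = self.encoder(waveform).last_hidden_state  # (batch, frames, hidden)
        pooled = features.mean(dim=1)                         # average over time frames
        return self.head(pooled).squeeze(-1)                  # (batch,)

model = SeismicDisplacementRegressor()
dummy = torch.randn(2, 16000)   # stand-in for two 1-second windows at 16 kHz
print(model(dummy).shape)       # torch.Size([2])
```

In practice, seismic channels are sampled far below 16 kHz, so real inputs would need resampling or a retrained feature extractor; the sketch only shows how the speech encoder can be wired to a regression target.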
"The AI analysed seismic waveforms and mapped them to real-time ground movement, revealing that faults might 'speak' in patterns resembling human speech," Nvidia said in a post.
Can AI Predict Earthquakes?
While the AI showed promise in tracking real-time fault shifts, it was less effective at forecasting future displacement. Attempts to train the model for near-future predictions — essentially, asking it to anticipate a slip event before it happens — yielded inconclusive results. Johnson emphasised that improving prediction would require more diverse training data and physics-based constraints.
"We need to expand the training data to include continuous data from other seismic networks that contain more variations in naturally occurring and anthropogenic signals," he explained.
"So, no, speech-based AI models aren't predicting earthquakes yet. But this research suggests they could one day — if scientists can teach it to listen more carefully," Nvidia concluded.
Meta's Wav2Vec-2.0
Meta's Wav2Vec-2.0, the successor to Wav2Vec, was released in September 2020. It uses self-supervision, learning from unlabeled training data to improve speech recognition across numerous languages, dialects, and domains. According to Meta, the model learns a set of basic speech units, which it uses for its self-supervised task: it is trained to predict the correct speech unit for masked portions of the audio.
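As a quick illustration, the publicly released wav2vec 2.0 checkpoints can be loaded through the Hugging Face transformers library to turn raw audio into one contextual vector per roughly 20 ms frame; during pretraining, spans of these frames are masked and the model learns to pick the correct quantised speech unit for each masked position. The snippet below only shows inference with a released checkpoint, and the one-second random tensor stands in for real audio.

```python
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

# Load the feature extractor and encoder from a released wav2vec 2.0 checkpoint
extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base-960h")

audio = torch.randn(16000).numpy()  # stand-in for 1 second of 16 kHz audio
inputs = extractor(audio, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# One contextual vector per ~20 ms frame of audio
print(outputs.last_hidden_state.shape)  # e.g. torch.Size([1, 49, 768])
```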
"With just one hour of labeled training data, wav2vec 2.0 outperforms the previous state of the art on the 100-hour subset of the LibriSpeech benchmark — using 100 times less labeled data," Meta said at the time of its announcement.