Accounting for offbeat earthquakes could improve forecasts
A new model considers the full history of earthquakes on a fault, improving forecasts of when the next will strike.
It’s one of the toughest questions seismologists face: When will the next big earthquake occur? Although seismologists are not able to predict earthquakes, they can make forecasts showing the probability of one happening in a given area.
These forecasts are a key line of defense against earthquake damage—helping to identify at-risk areas and encourage building safety—so seismologists are continually working to improve their accuracy.
In the latest advance, seismologists have developed a model that can better anticipate large quakes on faults that rupture at irregular intervals. The innovation, said James Neely, a seismologist with the University of Chicago and lead author of the study, “brings us closer to forecasting earthquakes that don’t follow a regular pattern.”
The Past Is Prologue
Earthquake forecasts are based on the occurrence of large earthquakes in the past, using evidence from historical archives or imprints left in the geological record to build catalogs extending back centuries or thousands of years. The average time between past earthquakes, known as the recurrence interval, is then used in models to determine the likelihood of another quake occurring.
But these forecasts assume that earthquakes occur at regular intervals when, in fact, they are more sporadic, Neely explained. Sometimes they strike in clusters, separated by just years or decades, and at other times a fault can be quiet for centuries. “Earthquakes are almost like an unreliable bus, sometimes turning up sooner or later than expected,” he said.
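The conventional recurrence-interval approach can be illustrated with a simple time-independent (Poisson) forecast, a common baseline in hazard assessment: given a mean recurrence interval, the probability of at least one large quake in a forecast window is 1 − e^(−t/τ). This is a minimal sketch of that baseline, not the study’s model; the 135-year interval is the Mojave figure quoted later in this article, and the 30-year window is an illustrative choice.

```python
import math

def poisson_forecast(tau_years: float, window_years: float) -> float:
    """Probability of at least one quake in the window, assuming events
    occur randomly at a constant average rate of 1/tau_years per year."""
    return 1.0 - math.exp(-window_years / tau_years)

# Mean recurrence interval of ~135 years (Mojave section), 30-year window.
prob = poisson_forecast(tau_years=135.0, window_years=30.0)
print(f"30-year probability: {prob:.0%}")  # roughly 20%
```

Operational hazard models typically use renewal distributions rather than this memoryless baseline, but the example shows the core limitation Neely describes: the calculation depends only on the average interval, not on whether recent quakes were clustered or sparse.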
Field geologists have known for some time that large quakes do not happen like clockwork, said Ashley Streig, an earthquake geologist at Portland State University who was not involved in the study. “But we’ve not had a model to describe that.”
Taking the Strain
The new model, developed by Northwestern University geologists and statisticians, mimics earthquake behavior by factoring in the timing and order of previous quakes on a fault. This allows researchers to estimate how much the surrounding rock has deformed along the fault over time. This buildup of deformation, known as strain, influences whether the next earthquake arrives ahead of schedule.
Since the devastating 1906 San Francisco earthquake, seismologists have supposed that slow movements along a fault cause strain to build up, all of which is released in a big earthquake.
If that were the case, then earthquake occurrences along faults across the world should fit this cyclical framework, said study coauthor Leah Salditch, an earthquake geophysicist with the U.S. Geological Survey (USGS). “But they don’t, and that implies there is a residual pool of strain on the fault which we haven’t been representing.”
Taking the order of past earthquakes into account is key, Salditch said. “An earthquake that follows a long period of quiet may not shake out all the strain that has built up. And that will raise the probability of another quake happening.”
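The idea of residual strain can be captured in a toy simulation. This is a hypothetical illustration, not the authors’ actual model: here strain builds at a constant rate, each quake releases only a random fraction of it, and the leftover strain carries over, so a quake after a long quiet period can leave the fault still loaded and prone to another event soon afterward. All parameter values are arbitrary choices for the sketch.

```python
import random

def simulate(years=2000, rate=1.0, threshold=100.0, seed=1):
    """Toy strain-memory fault: rupture probability rises steeply with
    accumulated strain, and each rupture releases only part of it."""
    random.seed(seed)
    strain, quake_years = 0.0, []
    for year in range(years):
        strain += rate  # steady tectonic loading
        # Chance of rupture in a given year grows with accumulated strain.
        if random.random() < (strain / threshold) ** 4 * 0.05:
            quake_years.append(year)
            # Partial release: 50-90% of the strain remains unspent,
            # leaving a "memory" that can trigger clustered quakes.
            strain *= random.uniform(0.5, 0.9)
    return quake_years

events = simulate()
gaps = [b - a for a, b in zip(events, events[1:])]
print(f"{len(events)} quakes; intervals range {min(gaps)}-{max(gaps)} years")
```

Running this produces an irregular sequence of the kind Neely describes: some intervals are short because residual strain from an incomplete release remains high, while others stretch out after a rupture that drained most of the stored strain.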
“I’ve not seen an earthquake recurrence model of this computational complexity before,” said Glenn Biasi, a USGS seismologist who was not involved in the study. “It represents clear progress for considering strain in forecasting,” he added.
Testing the Model
To test their model, the researchers applied it to the southern San Andreas Fault, which has a history of irregularly timed earthquakes. Along the Mojave section of the fault, earthquakes strike roughly every 135 years, but a closer look revealed a more offbeat schedule.
The most recent major earthquake was in 1857—just 45 years after another in 1812. Previously, however, there had been 300 years of quiet along the fault section. “That unusual pattern got us thinking—was there some memory on the fault?” Neely said.
The new model replicated the short time interval between the 1812 and 1857 earthquakes. Earlier models gave a lower probability of the two earthquakes striking so close in time.
The authors cautioned that there will never be a single best forecast for a given fault. Some faults do rupture at more consistent intervals, and the researchers found that their model didn’t replicate that pattern as well.
“The most conservative approach when considering earthquake risk and preparedness is to use a range of models in conjunction,” according to Salditch. Streig agreed: “This is a great new tool to add to our forecasting arsenal, but we shouldn’t rule out certain models—they all have their strengths depending on the fault type and data available on past seismicity.”
The U.S. National Seismic Hazard Model, which is used by government, industry, and insurers to assess risk, already uses multiple methods to consider a range of earthquake scenarios. It makes sense to include some representation of earthquake irregularities in future iterations of these forecasts, according to the study authors.