Foreseeing the exact location, time, and magnitude of an earthquake is a formidable challenge. The main reason is the absence of reliable patterns on which a forecast can be based, which ultimately leads to inaccurate predictions. Artificial intelligence (AI)-based techniques are popular for their ability to uncover hidden patterns in data, so when it comes to predicting earthquakes, AI models are supposed to produce favorable outcomes.
The key phrase here is “supposed to.” In theory, AI algorithms can foretell that an earthquake or any other natural disaster will happen. In practice, such algorithms are indeed implemented for prediction. However, there is a minor catch – not all earthquakes can be predicted.
Why is earthquake prediction significant?
Earthquakes are natural hazards resulting from the movements of tectonic plates, which release massive amounts of the earth’s internal energy. Minor earthquakes of magnitude 5.0 or less usually inflict only light damage near the epicenter. However, those with a magnitude of 6.0 or higher can not only cause severe infrastructural damage costing billions of dollars but also claim a massive death toll.
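The gap between a magnitude-5 and a magnitude-6 event is larger than the numbers suggest because the magnitude scale is logarithmic. A minimal sketch using the Gutenberg–Richter energy–magnitude relation (log10 E ≈ 1.5·M + 4.8, with E in joules); the function name is illustrative:

```python
def seismic_energy_joules(magnitude: float) -> float:
    """Approximate radiated seismic energy via the Gutenberg-Richter
    energy-magnitude relation: log10(E) = 1.5*M + 4.8 (E in joules)."""
    return 10 ** (1.5 * magnitude + 4.8)

# Each whole step in magnitude releases ~31.6x (10**1.5) more energy,
# which is why a magnitude-6 quake is far more destructive than a 5.
ratio = seismic_energy_joules(6.0) / seismic_energy_joules(5.0)
```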
Forecasting when and where an earthquake might happen could minimize the degree of destruction. A full earthquake prediction should include three types of information: the location and time of the event, as well as its magnitude. According to the statistics, there have been about 40,000 earthquakes with a magnitude of over 5.0 in the 21st century alone. By closely analyzing data such as location, it’s possible to identify specific patterns, which may give experts a chance to forecast earthquakes more accurately.
An earthquake is a natural phenomenon affected by multiple factors. The basic data that are collected include the location, the precise date and time, and the magnitude of an earthquake. Despite the progress of seismology and the technology used for gathering new data, sadly, models capable of predicting quakes with absolute certainty haven’t been developed so far. The key reasons behind this are, as already mentioned, the complexity of the phenomenon driven by the movement of tectonic plates and the numerous factors that can disturb the movement and subsidence of the earth.
Mladen Jovanović, Head of Data Science at Entel
We can classify the earthquake anticipation process into two categories: short-term and long-term. Short-term forecasting is extremely complicated since it predicts earthquakes within weeks or days before they happen. Being short-term, such predictions must be more accurate, with fewer false alarms, and are used to evacuate an area before an earthquake.
On the flip side, long-term predictions are based on the periodic recurrence of earthquakes. Due to this, they carry less information than their short-term counterparts. However, they can help set standards for building codes and inform disaster response plans.
Back in 2009, a 5.9-magnitude earthquake hit the city of L’Aquila in Italy, killing 308 citizens. The scale of the catastrophe could have been reduced if the Italian earthquake forecast commission hadn’t predicted there would be no damage whatsoever. Based on this faulty forecast, the city was not evacuated, which resulted in numerous deaths and widespread infrastructural damage. As a consequence, the scientists responsible for the forecast were sentenced to six years of imprisonment.
What do we need for earthquake prediction?
Earthquake forecast models function pretty well for low- and medium-magnitude earthquakes. For higher-magnitude earthquakes, though, the results turn out to be poor. This is precisely what causes great concern, as it’s high-magnitude earthquakes that inflict massive and irreparable damage. The reason is simple: there are far fewer high-magnitude earthquakes than weaker ones, and this scarcity of data makes prediction difficult.
The study of predictions employs historical data that include the energy, depth, location, and magnitude of past earthquakes. Based on the magnitude of completeness value, earthquake parameters specific to the area are calculated. And this is where AI models kick in.
Machine learning (ML) algorithms calculate seismic indicators – an earthquake’s energy, the time lag between events, the mean magnitude, etc. Models based on deep learning (DL), on the other hand, extract a multitude of more sophisticated features. Given that both machine and deep learning models are data-driven, and massive earthquakes occur infrequently, anticipating them from historical data can be quite a challenge.
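As a rough sketch of what computing such seismic indicators looks like, here is a minimal example over a tiny invented catalog, using Aki’s maximum-likelihood estimate of the Gutenberg–Richter b-value (the catalog values and the completeness magnitude are assumptions for illustration):

```python
import math

# Hypothetical mini-catalog: (time in days, magnitude)
catalog = [(0.0, 4.2), (3.5, 4.8), (7.1, 4.1), (12.9, 5.3), (15.2, 4.6)]
mc = 4.0  # assumed magnitude of completeness for the region

times = [t for t, _ in catalog]
mags = [m for _, m in catalog]

# Two simple seismic indicators: mean magnitude and mean inter-event time
mean_magnitude = sum(mags) / len(mags)
mean_time_lag = (times[-1] - times[0]) / (len(times) - 1)  # days per event gap

# Aki's maximum-likelihood estimate of the Gutenberg-Richter b-value
b_value = math.log10(math.e) / (mean_magnitude - mc)
```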
It’s imperative to recognize the responsibility involved in developing such a system, as well as the errors that may happen during prediction. One such case is when an earthquake is predicted for a region but nothing eventually happens; another is when the system issues no warning or detection, yet a terrible disaster occurs. By increasing the sensitivity of the system, the number of false alarms also rises. Certainly, with the advance of AI capacities and cooperation with earthquake prediction experts, we hope to see progress in this field soon.
Mladen Jovanović, Head of Data Science at Entel
One way to successfully predict earthquakes is to uncover precursors of a massive earthquake. Precursors are changes in natural elements that appear before an earthquake occurs. To illustrate, scientists suggest that unusual cloud formations, variations in the earth’s electromagnetic fields, humidity, crustal changes, soil temperature, and radon gas concentration, among others, could be potential candidate precursors.
However, it’s necessary to be cautious here, as generalizations like this can be misleading. Namely, similar precursors may appear without a hint of an earthquake. Ironically, there have also been incidents with no precursors in sight. Hence, it’s not possible to claim that such precursors are definite evidence of an impending earthquake. For this reason, involving AI-based methods is critical for anticipating earthquakes.
To evaluate an earthquake prediction method, it’s vital to use specific metrics. These include sensitivity and specificity, the false alarm rate, and accuracy, to name only a few. The catch here is that earthquake models depend on the area in which the data are gathered. For this reason, it’s necessary to establish a standard earthquake dataset against which researchers can calculate the evaluation metrics and then compare models with previous studies.
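These metrics all derive from the familiar confusion matrix of hits, misses, false alarms, and correct quiet periods. A minimal sketch (the counts below are invented for illustration):

```python
def forecast_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Confusion-matrix metrics commonly used to compare forecast models.
    tp = quakes correctly predicted, fn = quakes missed,
    fp = alarms with no quake, tn = quiet periods correctly called."""
    return {
        "sensitivity": tp / (tp + fn),        # fraction of quakes caught
        "specificity": tn / (tn + fp),        # fraction of quiet periods called right
        "false_alarm_rate": fp / (fp + tn),   # alarms raised with no quake
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

m = forecast_metrics(tp=8, fp=5, tn=80, fn=7)
```

Note the trade-off mentioned above: tuning a model to raise sensitivity (catch more quakes) generally pushes the false alarm rate up as well.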
Artificial intelligence algorithms
Artificial intelligence is a vast concept, much broader than the applications we laymen are familiar with: chatbots, ChatGPT, NLP, NLU, etc. The entire notion of AI encompasses a whole range of algorithms, such as:
- Rule-based approaches, including fuzzy logic and fuzzy neural network;
- Shallow machine learning, involving support vector machine (SVM), support vector regression (SVR), k-nearest neighbor (KNN) algorithm, random forest (RF) algorithm, decision-tree algorithm, k-means clustering, hierarchical clustering, artificial neural network (ANN), radial basis function neural network (RBFNN), and probabilistic neural network (PNN);
- Deep machine learning, encompassing deep neural network (DNN), recurrent neural network (RNN), and long short-term memory (LSTM).
Without going into much detail about how these algorithms work, each of them gives specific results when it comes to predicting quakes. Yet, with problems like poor data collection practices and the resulting data scarcity, the absence of patterns, and the variable performance of the same model in different geological settings, earthquake research is extremely challenging. Consequently, these issues can and do affect a model’s performance.
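As a rough illustration of the shallow ML family listed above, here is a minimal from-scratch k-nearest neighbor (KNN) classifier over hypothetical seismic-indicator features; the feature values and labels are invented for the example:

```python
from collections import Counter

def knn_predict(train_X, train_y, x, k=3):
    """Classify feature vector x by majority vote of its k nearest
    training points (squared Euclidean distance)."""
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(xi, x)), yi)
        for xi, yi in zip(train_X, train_y)
    )
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Toy features: (b-value, mean magnitude); label 1 = major quake followed
train_X = [(0.70, 4.5), (0.72, 4.4), (1.10, 3.8), (1.05, 3.9), (0.68, 4.7)]
train_y = [1, 1, 0, 0, 1]
pred = knn_predict(train_X, train_y, (0.71, 4.6))  # nearest neighbors all vote 1
```

A real study would of course use far richer feature sets and a proper train/test split; this only shows the mechanics of one algorithm from the list.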
Limitations of AI in earthquake forecasting
As already mentioned, forecasting earthquakes with magnitudes below 5 is not a big deal. However, earthquakes exceeding a magnitude of 6 are a challenge, mainly because they happen infrequently. Due to the lack of sufficient data, the AI-based models function poorly when an earthquake with a magnitude over 6 occurs.
To make them work, the model would have to be trained specifically on earthquake events with a magnitude larger than 6. Again, we end up in a vicious circle – it’s impossible to feed the model with such data, as a sufficient amount hasn’t been generated yet. Since earthquakes rarely happen in predictable patterns, it is often difficult to accurately anticipate when they will take place. The typical range of inaccuracy for long-term earthquake time prediction is 20 days to 5 months.
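One common, if crude, mitigation for this scarcity of large events is to resample the rare class so the model sees it more often during training. A minimal sketch, assuming a toy dataset where label 1 marks the rare magnitude-6+ events (more refined approaches would use class weights or SMOTE-style synthetic sampling):

```python
import random

def oversample_minority(samples, labels, seed=0):
    """Duplicate rare-class examples at random until all classes
    reach the size of the largest class."""
    rng = random.Random(seed)
    by_label = {}
    for x, y in zip(samples, labels):
        by_label.setdefault(y, []).append(x)
    target = max(len(v) for v in by_label.values())
    out_x, out_y = [], []
    for y, xs in by_label.items():
        picks = xs + [rng.choice(xs) for _ in range(target - len(xs))]
        out_x.extend(picks)
        out_y.extend([y] * target)
    return out_x, out_y

# 98 "minor" events vs only 2 "major" ones -> balanced to 98 of each
X, y = oversample_minority(list(range(100)), [0] * 98 + [1] * 2)
```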
The area of the earth’s surface directly above an earthquake’s point of origin is called the epicenter. This is the point that undergoes the greatest shaking and thus the most severe damage. But when it comes to forecasting the epicenter, there is typically a prediction error of about 70 miles. Consequently, it is quite difficult to anticipate where an earthquake will occur.
So far, it has been acknowledged that artificial intelligence can predict minor earthquakes. But this has never been the goal of earthquake forecasting. Instead, it is to anticipate sudden, calamitous quakes that are a threat to life and limb. For AI, this is an apparent paradox – the most destructive earthquakes, the ones scientists and seismologists would most love to foresee, are the rarest. Again, we come to the question of how AI algorithms can – and whether they will ever – obtain enough training data to predict such earthquakes successfully.
But is gathering such data necessary? The scientists at the Los Alamos laboratory believe it’s not. Namely, some studies indicate that the seismic patterns preceding minor earthquakes are statistically similar to those preceding their greater counterparts. And on a single fault, it’s only a matter of time before tens of smaller earthquakes occur.
A machine that is trained on myriads of such small quakes could be versatile enough to foretell the major ones. Machine learning algorithms could also be trained on fast simulations of earthquakes, which could serve as proxies for actual data.
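One way to generate such synthetic training data is to sample magnitudes from the Gutenberg–Richter frequency–magnitude distribution via inverse transform sampling. A minimal sketch; the b-value and completeness magnitude below are illustrative assumptions, not values from the Los Alamos work:

```python
import math
import random

def synthetic_magnitudes(n, b=1.0, mc=3.0, seed=42):
    """Draw n magnitudes from the Gutenberg-Richter distribution,
    P(M > m) = 10**(-b * (m - mc)) for m >= mc, by inverse transform
    sampling of uniform random numbers."""
    rng = random.Random(seed)
    return [mc - math.log10(1.0 - rng.random()) / b for _ in range(n)]

mags = synthetic_magnitudes(10_000)
# With b = 1, roughly 10% of sampled events exceed mc + 1, and ~1%
# exceed mc + 2 - mirroring the rarity of large quakes in real catalogs.
```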
That bloody randomness
Even if it becomes possible to predict catastrophic earthquakes based on data collected from small ones, researchers will encounter another harsh truth. The physical processes driving a fault to the verge of an earthquake might be predictable. However, the actual trigger of a quake – the growth of tiny seismic tremors into a full fault rupture – still, as scientists believe, contains some element of randomness. This implies that, no matter how well a machine is trained, it may never anticipate earthquakes the way other natural hazards are forecast.
Ideally, predictions of major earthquakes will come with lead times of weeks, months, or even years. Such predictions aren’t likely to be used to organize a mass evacuation the night before a quake. Nevertheless, they could increase the level of preparedness in communities, allow officials to retrofit unsafe buildings, and thus alleviate the hazards of cataclysmic earthquakes.
Ultimately, this is precisely the objective of earthquake prediction – to issue a warning about potentially damaging quakes early enough to enable a proper response and minimize death tolls, casualties, and property damage. Mladen Jovanović, Head of Data Science at Entel, believes that with the advance of AI capacities and cooperation with earthquake prediction experts, this could become possible in the near future.