
Earth science modelling: the disaster data dilemma
We have the data. We have the technology. We have the expertise. Why is it still so difficult to forecast the future of our planet? We look at the benefits – and shortcomings – of Earth science modelling.
When a 15m wave struck reactors at Fukushima’s nuclear plant in 2011, three workers lost their lives. In the years that followed, the death toll rose to 1,650. Almost a quarter of a million people fled their homes, 120,000 buildings were destroyed and a 19-mile radiation zone was enforced – all at a cost of 11 trillion yen (£77bn).
A cost that could have been avoided for a disaster that should never have happened. Since the first rudimentary calculations were made some eight decades ago, climate and natural hazard modelling – both of which fall under the umbrella of Earth science – have become big business. Beyond being used by environmental policy-makers to establish the rate at which greenhouse gases are rising, for example, or how ocean salinity is changing, modelling Earth events and climate change is now a substantial area of corporate risk management. Businesses of all sizes need to know where to source products, how to secure their IT systems from environmental damage, and where to site factories and power plants.
It’s such a big business that the climate services market alone is estimated to be worth $24bn (£17bn) globally and rising fast.
During the 1960s, when the Tokyo Electric Power Company (Tepco) was building Fukushima’s Daiichi and Daini reactors, its engineers used the impact of a tsunami in Chile to model the new plants’ design. The Chilean tsunami, which hit on 22 May 1960, was caused by an earthquake in the region of Valdivia. It was the strongest earthquake ever recorded, at a magnitude of 9.4-9.6, and was so powerful that Japan was struck by 10m-high waves despite being more than 17,000km away. The Daiichi plant was therefore built 10m above sea level; the Daini positioned at 13m, with seawater intake pumps at both sites located around 4m high.
The maximum height of the 2011 tsunami isn’t known because the sea-level gauge at the plant was destroyed. Tepco and the Japan Society of Civil Engineers estimate it approached land at a height of 13.1m before reaching a staggering 14-17m above sea level as it surged up the slope towards the plant: more than 40 per cent higher than the waves created by the Chilean tsunami.
Yet there was data to suggest such a disaster was a near certainty. Advances in tsunami and earthquake modelling made years before the Fukushima disaster had placed the probability of a magnitude 7.5 or greater earthquake hitting the region at 99 per cent – the 2011 earthquake registered 9.1. In 2008, simulations run by in-house engineers warned that Daiichi was at risk of being hit by a monster wave in the near future.
Mark Willacy, author of ‘Fukushima’, explains: “Using simulations, [Tepco engineers] calculated that a tsunami as high as 15.7m could slam into the Fukushima Daiichi Nuclear Plant. But Tepco’s top brass, including the deputy head of the nuclear division, Sakae Muto, shelved the findings.”
Willacy suggests that, far from acting duplicitously, Tepco management questioned the accuracy of the simulations and believed the risk to be an outlier rather than a given.
James Acton, co-director of the nuclear policy programme at the Carnegie Endowment for International Peace, notes: “This significant underestimation, in spite of Japan’s considerable investments in seismology, is a sobering warning against overconfidence in hazard prediction. In recent years, threats due to natural causes have been augmented by threats from sabotage and terrorism. In the future, they will include local threats resulting from global climate change.”
Almost a decade on from Fukushima, the technology used to determine such threats, particularly on a global level, has advanced significantly. America’s National Oceanic and Atmospheric Administration combines pressure recorders on the ocean floor with seismic parameters and data from pre-computed simulations to predict the height of tsunami waves and their expected time of impact on land. Officials can access the results in real time.
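A toy sketch gives a flavour of the general idea – not NOAA’s actual system, whose buoy network and forecast software are far more sophisticated. Here, an invented seismic magnitude estimate and a deep-ocean pressure reading are matched against a small database of pre-computed scenarios; every number is a placeholder for illustration only.

```python
# Hypothetical illustration of matching a seismic source estimate and a
# deep-ocean pressure reading against pre-computed tsunami scenarios.
# All values are invented; this is not NOAA's real forecasting pipeline.
import numpy as np

# Scenario database: (magnitude, peak deep-ocean amplitude in m,
# predicted coastal wave height in m, travel time to shore in minutes)
scenarios = np.array([
    (8.0, 0.12, 2.5, 35.0),
    (8.5, 0.35, 5.0, 33.0),
    (9.0, 0.80, 9.5, 30.0),
    (9.5, 1.60, 15.0, 28.0),
])

def forecast(magnitude: float, observed_amplitude: float):
    """Return (coastal height, arrival time) of the closest scenario."""
    mags, amps = scenarios[:, 0], scenarios[:, 1]
    # Normalise both observables so neither dominates the distance metric.
    d = ((mags - magnitude) / np.ptp(mags)) ** 2 + \
        ((amps - observed_amplitude) / np.ptp(amps)) ** 2
    best = scenarios[np.argmin(d)]
    return best[2], best[3]

height, eta = forecast(magnitude=9.1, observed_amplitude=0.9)
print(f"Forecast coastal wave height ~{height} m, arrival in ~{eta} min")
```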
Nasa’s Earth Science Division (ESD) now uses observations from satellites, instruments on the International Space Station, planes, balloons, ships and ground sensors to collect data about land cover and vegetation, ocean currents, ice levels and more. ESD can measure rainfall in remote regions, track dust storms and even plot the movement of mosquitoes.
Google’s Earth Engine, which similarly combines vast amounts of data from satellites and sensors in a public archive, has helped scientists in the Global Health Group at the University of California, San Francisco to predict malaria outbreaks. This archive was more recently used with data from the European Space Agency’s Sentinel-5P satellite to map the movement of people, and the drop in pollution resulting from coronavirus restrictions.
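For a sense of what such a query looks like, the sketch below uses the Earth Engine Python API to average a month of Sentinel-5P NO2 readings over a single city. It assumes an authenticated Earth Engine account and the public Sentinel-5P NO2 collection and band name from the Earth Engine data catalogue; the dates, location and buffer are arbitrary examples rather than the parameters used in the studies above.

```python
# Minimal sketch: monthly mean NO2 over a city using Google Earth Engine.
# Assumes an authenticated account; collection/band names follow the public
# Sentinel-5P NO2 entry in the Earth Engine catalogue. Dates and location
# are arbitrary examples.
import ee

ee.Initialize()

no2 = (ee.ImageCollection('COPERNICUS/S5P/OFFL/L3_NO2')
       .select('NO2_column_number_density')
       .filterDate('2020-03-01', '2020-04-01')
       .mean())  # mean column NO2 for the month

# Average the result over a ~20km buffer around central London.
region = ee.Geometry.Point([-0.12, 51.51]).buffer(20000)
value = no2.reduceRegion(reducer=ee.Reducer.mean(),
                         geometry=region,
                         scale=1000).getInfo()  # sampling scale in metres
print(value)  # mol/m^2, ready to compare against a pre-lockdown baseline
```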
Earth science modelling can even be used to assess the probability of conflict. For example, a 2014 study suggested that the 2011 civil war in Syria was fuelled in part by droughts and the lack of fresh water having an impact on the region’s economy, which led to unrest.
This vast increase in datasets has been matched by advances in computing power, and AI and machine learning are regularly cited as a silver bullet for solving all of Earth’s wicked problems. Yet the market remains fragmented and flawed.
Unlike weather forecasts, which draw on readings from a vast network of sensors to describe in detail how conditions are set to change over the coming minutes, hours, days or months, climate and natural hazard models are largely probabilistic and operate at a broader, higher level.
Based on global patterns, these forecasting models give the likelihood of an area being generally warmer, cooler, wetter or drier in the near and far future. They predict the probability of a tsunami striking near a Japanese power plant, for instance, or estimate likely earthquake magnitudes statistically by comparing what a region is like today with how it has fared historically, to varying degrees of accuracy.
We say with varying degrees of accuracy because the size of the dataset can affect the outcome, the branch of Earth science being applied can shift the dial, the relevance of the historical records can limit the results, and the motivation behind the analysis – perhaps political or financial – can skew the conclusion. Plus, given that many of these systems apply a global model to local changes, probabilities for the same event can fluctuate wildly. There isn’t an all-encompassing database that accounts for all eventualities.
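As a simple illustration of that statistical style of reasoning – not any agency’s actual method – the probability of at least one large earthquake within a planning horizon can be estimated from a historical catalogue using a basic Poisson recurrence model, and the answer is only as good as the length and completeness of the record. The event counts below are invented.

```python
# Toy probabilistic hazard estimate: annual rate from a historical catalogue,
# converted to the probability of at least one event in a planning horizon
# under a simple Poisson (memoryless) model. Counts and windows are invented.
import math

def exceedance_probability(n_events: int, record_years: float,
                           horizon_years: float) -> float:
    """P(at least one event in the horizon) under a Poisson model."""
    annual_rate = n_events / record_years          # events per year
    return 1.0 - math.exp(-annual_rate * horizon_years)

# Example: 4 magnitude-7.5+ earthquakes in a 400-year regional record,
# assessed over a 50-year plant lifetime.
p = exceedance_probability(n_events=4, record_years=400, horizon_years=50)
print(f"Probability of at least one M7.5+ event in 50 years: {p:.0%}")
```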
To demonstrate this variation, a recent report, ‘High resolution climate-hydrology scenarios for San Francisco’s Bay Area’, analysed 18 different climate models simulating how the Bay Area’s climate will change by 2077. Projected temperature increases ranged from 1°C to 6°C, while precipitation projections ranged as wildly as a 20 per cent decrease at one extreme to a 40 per cent increase at the other, depending on which model was used.
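In code terms, the ‘which model?’ problem is simply the spread of an ensemble; the short sketch below summarises a set of hypothetical projections (placeholder numbers, not the Bay Area report’s values).

```python
# Summarise the spread of an ensemble of climate projections.
# The values are invented placeholders, not taken from the report.
import statistics

# Hypothetical end-of-century temperature changes from an 18-model ensemble.
temp_change_c = [1.1, 1.6, 2.0, 2.3, 2.7, 3.0, 3.2, 3.5, 3.8,
                 4.0, 4.2, 4.5, 4.8, 5.0, 5.3, 5.6, 5.8, 6.0]

low, high = min(temp_change_c), max(temp_change_c)
mean = statistics.mean(temp_change_c)
print(f"Warming projections span {low}°C to {high}°C "
      f"(ensemble mean {mean:.1f}°C, spread {high - low:.1f}°C)")
```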
Iggy Bassi, co-founder and CEO of climate detection platform Cervest, attributes this to a disconnect between the disciplines involved. “Our changing climate and the increasing frequency of extreme events is redefining the way businesses, governments and land managers operate – both tactically and strategically. Many companies and organisations see this as an ‘engineering challenge’ rather than a ‘scientific challenge’. While large sets of data and scientific disciplines have some answers, their complexity is too vast for humans to decode alone. Yet placing the emphasis purely on AI won’t work. In reality, the most successful approach will be one that simultaneously ties together the interconnected elements that contribute to our planet’s increasingly volatile climate.”
Combining satellite imagery with probability theory and human expertise, Cervest uses machine learning to pool information from multiple datasets drawn from different scientific and operational disciplines. It then uses algorithms, coupled with a team of experts from more than 30 universities, to extract ‘signals’ – early-warning signs of extreme events such as floods, fires and strong winds. These signals can even extend to nuanced differences in soil health between two fields, or to water risk. All of these signals are combined with financial data predictions to produce models detailed enough to make recommendations for decision-makers.
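Cervest’s own methods are proprietary, but one generic way to extract such a ‘signal’ is to flag readings that drift well outside a rolling baseline – a simple z-score anomaly test, sketched below on synthetic soil-moisture data.

```python
# Generic early-warning 'signal' extraction via a rolling z-score anomaly
# test. This is a sketch on synthetic data, not Cervest's actual approach.
import numpy as np

rng = np.random.default_rng(seed=42)
soil_moisture = rng.normal(loc=0.30, scale=0.02, size=365)  # daily readings
soil_moisture[300:] -= 0.08                                 # injected drought-like drift

def anomaly_signal(series: np.ndarray, window: int = 60,
                   threshold: float = 3.0) -> list[int]:
    """Return indices where a reading sits `threshold` standard deviations
    below its trailing `window`-day baseline."""
    flags = []
    for i in range(window, len(series)):
        baseline = series[i - window:i]
        z = (series[i] - baseline.mean()) / baseline.std()
        if z < -threshold:
            flags.append(i)
    return flags

print("First anomaly flagged on day:", anomaly_signal(soil_moisture)[0])
```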
These models could, in theory, reveal to a multinational the best place to build a new factory, or warn a leading wheat grower that next year’s crop yield isn’t expected to meet its targets, helping it to better manage its reserves now.
Cervest isn’t the only platform taking this approach. Descartes Labs, based in New Mexico, is using data from global sensors and satellite images to produce a ‘digital twin’ of Earth on which to run various simulations.
Both of these approaches have the potential to come into their own in the wake of the Covid-19 outbreak. As soon as the pandemic took hold, scientists from various disciplines and countries joined forces in ways that hadn’t been seen before, united against a common enemy that acts on a global scale.
We’ve developed the ability to model Earth science events, natural hazards, and climate change, but for these models to be truly useful, there must be greater collaboration, Bassi believes. “Events [last] year have highlighted the importance of preparation, the ability to pivot strategies and adequately assess risk. If we have learnt anything from Covid-19 it’s that inherently reactive practices and regulations have no value in 2020 and beyond. There’s nothing sustainable about waiting for a disaster to hit.”