Don’t put all your eggs in one basket – a sensible and scientifically sound approach to using machine learning and physical models to project climate risks

by Dr Claire Burke & Kamil Kluza

The buzz

Technological advances can be seen in every corner of our lives. The climate intelligence world is no different, with Artificial Intelligence (AI) and machine learning (ML) making a splash on front pages. In reality, AI and ML cover a wide range of computational techniques and algorithms – artificial neural networks being a particularly powerful variety, with everyday uses such as facial recognition, interpreting voice commands, and recommending the next movie to watch online. But as many of us have experienced, a neural network doesn’t always perform how we expect or want, with unintended consequences such as the well-publicised ‘racist’ and ‘sexist’ algorithms. These seemingly foreseeable failures often go unforeseen because of the ‘black-box’ nature of neural networks: they can learn to solve complex problems, but it is impossible to say what a network has actually learned to do, or even to explain why it works. In the context of projecting climate risks, a ‘black box’ could suggest life-saving solutions, or completely fail to recognise an impending catastrophe – and we wouldn’t know which until it was too late.

Whilst AI can help us project climate risks and disentangle complicated physical interactions, AI-related methods alone are not sufficient for a reliable and meaningful understanding of the physical risks from climate change. Scientists have historically used physical climate models to understand how our planet works – including projecting physical risks from climate change. If we understand the strengths and limitations of both approaches, we can combine them in a sensible and scientifically sound way to get the best possible understanding of the current and future physical risks of climate change.

Physical modelling – the digital twin

A physical model is like a digital twin of the Earth or a specific part of the Earth we’re interested in. The model is created with as much information about the planet as possible, and then the laws of physics are applied to simulate what will happen under different conditions. For example, a simple physical flood model could include data representing the physical characteristics of an area such as topography, river flow, surface composition (e.g. grassy fields vs concrete) and rainfall data. We then simulate how and where water will move, allowing us to project flooding severity and frequency. A full climate model will contain vast amounts of data on the current state of the planet’s surface, the oceans and the atmosphere, allowing us to project the likelihoods of almost any natural hazard – both today and in the future, under different carbon emission scenarios.
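
To make this concrete, here is a minimal, deliberately simplified sketch of the idea in Python. It uses a toy ‘bathtub’ approach – find the flat water surface that stores a given volume of rainfall over an elevation profile – and everything in it (the elevation numbers, cell sizes and rainfall volume) is invented for illustration. A real flood model would also route flow over time and account for surface composition, infiltration and river inflow.

```python
# A toy, illustrative 'bathtub' flood model: given a 1-D elevation
# profile and a volume of rainfall, find the flat water surface level
# that stores exactly that volume, then report which cells are flooded.
# All numbers here are invented; this is a sketch, not a real model.
import numpy as np

def flood_depths(elevation_m, rain_volume_m3, cell_area_m2):
    """Solve for the water level by bisection; return per-cell depth."""
    def stored(level):
        return np.sum(np.maximum(level - elevation_m, 0.0)) * cell_area_m2
    lo = elevation_m.min()
    hi = elevation_m.max() + rain_volume_m3 / cell_area_m2
    for _ in range(60):  # bisect until the level converges
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if stored(mid) < rain_volume_m3 else (lo, mid)
    return np.maximum(0.5 * (lo + hi) - elevation_m, 0.0)

# A small valley cross-section (metres above datum), 100 m x 100 m cells
elevation = np.array([5.0, 3.0, 1.0, 0.5, 1.5, 4.0, 6.0])
depths = flood_depths(elevation, rain_volume_m3=50_000, cell_area_m2=10_000)
print("Water depth per cell (m):", np.round(depths, 2))
print("Flooded cells:", np.where(depths > 0)[0])
```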

As digital twin models are complex, they require specialist knowledge to create, interpret and use. Despite this complexity, it’s simple to say how well a physical model performs – just compare its output with the real world. Once validated, you can be confident of the ability of your model in different scenarios. The model can also be updated to reflect any changes in the real world. For example, we can understand how changes in land use, such as a new housing development, could influence flooding in the area. Ideally, we would always use this approach – it’s scientifically sound and the output is always explainable. However, there are challenges to building robust and accurate physical models: they are computationally expensive and require detailed knowledge of the area. If we have very limited data about conditions on the ground in a region, we may not be able to model that region as accurately. Sometimes different physical elements of our planet interact in complex ways, and if our representation of the landscape or the physics isn’t ‘just right’, the model won’t accurately reproduce the real world.
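
As an illustration of what ‘compare its output with the real world’ can look like in practice, the sketch below scores a modelled flood footprint against an observed one using the Critical Success Index, one common skill score for flood extents. Both footprints here are made up for the example.

```python
# A minimal validation sketch: compare a modelled flood footprint with
# an observed one and score the overlap. The 'observed' mask is
# invented purely for illustration.
import numpy as np

modelled = np.array([0, 1, 1, 1, 0, 0, 0], dtype=bool)  # model says flooded
observed = np.array([0, 1, 1, 0, 1, 0, 0], dtype=bool)  # what actually happened

hits = np.sum(modelled & observed)           # flooded in both
false_alarms = np.sum(modelled & ~observed)  # model flooded, reality dry
misses = np.sum(~modelled & observed)        # model dry, reality flooded

csi = hits / (hits + false_alarms + misses)  # Critical Success Index
print(f"CSI = {csi:.2f}  (1.0 = perfect agreement)")
```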

Machine learning – the statistician’s hammer

Rather than creating a ‘digital twin’, ML ingests masses of data to ‘learn’ complex relationships between inputs (e.g. rainfall and landscape shape) and outputs (e.g. flood extent and severity). This produces a predictive model with minimal human input, and requires no physics or climate science expertise to set up or interpret. ML is particularly useful for identifying complex dependencies within a myriad of data – but it’s entirely reliant on the quality of the data used to train it. Without intensive re-sampling and validation of the data and outputs, one cannot say whether a trained ML model truly describes the relationships between the variables we want to understand, or whether it’s modelling outliers in the data instead.
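
The sketch below illustrates this pattern under invented assumptions: a random forest is fitted to synthetic rainfall and slope data and left to discover the hidden relationship on its own, with no physics encoded anywhere. The data-generating rule is ours, purely for illustration.

```python
# A minimal sketch of the ML route: fit a model that maps inputs
# (synthetic rainfall and terrain slope) to an output (flood depth).
# The 'true' relationship below is invented; in practice the training
# data would be real observations, and its quality decides everything.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
rainfall = rng.uniform(0, 100, 500)  # mm/day
slope = rng.uniform(0, 10, 500)      # degrees
# Hidden relationship the model must discover from data alone
flood_depth = 0.02 * rainfall / (1 + slope) + rng.normal(0, 0.05, 500)

X = np.column_stack([rainfall, slope])
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, flood_depth)
print("Predicted depth for 80 mm rain on near-flat ground:",
      model.predict([[80.0, 0.5]])[0])
```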

Whilst it’s relatively straightforward to assess how good a physical model is, the same isn’t true for ML. ML models often suffer from the curse of overfitting – performing very well in one region but being so specific to it that they don’t transfer to any other location. As ML isn’t based on physical laws, and its inner workings cannot be probed for an explanation, there’s no guarantee that it will produce correct or even physically meaningful outputs; and if a model is overfitted to its training region, its projections for a different region could simply be wrong.
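
A minimal sketch of how this failure mode shows up, using synthetic data: a model trained in one ‘region’ scores well on held-out data from that region, then poorly on a second region where the rainfall–flood relationship differs (say, more concrete). Both regions and their relationships are invented for the example.

```python
# Why in-region scores can mislead: train on 'region A', then test both
# in-region and on a 'region B' with a different rainfall-flood
# relationship. All data here is synthetic, for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)

def make_region(coeff, n=400):
    """Synthetic region: flood depth responds linearly to rainfall."""
    rain = rng.uniform(0, 100, n)
    depth = coeff * rain + rng.normal(0, 0.2, n)
    return rain.reshape(-1, 1), depth

X_a, y_a = make_region(0.020)  # region A: one rainfall-flood relationship
X_b, y_b = make_region(0.035)  # region B: a different one (more runoff)

# Train on region A only, then evaluate in- and out-of-region
model = RandomForestRegressor(random_state=1).fit(X_a[:300], y_a[:300])
print("R^2 on held-out region A:",
      round(r2_score(y_a[300:], model.predict(X_a[300:])), 2))
print("R^2 on unseen region B: ",
      round(r2_score(y_b, model.predict(X_b)), 2))
```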

Black box vs glass box

If the physical system is too complex to represent explicitly, or there isn’t enough data about conditions on the ground to build a digital twin, ML is a sensible approach. In the climate analytics world, ML can be sensibly applied to fill gaps in observational data or, if applied very carefully, to relate large-scale weather systems to street-level impacts. In most European countries, and places like the USA, there is an abundance of real-world data with which to build a high-quality digital twin for modelling climate-related physical hazards.
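
As a sketch of the gap-filling use case, assuming (purely for illustration) three synthetic rain gauges that share a regional signal: a simple regression learns to estimate a station’s missing readings from its neighbours.

```python
# ML gap-filling sketch: estimate missing readings at one weather
# station from correlated readings at neighbouring stations. The
# station data and correlations are invented for the example.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
regional = rng.gamma(2.0, 5.0, 365)           # shared regional rainfall signal
station_a = regional + rng.normal(0, 1, 365)  # neighbouring gauge 1
station_b = regional + rng.normal(0, 1, 365)  # neighbouring gauge 2
target = regional + rng.normal(0, 1, 365)     # gauge with a data gap

missing = np.zeros(365, dtype=bool)
missing[100:130] = True                       # a 30-day outage

# Fit on days where all gauges report, then fill the gap
X = np.column_stack([station_a, station_b])
model = LinearRegression().fit(X[~missing], target[~missing])
filled = target.copy()
filled[missing] = model.predict(X[missing])
print("Mean abs error over the filled gap:",
      round(np.mean(np.abs(filled[missing] - target[missing])), 2))
```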

Physical models are more intuitive to understand as they’re based on real-world physical relationships, e.g. if river flow increases by X, the probability of flooding in this region will rise by Y. The same cannot be said of ML, where the final result cannot be attributed to any particular input variable. ML can also only reproduce relationships that are present in the data used for training. This data is usually observations of what has occurred in the past, e.g. previous floods, rainfall events or land-use conditions – as such, ML alone cannot project what will happen in the future under different conditions. ML comes into its own when used to enhance physical modelling, where data is scarce or where building a physical model is far too complex.
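
One common hybrid pattern – and only one of several – is to let the physical model do the heavy lifting and train an ML model on its residuals against observations, a form of bias correction. The sketch below illustrates that workflow on synthetic data; it is not a description of any particular product’s method.

```python
# Hybrid sketch: keep the physics, and use ML only to learn the gap
# between the physical model's output and observations (bias
# correction). Everything here is synthetic and purely illustrative.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(3)
rain = rng.uniform(0, 100, 600)
physical = 0.02 * rain                         # physical model's estimate
observed = (0.02 * rain + 0.3 * np.sin(rain / 10)
            + rng.normal(0, 0.05, 600))        # reality: an extra local effect

# Learn only the physics-vs-observation residual, not the whole system
residual_model = GradientBoostingRegressor(random_state=3)
residual_model.fit(rain.reshape(-1, 1), observed - physical)

def hybrid_predict(r):
    """Physical estimate plus the learned correction."""
    return 0.02 * r + residual_model.predict(np.array([[r]]))[0]

print("Hybrid estimate at 55 mm rain:", round(hybrid_predict(55.0), 3))
```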

Best of both worlds

When faced with the reality of projecting physical risk, neither method alone will provide satisfactory answers for all industries. For regulatory and policy-driven applications, a highly explainable ‘glass box’ approach is more desirable. Physical models enhanced with ML can satisfy both regulators, who require transparency, and users, who need detailed information at asset-level scales.

About Dr Claire Burke

Following a PhD in astrophysics, Claire turned her gaze Earthwards to bring cutting-edge physics, big data and machine learning techniques to solving real-world problems. Claire has worked on climate change at the Met Office and been a conservation technology consultant for WWF and National Geographic.

She came from outer space to save the world from climate change!

About Kamil Kluza

An econometric wiz of the unquantifiable, with 15 years’ experience in stress testing, loss and valuation modelling at tier 1 global institutions. Kamil’s speciality is applying the latest tech and innovation to solve complex problems. He’s also no stranger to backpacking, sailing and a good bottle of red – or a combination of all three!

Enquiries

For enquiries, contact our Commercial Strategy Manager, Alexandre Crépault:

Email: [email protected]