Catastrophe models: The good, the bad and the ugly

Source(s): Ambiental Technical Solutions Ltd

By Matthew Farnham

The catastrophe modelling community is set to convene at the Oasis LMF conference in London on 13-14 September, with the event strapline being “The good, the bad and the ugly”. Ambiental’s flood catastrophe modelling experts will be attending, so in keeping with that theme this article discusses the main benefits (the good), the known limitations (the bad) and the unknown uncertainty (the ugly) associated with catastrophe models.

Catastrophe models are primarily used in the (re)insurance industry to derive information about potential losses to their exposed assets. Their outputs are expressed as statistical curves designed to answer questions about how much the company should hold in financial reserves each year and the level of reinsurance necessary to remain solvent. They can also help optimise portfolios and support better decisions in line with strategic and operational objectives.
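To make the idea of these statistical curves concrete, the sketch below shows, under simplified assumptions, how a set of simulated annual losses can be turned into an average annual loss (AAL) and an empirical loss exceedance view at a few return periods. The lognormal loss distribution and all figures are invented purely for illustration and are not taken from any Ambiental model.

```python
import numpy as np

def exceedance_curve(annual_losses, return_periods=(10, 50, 100, 250)):
    """Build a simple aggregate exceedance view from simulated annual losses
    (one total loss per simulated year)."""
    losses = np.sort(np.asarray(annual_losses))[::-1]   # largest first
    n_years = len(losses)
    aal = losses.mean()                                  # average annual loss
    curve = {}
    for rp in return_periods:
        rank = int(np.ceil(n_years / rp)) - 1            # empirical quantile
        curve[rp] = losses[rank] if rank < n_years else 0.0
    return aal, curve

# Illustrative only: 10,000 simulated years of portfolio losses
rng = np.random.default_rng(0)
annual_losses = rng.lognormal(mean=13, sigma=1.2, size=10_000)
aal, curve = exceedance_curve(annual_losses)
print(f"AAL: {aal:,.0f}")
for rp, loss in curve.items():
    print(f"1-in-{rp} year loss: {loss:,.0f}")
```

In practice the curve is read off at the return periods that matter for reserving and reinsurance purchase, which is how these outputs feed the solvency questions described above.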

The good

A catastrophe model is a computerised system that generates a robust set of simulated events. For each simulated event it estimates the magnitude, intensity and location in order to determine the amount of damage, before calculating the resulting insured loss. Catastrophe models are currently the best available method for visualising catastrophic risk. They have been developed over decades by geospatially combining the components of hazard, vulnerability, exposure and loss. The financial (or loss) component is undoubtedly of the most use to insurers, providing them with valuable insights on which important decisions can be made.
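As a rough illustration of that hazard-vulnerability-exposure-loss chain, the sketch below runs a single hypothetical flood event footprint through an invented depth-damage curve and simple financial terms. The depths, damage ratios and policy terms are assumptions for illustration only, not FloodCat parameters.

```python
import numpy as np

# Hypothetical depth-damage (vulnerability) curve: flood depth (m) -> damage ratio
DEPTHS = np.array([0.0, 0.3, 0.6, 1.0, 2.0, 3.0])
DAMAGE_RATIOS = np.array([0.0, 0.10, 0.25, 0.40, 0.65, 0.80])

def ground_up_loss(depth_m, replacement_value):
    """Hazard intensity -> damage ratio -> ground-up loss for one property."""
    damage_ratio = np.interp(depth_m, DEPTHS, DAMAGE_RATIOS)
    return damage_ratio * replacement_value

def insured_loss(gu_loss, deductible, limit):
    """Apply simple financial terms to turn ground-up loss into insured loss."""
    return min(max(gu_loss - deductible, 0.0), limit)

# One simulated event footprint: flood depth at each insured property
portfolio = [
    {"id": "prop_1", "depth_m": 0.45, "value": 350_000, "deductible": 1_000, "limit": 300_000},
    {"id": "prop_2", "depth_m": 1.20, "value": 500_000, "deductible": 2_500, "limit": 450_000},
]
event_loss = 0.0
for prop in portfolio:
    gu = ground_up_loss(prop["depth_m"], prop["value"])
    event_loss += insured_loss(gu, prop["deductible"], prop["limit"])
print(f"Insured loss for this event: {event_loss:,.0f}")
```

Repeating this calculation across the whole simulated event set is what produces the loss curves discussed above.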

The good derived from catastrophe models cannot be overstated. These models will ultimately shape the society of tomorrow through rationally derived insurance premiums based on the best available science for each peril. The hope is that, with access to better and more reliable information, decision-makers in a range of industries (building developers, entrepreneurs, government bodies, urban planners and so on) will be able to make informed decisions that help protect societies, mitigate impacts, reduce risk and increase the resilience of communities against perils which would otherwise prove catastrophic.

Ambiental is harnessing the power of catastrophe modelling with its innovative FloodCat™, which analyses flood risk at property-level geographic resolution. FloodCat uses a unique proprietary probabilistic approach to generating event footprints, whereby weather events are linked to deterministically generated flood maps. This approach is favoured because of Ambiental’s proven accuracy in hazard mapping, and it ensures consistency with Ambiental’s flood map products. FloodCat has been validated against historic events and claims, an established method of validating a catastrophe model. Furthermore, various tests and studies have been undertaken to check the model for event-set stability and parameter sensitivity.

The bad

Whilst catastrophe models provide valuable insights that allow decision-makers to analyse risk and loss more effectively, it should be noted that they have their limitations. They require significant data checking, processing and cleansing to enable the model to represent the peril(s) accurately. The main question to ask your model provider is what data the model has been developed from. At each stage, various data inputs are needed to ensure the model represents the risk of a peril accurately, but care must be taken neither to underburden nor to overburden the model with information, as either can lead to increased uncertainty. This uncertainty is compounded at each subsequent stage of production: the financial component depends on the values expressed in the vulnerability component, which in turn depends on how accurately the hazard component determines the location of the peril.

Flood catastrophe models require a detailed digital elevation model (DEM) because flooding is geospatially dependent. Flood intensity can vary dramatically in response to variations in terrain, and smaller watercourses may not be represented at all in a coarse-resolution model. Representing risk from windstorms, by contrast, is not as spatially sensitive as flooding, because the degree of damage between neighbouring properties does not differ so steeply. For this reason it can be argued that, while building multi-peril catastrophe models may look advantageous, single-peril models may represent the risk of any single peril more accurately, as the data is tailored to the needs and purposes of that peril.

Another factor to consider when using catastrophe models is that the historical record is very sparse, so it is necessary to interpolate and extrapolate in order to provide a more spatially complete picture for probabilistic modelling. Additionally, even where claims data covers the timespan required for a meaningful description of risk, underlying trends within the data may render it unusable, particularly in the case of flood-related risk. Changes in urbanisation, building standards, infrastructure (e.g. flood defences) and climatic patterns all contribute to the need to extrapolate beyond what has been historically observed in order to get a more accurate picture of extremes.
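The sketch below illustrates why this extrapolation is needed: a short synthetic record of annual maxima is fitted with a generalised extreme value (GEV) distribution and then used to estimate return levels well beyond the observed period. The data and parameters are invented for illustration and this is not Ambiental’s methodology.

```python
from scipy.stats import genextreme

# Illustrative only: ~40 years of synthetic annual-maximum river flows (m³/s)
annual_maxima = genextreme.rvs(c=-0.1, loc=300, scale=80, size=40, random_state=1)

# Fit a GEV distribution to the short record...
shape, loc, scale = genextreme.fit(annual_maxima)

# ...then extrapolate to return periods well beyond the observed 40 years
for rp in (100, 200, 1000):
    level = genextreme.ppf(1 - 1 / rp, shape, loc=loc, scale=scale)
    print(f"Estimated 1-in-{rp} year flow: {level:.0f} m³/s")
```

The estimates at long return periods are, of course, only as trustworthy as the fitted tail, which is exactly where the trends described above (defences, urbanisation, climate) can undermine a naive fit to historical data.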

As specialists in flood forecasting and flood risk analysis, Ambiental has developed a series of products that use best-practice methodologies to present the risk of flooding at various return periods, built on the most up-to-date and accurate digital elevation models available. Ambiental seeks to ensure that the best available input data is used whenever possible, particularly in high-value urban areas, through the use of high-precision LiDAR terrain data. Ambiental is also advancing the use of climate change scenarios in flood risk assessment, which will provide further insights to catastrophe modellers in the future.

The ugly

Catastrophe modellers seek to drive down uncertainty as much as possible, but it must be accepted that uncertainty will always exist where our knowledge is incomplete. It is this “ugliness” of catastrophe models which tends to result in disparities between modelled losses and actual claims. The ugliest are the dreaded unknown unknowns: things we do not realise that we do not know, and which are therefore impossible to quantify.

The uncertainty arising from natural processes can be broadly grouped into two types: aleatory and epistemic. Aleatory uncertainty represents the intrinsic variability of a process; for example, there is aleatory uncertainty in the time of occurrence of earthquakes on a fault. Epistemic uncertainty stems from a lack of understanding of the true behaviour of a process; for example, there are epistemic uncertainties surrounding the behaviour of sea surface temperatures in the near future and their impact on hurricane landfall risk. Cat models attempt to quantify uncertainty in two main ways: by simulating a catalogue of events, for example a catalogue of tropical cyclones, and by measuring the uncertainty surrounding the amount of damage caused by each simulated event.
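A minimal sketch of that two-part approach is shown below, with invented parameters: the number of events each year is sampled from a Poisson process (variability in event occurrence), and the damage from each event is sampled from a lognormal distribution around a mean loss (the so-called secondary uncertainty). None of the values reflect a real model.

```python
import numpy as np

rng = np.random.default_rng(42)
N_YEARS = 50_000          # length of the simulated catalogue
EVENT_RATE = 0.2          # mean events per year (Poisson) -- illustrative
MEAN_DAMAGE = 120_000     # mean loss per event -- illustrative
DAMAGE_CV = 0.6           # coefficient of variation: "secondary" uncertainty

# Lognormal parameters chosen so the mean equals MEAN_DAMAGE with the given CV
sigma = np.sqrt(np.log(1 + DAMAGE_CV**2))
mu = np.log(MEAN_DAMAGE) - 0.5 * sigma**2

annual_losses = np.zeros(N_YEARS)
for year in range(N_YEARS):
    n_events = rng.poisson(EVENT_RATE)                       # how many events this year?
    annual_losses[year] = rng.lognormal(mu, sigma, size=n_events).sum()

print(f"Average annual loss: {annual_losses.mean():,.0f}")
print(f"1-in-200 year loss:  {np.quantile(annual_losses, 1 - 1/200):,.0f}")
```

Separating the two sampling steps is what lets a model report not just a single expected loss but a distribution around it.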

There can be uncertainty around exposure if we do not know enough about the entity being assessed. For property there may be uncertainty about location, replacement value, construction, occupancy and size. Fortunately, there is currently a lot of work being done by data modellers to build very detailed exposure datasets and there is also an Open Exposure Data (OED) initiative which seeks to provide a comprehensive and consistent way of standardising the parameterisation of exposure models.

Hazard uncertainty occurs when we do not have a full understanding of where, when and to what magnitude a natural disaster will strike. Large events happen infrequently and may not have happened in recorded history. Therefore, hazard modellers need to fully understand the natural processes involved in order to produce accurate models. Model resolution is highly important so that models are not overly simplified or too generalised in their assumptions. High precision is particularly important in flood hazard models as this peril is very location specific.

Arguably one of the biggest sources of uncertainty in cat models is the level of vulnerability a risk will have. Event impact is complex and variable. Good-quality claims data can help, but past losses are not always equivalent to future losses. This was evident in the Brisbane floods of 2011, where actual losses exceeded predictions. One of the causes was outdated vulnerability information on how properties were impacted by flood hazard: modern kitchen materials, for example, were observed to be more vulnerable than traditional materials, which drove higher losses. Clearly, vulnerability assessments must keep pace with lifestyle changes. As such, Ambiental’s FloodCat Australia flood vulnerability model has been developed to include the most recent region-specific vulnerability science, formed through a consensus of experts for Geoscience Australia in 2015.
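The toy comparison below illustrates the point about outdated vulnerability curves: running the same hypothetical event footprint through an older and an updated depth-damage curve shows how much of the loss difference can come from the vulnerability component alone. All curve values and property figures are invented for illustration and do not represent the Geoscience Australia functions or FloodCat Australia.

```python
import numpy as np

DEPTHS = np.array([0.0, 0.5, 1.0, 2.0, 3.0])
# Hypothetical damage ratios: an older curve vs one updated for modern
# building contents and materials (values invented for illustration)
OLD_CURVE = np.array([0.0, 0.15, 0.30, 0.50, 0.65])
NEW_CURVE = np.array([0.0, 0.25, 0.45, 0.70, 0.85])

def portfolio_loss(depths, values, curve):
    """Sum property losses for one event footprint under a given curve."""
    ratios = np.interp(depths, DEPTHS, curve)
    return float(np.sum(ratios * values))

depths = np.array([0.4, 0.9, 1.6, 2.2])          # flood depths at four properties
values = np.array([400_000, 350_000, 500_000, 450_000])

old = portfolio_loss(depths, values, OLD_CURVE)
new = portfolio_loss(depths, values, NEW_CURVE)
print(f"Loss with outdated curve: {old:,.0f}")
print(f"Loss with updated curve:  {new:,.0f}  ({new / old - 1:+.0%})")
```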

Moving the catastrophe modelling industry forward

There is a clear need to promote discussion around catastrophe modelling, an industry which has become rather niche, in order to build a consensus of approach. This can help the wider community to access the benefits that catastrophe models can deliver. Natural Catastrophe Risk Management and Modelling by Hillier, J. et al. (2017) is one of the first volumes discussing the science behind catastrophe modelling and its place in the context of natural hazards. It is hopefully the first of many to cater to the ever-increasing demand from specialists, experts and users in the catastrophe modelling industry.

When procuring catastrophe models, it is important for users to understand as much as possible about how they were made and any limitations they might have. In the past, insurers have had to rely on ‘black box’ cat models where the full details of the models and the loss quantification platform are not known. In recent years, however, initiatives like the Oasis Loss Modelling Framework have worked to overcome this problem, while modelling platforms like Simplitium ModEx and Aon Benfield ELEMENTS take a transparent approach and increase the availability of multiple models from different vendors. Ambiental’s FloodCat Australia is available through these platforms, and we endorse the open and transparent philosophy where model parameters, limitations and methodologies are communicated to the user to ensure that the unknowns are known.

So where does the industry go from here? Better data and better interpolation and extrapolation methodologies will help increase the accuracy of flood modelling and forecasting. Additionally, a focus on ensuring the geospatial accuracy of exposures will limit the degree to which a model portrays false positives and will reflect a more geospatially accurate picture of risk.

Ambiental’s team of catastrophe modelling experts is continuously working on new ways to harness the latest innovations in flood risk technology and apply them to our proprietary catastrophe models. We recognise the importance of input data accuracy and, as such, partner with leading data providers across the world. Our connections in the industry, as well as our ability to listen to our customers, enable us to keep abreast of the latest technological enhancements and pass these on to our clients. If you’d like to discuss catastrophe modelling or any of our other products or services, please get in touch by clicking here or emailing info@ambientalrisk.com.
