Methodology
Learn more about the methodology and the models used to create the data visualised on this dashboard.
Terrestrial climate
Models
FaIR
FaIR is a climate model emulator: it replicates the warming behaviour of more complex climate models without the computational cost. It simplifies the Earth system to four carbon reservoirs, which may be interpreted as the Earth's crust, the atmosphere and surface, the ocean mixed layer and the deep ocean. The carbon level in each reservoir is influenced by emissions, both human and natural. FaIR then calculates how these emissions translate into greenhouse gas and aerosol concentrations using simple relationships, and combines them with estimates of other climate change drivers (such as cloud formation from aviation, solar variability and changes in land use reflectiveness) to model the total strength of the forcing imposed on the climate system. This forcing is then used to calculate the change in the Earth's average temperature. The outputs are constrained to match both historic warming and the expected levels of future warming as assessed by the Intergovernmental Panel on Climate Change (Cross-Chapter Box 7 in Chapter 5 of the WG1 contribution to the Sixth Assessment Report). We used FaIR version 1.6.4. For more details see Smith et al. (2018).
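FaIR is open source; as an illustration of the emulator's basic interface (a minimal sketch driving FaIR v1.x with a stock RCP pathway, not the constrained probabilistic setup used for the dashboard):

```python
# Minimal sketch of running FaIR v1.x (the dashboard uses v1.6.4);
# install with `pip install fair==1.6.4`.
from fair.forward import fair_scm
from fair.RCPs import rcp45

# fair_scm returns annual greenhouse gas concentrations (C), effective
# radiative forcing (F) and global mean temperature change relative to
# pre-industrial (T) for the supplied emissions time series.
C, F, T = fair_scm(emissions=rcp45.Emissions.emissions)
print(T[-1])  # projected warming in the final year of the pathway
```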
MESMER
The Modular Earth System Model Emulator with spatially Resolved output (MESMER) is a statistical model that emulates the possible evolution of key climate variables over land areas for a given Global Mean Temperature (GMT) trajectory, mimicking the behaviour of more computationally expensive Earth System Models (ESMs). Calibrated on existing scenario projections produced by ESMs, it combines two modules: one representing the local forced response of the climate to increasing GMT, the other a statistical representation of natural climate variability. These two aspects constitute a set of traits specific to each ESM, which MESMER captures by calibrating on data from each ESM (i.e., ESM configuration) individually.
Its simplicity and flexibility allow it to produce robust estimates of possible future climate outcomes within minutes, even for scenarios that have not been run with ESMs. The current MESMER analyses are based on Beusch et al. (2020) and Beusch et al. (2022); for a full description of the data see Schwaab et al. (in preparation).
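The following is a conceptual sketch of this two-module idea, not MESMER code: a local forced response that scales with GMT, plus an autocorrelated noise term standing in for natural variability. All coefficients are illustrative placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_response(gmt, slope=1.4, intercept=0.1):
    """Forced response of one land grid cell to the GMT trajectory
    (illustrative linear form; MESMER calibrates this per ESM)."""
    return intercept + slope * gmt

def variability(n_years, rho=0.5, sigma=0.3):
    """AR(1) realisation standing in for local natural variability."""
    eps = rng.normal(0.0, sigma, n_years)
    out = np.empty(n_years)
    out[0] = eps[0]
    for t in range(1, n_years):
        out[t] = rho * out[t - 1] + eps[t]
    return out

gmt = np.linspace(1.2, 2.0, 81)           # illustrative GMT path, 2020-2100
local_t = local_response(gmt) + variability(gmt.size)
```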
MESMER-M
MESMER-M is the monthly downscaling module of MESMER, i.e. it produces results at monthly resolution based on MESMER results. It first disaggregates the annual-level outputs of MESMER into their seasonal cycle, and further represents the effects of any remaining monthly natural climate variability. Like MESMER, MESMER-M is trained to produce scenario projections mimicking ESM-specific responses (i.e., ESM configurations) by calibrating on projection data from each ESM individually. The current MESMER-M analyses are based on Nath et al. (2022).
MESMER-X
MESMER-X is a statistical model based on MESMER used to emulate key extreme climate variables over land areas for given GMT trajectories. Like MESMER and MESMER-M, MESMER-X can be run in different configurations that each reproduce the response of a different ESM. Currently MESMER-X is able to represent local annual maximum temperatures (TXx; Quilcaille et al., 2022), annual average soil moisture, the annual minimum of monthly average soil moisture, and various fire weather indices (see Quilcaille et al. 2023 for more information). MESMER-X relies on Generalised Extreme Value theory to represent the local response of these variables to rising GMT, and is also able to explore how they are further affected by natural climate variability. MESMER-X is calibrated on existing scenario projections from ESMs, so that it can reproduce the behaviour of key extreme climate variables under scenarios not yet run with ESMs. Like MESMER and MESMER-M, it is calibrated on projection data from each ESM individually. The current MESMER-X analyses are based on Quilcaille et al. (2022) and Quilcaille et al. (2023).
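As a flavour of the underlying extreme-value machinery (a sketch only; MESMER-X's actual non-stationary formulation is described in the cited papers), a stationary GEV can be fitted to a sample of annual maxima with scipy:

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(1)

# Illustration only: fit a stationary GEV to a synthetic sample of annual
# maximum temperatures. MESMER-X goes further by letting the GEV parameters
# depend on GMT (non-stationarity).
txx_sample = rng.gumbel(loc=35.0, scale=1.5, size=100)  # fake TXx values, degC
shape, loc, scale = genextreme.fit(txx_sample)

# 1-in-20-year TXx level implied by the fitted distribution:
level_20yr = genextreme.ppf(1 - 1 / 20, shape, loc=loc, scale=scale)
```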
Model simulations
Selection of the Global Mean Temperature forcing
FaIR (see the description of the FaIR emulator above) is used to represent changes in Global Mean Temperature (GMT) under different emission scenarios, as well as the uncertainty due to climate sensitivity within each scenario. As an Earth System Model emulator, it can quickly generate thousands of simulations (also called emulations or realisations) that all represent plausible evolutions of GMT in response to a given emission scenario. 100 FaIR emulations that are representative of the range of GMT responses for each scenario are then selected. Each of them is used to drive MESMER and MESMER-X in each of their model configurations (between 14 and 25 depending on the indicator). It should be noted that MESMER, MESMER-X and MESMER-M are calibrated on each ESM individually.
MESMER emulations
First, for each scenario and each of the 25 configurations of MESMER, the MESMER module simulating the response of local annual mean temperature to the GMT forcing is used to generate a single forced response for each of the 100 selected FaIR emulations (i.e., 100 forced responses once this step is complete). Then, for each of these 100 runs, the MESMER module simulating the local variability of annual mean temperature is run 10 times, generating 1000 manifestations of stochastic variability. Because this second module is independent of Global Mean Temperature, each of the 1000 manifestations of variability can be combined with each of the 100 forced responses. In theory, therefore, 100×1000 possible evolutions of the spatially resolved fields of annual mean temperature from 1850 to 2100 can be obtained for each scenario and each of the 25 model configurations.
Because they sample the effect of natural variability, the 1000 realisations are used to calculate the temperature values during extreme events for specific return periods (see section Data Processing). For the calculation of the changes in Mean Temperature however, the amount of data is first reduced by selecting 10 representative manifestations of natural variability out of the 1000 generated with MESMER, which can be combined with each MESMER configuration and FaIR emulation. This results in 25x100x10 = 25,000 MESMER emulations for each scenario. This procedure is described in more detail in Schwaab et al. (in prep.).
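Schematically, and with reduced ensemble sizes, the combination step amounts to a broadcast addition, since the variability realisations are independent of the forced responses:

```python
import numpy as np

rng = np.random.default_rng(2)
n_years = 251                              # 1850-2100

# Reduced-size sketch of the combination logic (10 x 100 here instead of
# 100 x 1000): every variability realisation can be added to every forced
# response because the variability module is independent of GMT.
forced = rng.normal(size=(10, n_years))    # stands in for 100 forced responses
noise = rng.normal(size=(100, n_years))    # stands in for 1000 variability runs

emulations = forced[:, None, :] + noise[None, :, :]   # shape (10, 100, n_years)
```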
MESMER-X emulations
For each scenario and ESM configuration, each of the selected FaIR emulations is used to drive MESMER-X, generating 10 emulations each time; these correspond to local, annual climate impacts in response to the same GMT forcing but with different manifestations of natural climate variability. This results in 100×10 = 1000 MESMER-X emulations for each scenario and model configuration.
MESMER-M emulations
The MESMER-M runs were derived by driving MESMER-M with the output of MESMER simulations obtained from the same 100 forced responses simulated with the corresponding MESMER module, following the procedure described above (i.e., one forced response per FaIR emulation). However, only 100 representative manifestations of natural variability generated with MESMER were selected out of the 1000 available for each scenario and model configuration. These were then combined one-to-one with the 100 forced responses to obtain 100 MESMER runs. Each of these was then used to derive one MESMER-M run, resulting overall in 100 MESMER-M runs for each scenario and model configuration.
Data processing
Time series plots
The climate risk dashboard includes time series plots showing the median evolution of each indicator over time for each country and each scenario. To obtain these time series, results from each simulation obtained with each model configuration were first spatially averaged over the country areas. Data were then extracted for each timestep visualised in the tool (from 2020 until 2100 or 2300, depending on the scenario) as well as for each reference period (2011-2020 and 1850-1900), from which the changes between each timestep and each reference period were calculated.
For the indicators quantifying changes in soil moisture, fire weather and Annual maximum temperature (obtained with MESMER-X), and for Mean temperature (obtained with MESMER), the median projections (visualised with a thick line) were calculated as the 50th percentile across all obtained realisations (1000 times the number of model configurations for each scenario), while the confidence interval was calculated from the 5th and 95th percentiles across the same ensemble. Indicators quantifying changes in soil moisture are expressed in % relative to the values simulated over the selected reference period.
For the Extremely cold year and Extremely hot year indicators, in order to calculate the results for their 1-in-10-year, 1-in-20-year and 1-in-50-year events, their 2nd, 5th, 10th, 90th, 95th and 98th percentiles were first identified from the ensemble of 1000 MESMER realisations of variability, for each of the 25 model configurations. These percentiles were then combined with the 100 MESMER forced responses corresponding to each model configuration, yielding 25×100 values for each of the 6 indicators (1-in-10-year, 1-in-20-year and 1-in-50-year Extremely cold and hot years). The median projections (visualised with a thick line) were calculated as the 50th percentile across this ensemble, while the confidence interval was calculated from the 5th and 95th percentiles across the same ensemble.
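Schematically, the return-period logic maps a 1-in-N-year event onto a percentile of the variability ensemble; a minimal sketch:

```python
import numpy as np

# A 1-in-N-year extremely hot (cold) year corresponds to the
# 100*(1 - 1/N)th (100/N th) percentile of the variability ensemble.
# 'variability' stands in for the 1000 MESMER realisations at one location.
variability = np.random.default_rng(3).normal(size=1000)

return_periods = [10, 20, 50]
hot = {n: np.percentile(variability, 100 * (1 - 1 / n)) for n in return_periods}
cold = {n: np.percentile(variability, 100 / n) for n in return_periods}
```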
Maps
The climate risk dashboard also includes maps showing projected median changes in each indicator for each country and each scenario. These are obtained following a similar procedure as for the time series plots, except that results are not averaged over the country area; instead, percentiles are identified for each model grid cell individually.
Unavoidable risk graphs
For most indicators of the Terrestrial Climate sector, the climate risk dashboard includes graphs describing the evolution of the probability (risk) that the indicator of interest averaged over a country exceeds specific levels of increase or decrease between today and 2030, 2050, 2100, as well as 2200 and 2300 if applicable.
For the Annual maximum temperature and Mean temperature indicators, these levels can be selected by users and correspond to predefined increases in that indicator (for example +0.5°C or +1°C) compared to the levels of the present-day (2011-2020) or pre-industrial (1850-1900) reference period. For each country, the indicator values over the reference periods were first identified by calculating the 50th percentile of the values simulated for those reference periods across all MESMER-X realisations or the representative MESMER ensemble members (100×10 times the number of model configurations), using the same procedure as for the time series plots. Then, the risk that these specific values are exceeded in each scenario and year of interest is calculated as the share of MESMER-X realisations or representative MESMER ensemble members exceeding those values in the given year.
For the Extremely cold year and Extremely hot year indicators, the reference values are defined by the event frequency of interest (1-in-10-year, 1-in-20-year and 1-in-50-year for both indicators). For example, the reference value for the 1-in-20-year Extremely hot year in 2011-2020 over a specific country corresponds to the 95th percentile across the country-average values derived from the representative MESMER ensemble members. For each frequency, the corresponding percentile is identified for each reference period, scenario and country. Then, the probability that these percentiles are exceeded at the timesteps of interest (2030, 2050, 2100, as well as 2200 and 2300 if applicable) is calculated as the share of the country-average values derived from the representative MESMER ensemble members that exceed them.
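In code terms, this exceedance risk is simply an ensemble share; a minimal sketch with synthetic numbers:

```python
import numpy as np

# The risk that an indicator exceeds a user-selected level in a given year is
# the share of ensemble members exceeding it. 'ensemble_2050' stands in for
# the country-averaged MESMER(-X) members for one scenario in 2050.
ensemble_2050 = np.random.default_rng(4).normal(loc=1.2, scale=0.4, size=2500)
threshold = 1.0                                   # e.g. +1.0 degC vs reference
risk = float((ensemble_2050 > threshold).mean())  # fraction in [0, 1]
```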
Urban heat stress
Models
UrbClim
The simulations of urban climate are performed with the urban boundary layer climate model UrbClim, designed to cover individual cities and their nearby surroundings at a very high spatial resolution. UrbClim consists of a land surface scheme containing simplified urban physics, coupled to a 3-D atmospheric boundary layer module. The latter is tied to synoptic-scale meteorological fields through the lateral and top boundary conditions, ensuring that the synoptic forcing is properly accounted for. The land surface scheme is based on the soil-vegetation-atmosphere transfer scheme of De Ridder and Schayes (1997), extended to account for urban surface physics. The main advantage of the UrbClim model is its high execution speed (about two orders of magnitude faster than full-scale mesoscale models; García-Díez et al., 2016) while maintaining accurate results. A complete description of the UrbClim model can be found in De Ridder et al. (2015).
CLIMADA
The heat impact metrics (i.e. ‘Population exposed to heatwaves’ and ‘Population exposed to moderate/high/very high/extreme heatstress’) are computed using the open-source and open-access natural hazard risk model CLIMADA (CLImate ADAptation). Following the IPCC methodology, risk is defined as the probability of an event occurring multiplied by its impact (or severity). The impact is calculated from three components: hazard, exposure and vulnerability. CLIMADA captures vulnerability with an impact function that takes the hazard as input and quantifies the extent to which an exposure will be affected by the hazard (Aznar-Siguan and Bresch, 2019). The final impact is then calculated by multiplying the exposure by the output of the impact function. Detailed documentation of CLIMADA can be found in Kropf et al. (2024).
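Schematically (this is the risk recipe CLIMADA implements, not the CLIMADA API itself):

```python
import numpy as np

# impact = exposure x impact_function(hazard), evaluated per location.
# The linear impact function below mirrors the one-to-one mapping used for
# the heat metrics described later in this section.
def impact_function(hazard):           # vulnerability: here a 1:1 mapping
    return hazard

population = np.array([100.0, 250.0, 80.0])    # exposure per grid cell
heatwave_days = np.array([23.0, 18.0, 30.0])   # hazard per grid cell

impact = population * impact_function(heatwave_days)
total_impact = impact.sum()            # person-heatwave-days per year
```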
Model simulations
UrbClim simulations for present-day climate
As a first step, the UrbClim model is run for a historical period of 10 years (2008-2017). The synoptic forcing at the top and lateral boundaries of the UrbClim domain is provided by the ERA-5 re-analysis product of the ECMWF ([Hersbach et al., 2020](https://doi.org/10.1002/qj.3803)), which is available at a spatial resolution of 31 km and at hourly temporal resolution. Next to state-of-the-art meteorological data, a detailed representation of land surface properties is required as input for the UrbClim numerical model, and is obtained from the following datasets:
- Land cover: CORINE (Europe; Copernicus, 2019), WorldCover (rest of the world; Zanaga et al., 2021)
- Building fraction: Global Human Settlement Layer (Corbane et al., 2021)
- Soil sealing fraction: Landsat, Sentinel-1 & VIIRS (Zhang et al., 2020)
- Soil texture: Hengl and MacMillan, 2019
- Normalized Difference Vegetation Index (NDVI): Landsat or MODIS
- Anthropogenic heat flux: Jin et al., 2019
- Digital Elevation Model: COPERNICUS (ESA, 2020)
The UrbClim model produces hourly output for meteorological variables such as temperature, humidity and wind speed, as well as soil properties and energy fluxes, for the 2008-2017 period at 100 m spatial resolution; from these, the results for the indicators selectable in the climate risk dashboard are calculated.
Next to the basic meteorological variables, heat stress is calculated based on the model of Liljegren et al. (2008). Wet bulb globe temperature (WBGT), a metric that accounts for temperature, humidity and radiation and serves as a proxy for perceived temperature, is calculated following the ISO 7243 norm, based on the meteorological output of UrbClim, solar radiation from the ERA-5 reanalysis (Hersbach et al., 2020), and a detailed representation of the building footprints within the urban areas to account for cooling due to shading effects. The latter information is obtained from OpenStreetMap, Google Africa Buildings and Microsoft Building Footprints. Heat stress is thus calculated at 100 m spatial resolution for every hour of the day.
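For reference, the ISO 7243 outdoor combination itself is a simple weighted sum; the hard part, handled by the Liljegren et al. (2008) model, is estimating the natural wet bulb and black globe temperatures from standard meteorology. A sketch with illustrative inputs:

```python
# ISO 7243 outdoor formula: natural wet bulb temperature (t_nw), black globe
# temperature (t_g) and air temperature (t_a), in degC. Liljegren et al.
# (2008) provide the physical models that estimate t_nw and t_g.
def wbgt_outdoor(t_nw, t_g, t_a):
    """Wet bulb globe temperature (degC), outdoors with solar load."""
    return 0.7 * t_nw + 0.2 * t_g + 0.1 * t_a

print(wbgt_outdoor(t_nw=24.0, t_g=40.0, t_a=30.0))  # -> 27.8 degC
```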
UrbClim simulations for future climate
The UrbClim numerical model is also used to project future climate until 2100 in the same cities. The forcing for these simulations is obtained by combining simulations from the MESMER-FaIR ensemble (see above) with the Earth System Model data archive from the CMIP6 project. Instead of re-running the UrbClim model with MESMER forcing, time series data from the present-day UrbClim simulations are perturbed with the climate change signal obtained from the combined MESMER-CMIP6 data archive using a quantile-mapping bias-correction algorithm (Olsson et al., 2009; Willems and Vrac, 2011); a schematic sketch of this perturbation step is given at the end of this subsection. The result of this perturbation-based statistical downscaling method consists of time series of the same length and time scale as the historical time series, but representative of future climate conditions. A detailed description of the procedure applied can be found in Souverijns et al. (2022). Future climate and WBGT calculations were executed for the three main PROVIDE scenarios:
- 2020 climate policies
- Delayed climate action
- Shifting pathway
For each of these scenarios, the median (50th percentile) as well as the 5th and 95th percentiles of the MESMER-FaIR ensemble runs are extracted and used to force UrbClim, generating an estimate of uncertainty that reflects how the spread in climate model responses to greenhouse gas emissions influences UrbClim results.
It must be noted that future urban development and growth are not included: the spatial profile of each city is kept constant in the future simulations.
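As announced above, here is a schematic sketch of the quantile-based perturbation step (a simplification under stated assumptions; the operational procedure is the one described in Souverijns et al., 2022):

```python
import numpy as np

def quantile_perturb(present_urbclim, gcm_present, gcm_future):
    """Shift a present-day series by the climate change signal evaluated at
    each value's own quantile (schematic quantile perturbation)."""
    quantiles = np.linspace(0.01, 0.99, 99)
    delta = np.quantile(gcm_future, quantiles) - np.quantile(gcm_present, quantiles)
    # approximate quantile (rank) of each present-day value in its own series
    ranks = np.searchsorted(np.sort(present_urbclim), present_urbclim) / len(present_urbclim)
    return present_urbclim + np.interp(ranks, quantiles, delta)

rng = np.random.default_rng(6)
present = rng.normal(25.0, 3.0, 87600)                       # hourly, ~10 years
future = quantile_perturb(present, rng.normal(25.0, 3.0, 1000),
                          rng.normal(27.0, 3.5, 1000))       # warmer, more variable
```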
Heat impact metrics
To compute the heat impact metrics ‘Population exposed to heatwaves’ and ‘Population exposed to moderate/high/very high/extreme heatstress’, we input the annual UrbClim indicators 'Heatwave days' and 'Days a year with moderate/high/very high/extreme heatstress' as hazard event data into the CLIMADA model. The exposure in terms of population is obtained from the 'WorldPop Constrained Individual countries 2020 UN adjusted (100m resolution)' dataset (Bondarenko et al., 2020a; 2020b). The WorldPop dataset matches the United Nations national population estimates with data on building locations. It therefore provides accurate data at the city level and is well suited to our calculation, in which we extract the four vertices of the rectangular hazard domain to trim the country-level gridded population data to city-level gridded population data.
The size and distribution of the population stay constant over time, meaning that we only account for changes in climate and not in socio-economic factors. We furthermore consider only outdoor temperatures, meaning that the mitigating or amplifying effects of staying indoors on heat stress are ignored. No particular vulnerabilities, such as age group, are considered.
The impact function is a one-to-one mapping, meaning that the impact is computed by multiplying the population at a given location by the hazard occurring at that location. If a location has a population of 100 and experiences 23 heatwave days on average per year, the impact at that location is 100 × 23 = 2300 average person-heatwave days per year. The same calculation is applied to the other indicators. The heat impact indicators are available as the mean, 5th percentile and 95th percentile for the three main PROVIDE scenarios, at decadal steps from 2011 to 2100; these are obtained by inputting the mean, 5th percentile and 95th percentile of the hazard data into CLIMADA.
Data processing
Urban heat stress data displayed in the ‘Explore Impacts’ mode
The hourly output from UrbClim is averaged over periods of 10 years to even out the effect of natural variability. On the climate risk dashboard, maps of projected changes in some indicators are shown for the 2021-2030, 2041-2050 and 2091-2100 time periods compared to present-day when the years 2030, 2050 or 2100 are selected, respectively. These changes correspond to those obtained by forcing UrbClim with the 50th percentile of the FaIR-MESMER ensemble. The resolution of the maps is 100 metres. For each city, the shapes shown on the maps correspond to mostly contiguous built-up areas (i.e., they represent the urban fabric).
The time series plots shown above the maps visualise the spatially averaged results displayed on the maps, with the confidence interval derived from UrbClim results obtained when using forcing data representative of the 5th and 95th percentiles of the FaIR-MESMER ensemble.
The indicator ‘Population exposed to heatwaves’ was calculated by multiplying, for each location, the population living there by the mean annual number of heatwave days occurring there. Population data are taken from WorldPop (constrained, UN adjusted, 2020), and it is assumed that the size and distribution of the population stay constant, so that only changes in climate and not those in socio-economic factors are accounted for.
Time series and individual indicator maps can be downloaded individually by clicking on the corresponding buttons underneath. A bulk downloading option is available via ftp upon request (niels.souverijns@vito.be).
Meter-scale modelling
Apart from the 100 m simulations, 1 m-scale heat stress simulations are also executed for the iconic cities, using the HiREx model (Souverijns et al., 2023). This requires a detailed representation of individual features within the modelled city quarters, such as trees, buildings (and their height), roads, etc. These simulations are limited to one typical summer day due to their high computational cost. By considering the solar zenith and azimuth angles, shaded areas cast by buildings or trees are calculated for each hour of the day, as is the sky view factor (the fraction of the sky hemisphere visible from the ground). Combining the meteorological data with the detailed land surface properties, the HiREx module iteratively calculates the Wet Bulb Globe Temperature at 1 m resolution, considering shade and solar zenith angles at each model time step. As this approach is nested in the UrbClim simulations, calculations can easily be done for the present day and for any future scenario calculated by UrbClim.
Marine Climate
Model
GFDL-ESM2M
GFDL-ESM2M is the latest version of the Geophysical Fluid Dynamics Laboratory (GFDL) Earth System Model (Dunne et al. 2012). Processes within the atmosphere, land, ocean and sea ice are each represented by a dedicated component, and interactions between these components are explicitly represented as well. The atmospheric component simulates the dynamics and composition of the Earth's atmosphere at a temporal resolution of 30 minutes to 3 hours, depending on the variable considered, and at a spatial resolution of 2° (latitude) × 2.5° (longitude). The land component consists of the LM3.0 model, which represents water, energy and carbon fluxes in vegetation, soil and snow. The ocean component consists of the Modular Ocean Model version 4p1, which simulates physical ocean processes at a horizontal resolution of 1° (mid-to-high latitudes) to 0.3° (tropics), with 50 vertical layers.
Model simulations
The Adaptive Emission Reduction Approach
As a fully coupled Earth System Model, GFDL-ESM2M calculates its own Global Mean Temperature (GMT) and therefore does not exactly follow the GMT trajectories modelled with FaIR for a given scenario, but rather approximates them using the Adaptive Emission Reduction Approach (AERA, Terhaar et al. 2022). This approach relies on a dynamic estimation (at every 5-year timestep) of the emission reductions required to stabilise at a given future GMT target (such as the 1.5°C goal of the Paris Agreement), based on the relationship between the cumulative emissions used to force the model and the simulated change in GMT.
In practice, the AERA consists of three main steps. First, past anthropogenic warming up to the timestep of interest is determined using a 31-year running mean, in order to filter out GMT variations caused by natural variability. This also gives the amount of global warming remaining until the given GMT target is reached. Second, the remaining emission budget that can still be emitted before the GMT target is reached is estimated using the transient climate response to cumulative emissions (TCRE) up to the current timestep, defined as the ratio of past warming to past cumulative emissions. The remaining emission budget is then the remaining global warming until the GMT target divided by the TCRE (assuming a linear relationship). As a last step, an emission trajectory until the GMT target is reached is suggested by distributing the remaining emission budget over the future using a cubic polynomial function.
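A worked sketch of the budget step, with purely illustrative numbers:

```python
# AERA budget step (Terhaar et al., 2022): TCRE is estimated as past warming
# per unit of past cumulative emissions; the remaining budget follows from
# the warming still allowed. All numbers below are illustrative.
past_warming = 1.2            # degC, 31-year running mean vs pre-industrial
past_cumulative = 2400.0      # GtCO2 emitted so far
target = 1.5                  # degC GMT target

tcre = past_warming / past_cumulative          # degC per GtCO2
remaining_budget = (target - past_warming) / tcre
print(f"{remaining_budget:.0f} GtCO2 left")    # -> 600 GtCO2
```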
Because GFDL-ESM2M is run with the AERA and not driven by FaIR, the GMT and emission trajectories it simulates do not exactly follow those simulated by FaIR but only approximate them. In the climate risk dashboard, however, projected impacts for sectors whose impacts are derived from FaIR GMT trajectories (for example the Terrestrial Climate sector) and those for the Marine Climate sector simulated with GFDL-ESM2M are accessible by clicking on the same scenario name, illustrated in both cases by the scenario trajectories simulated by FaIR for the sake of simplicity.
Simulations with GFDL-ESM2M
Three simulations covering the 1861-2500 time period were performed. From 1861 to 2005, historical fossil fuel carbon emissions, non-CO2 greenhouse gases, aerosols and land use change were used as forcing. Afterwards, fossil fuel CO2 emissions follow observed emissions until 2020 (Friedlingstein et al. 2023) and projected emissions from the Nationally Determined Contributions (https://climateactiontracker.org/global/temperatures/; accessed December 2021) from 2021 to 2025. After 2025, the model was forced with fossil fuel CO2 emissions identified with the Adaptive Emission Reduction Approach (see previous section) in order to reach the following GMT targets (expressed relative to the mean GMT over the 1861-1900 period, representative of preindustrial conditions):
- 3°C, in line with the 2020 climate policies scenario. Once this GMT target is reached, GMT is either maintained around that level or brought back to 1.5°C. This approximates the GMT trajectories simulated by FaIR for the 2020 climate policies then stabilisation and 2020 climate policies then back to 1.5°C scenarios.
- 2.5°C, in line with the 2020 country targets scenario. Once this GMT target is reached, GMT is either maintained around that level or brought back to 1.5°C. This approximates the GMT trajectories simulated by FaIR for the 2020 climate targets then stabilisation and 2020 climate targets then back to 1.5°C scenarios.
- 1.5°C, in line with the Stabilisation at 1.5°C scenario.
In each scenario, the timing at which the target GMT is reached is determined by identifying when the 31-year running mean of GMT arrives within one standard deviation of the target, where the standard deviation (about 0.07°C) characterises natural GMT variations only. This standard deviation is calculated from a 500-year control simulation representative of preindustrial conditions.
Data processing
Time series plots
For most countries with access to the sea, projected changes in the indicators from the Marine Climate sector averaged over these countries' Exclusive Economic Zones (EEZs) can be visualised on the climate risk dashboard. Projected changes relative to two different reference periods can be selected: 1861-1900 (representative of pre-industrial conditions) or 2011-2020 (present-day). The shapefiles of the EEZs are obtained from the marineregions.org website (version 11 is used). A 31-year running mean is first applied to the spatial output data from GFDL-ESM2M at yearly resolution, to remove the influence of natural variability and focus on the indicators' response to global warming. Then, for each EEZ, the data geographically located within the shapefile are averaged using an area-weighted mean for each year between 2020 and 2300. The data included for each EEZ can be seen on the map displayed under each time series plot. There it can be noticed that the marine areas closest to the shore may be missing, as the separation between ocean and land grid cells in the model does not fully match that of the real world. The results of this spatial averaging constitute the best estimate shown on the time series plots, visualised with a line.
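A minimal sketch of these two processing steps, assuming `annual` holds yearly GFDL-ESM2M output for the grid cells of one EEZ and `weights` their cell areas (both arrays below are synthetic stand-ins):

```python
import numpy as np

def running_mean_31(annual):
    """31-year running mean along the time axis (removes natural variability)."""
    kernel = np.ones(31) / 31.0
    return np.apply_along_axis(lambda x: np.convolve(x, kernel, mode="valid"), 0, annual)

def eez_mean(smoothed, weights):
    """Area-weighted spatial mean over the grid cells of one EEZ."""
    return smoothed @ (weights / weights.sum())

rng = np.random.default_rng(7)
annual = rng.normal(size=(281, 40))        # synthetic: 281 years x 40 cells
weights = rng.uniform(1.0, 2.0, 40)        # synthetic grid-cell areas
series = eez_mean(running_mean_31(annual), weights)
```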
Uncertainties around those mean projections are visualised with a confidence interval that is estimated by adding +/- one standard deviation of the variations in GFDL-ESM2M results simulated in a 500-year long control simulation representative of preindustrial climate conditions. It is thus important to highlight that the results for the Marine Climate sector shown in the climate risk dashboard were obtained with one single Earth System Model, while the visualised confidence intervals account only for uncertainty arising from natural variations of the studied indicators around the response to global warming (and not for uncertainty across models as for example is the case for indicators of the Terrestrial Climate sector).
Maps
Maps of projected changes in the indicators from the Marine Climate sector are consultable for the years 2030, 2050, 2100, 2200 and 2300. A 31-year running mean was applied to the spatial output data from GFDL-ESM2M shown on the maps. Projected changes in comparison to two different reference periods can be selected: 1861-1900 (representative of pre-industrial conditions) or 2011-2020 (present-day).
Unavoidable risk graphs
A final graph is available for three indicators of the Marine Climate sector. For each indicator and a few timesteps (the 2011-2020 period, 2030, 2050, 2100, as well as 2200 and 2300), it shows the risk that the projected changes in the indicator of interest exceed a level selectable from a drop-down menu, expressed as a change compared to the average indicator values over the reference period selected by the user. This risk is derived using a 500-year control simulation representative of preindustrial climate conditions conducted with GFDL-ESM2M. After averaging over the EEZ of interest, the deviations of the 500 annual mean values from their 500-year mean are calculated. For each year and scenario of interest, these deviations are added to the indicator values obtained after calculating a 31-year running average and spatially averaging the projection data within the EEZ of interest (following the same procedure as for the time series plots). A distribution of 500 possible values representing the potential effect of natural variability is thus obtained. The risk that the projected changes exceed the user-selected level is calculated as the share of indicator values above that level (or below it, in case the indicator overall decreases with global warming). This is done for each scenario and timestep, assuming that the distribution of the 500 deviations around the mean values stays constant over time and across scenarios.
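Schematically, with synthetic stand-ins for the control-run deviations and the smoothed projection:

```python
import numpy as np

rng = np.random.default_rng(5)

# 500 deviations from a pre-industrial control run are added to the smoothed,
# EEZ-averaged projection for the year of interest; the risk is the share of
# the resulting distribution beyond the user-selected level.
control_deviations = rng.normal(0.0, 0.05, 500)   # stand-in for control run
projected_change = 0.8                            # smoothed EEZ-mean change
level = 0.85                                      # user-selected level

distribution = projected_change + control_deviations
risk = float((distribution > level).mean())
```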
Glaciers
Model
OGGM
We used the open-source Open Global Glacier Model (OGGM v1.6.1; Maussion et al., 2019; Maussion et al., 2024) to simulate individual glacier volume, area and runoff changes for the more than 200,000 glaciers worldwide (Randolph Glacier Inventory, v6). The term 'glaciers' here describes all glaciers and ice caps outside the Greenland and Antarctic ice sheets (World Glaciers Explorer).
OGGM computes ice flow along a one-dimensional glacier flowline, obtained from global digital elevation models and glacier outlines. Annual glacier mass balance (or glacier thinning rate, expressed in m water equivalent per year) is estimated with a temperature index melt model (Maussion et al., 2019). We calibrate the model to match remote-sensing observations of glacier mass change averaged over 2000-2020 (Hugonnet et al., 2021), and use additional in-situ observations from the World Glacier Monitoring Service (WGMS) where available. The gridded monthly climate data used as reference for the 1979-2020 period are W5E5v2.0 (Lange et al., 2021). For the initialisation of the model to the 2020 glacier state, OGGM v1.6.1 relies on a dynamic spinup that iteratively searches for a 1979 glacier state and recalibrates the temperature index model to find a dynamically consistent initialisation run that simultaneously matches (1) the glacier area at the inventory date and (2) the mass change observations.
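As a flavour of the temperature-index idea (a conceptual sketch, not OGGM code; the melt factor and threshold below are illustrative, whereas OGGM calibrates them against observations):

```python
# Degree-day style melt model: melt scales linearly with monthly temperature
# above a threshold. 'mu' (melt factor, mm w.e. per degC per month) and
# 't_melt' (melt threshold, degC) are illustrative placeholders.
def monthly_melt(t_monthly, mu=8.0, t_melt=-1.0):
    """Melt in mm w.e. for one month with mean temperature t_monthly (degC)."""
    return mu * max(t_monthly - t_melt, 0.0)

# Illustrative annual cycle of monthly mean temperatures at one glacier:
annual_melt = sum(monthly_melt(t) for t in
                  [-12, -10, -5, 0, 4, 8, 11, 10, 6, 1, -4, -9])  # mm w.e.
```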
For more information, please visit the model documentation website of the used OGGM version or the documentation of the OGGM standard projections.
Model simulation
OGGM simulations
We start our glacier projections from the initial 2020 glacier state obtained from OGGM v1.6.1 as described above. From 2020 until 2100, we forced OGGM with climate change realisations from the MESMER-FaIR ensemble. For each of the 10 PROVIDE scenarios, 1000 FaIR realisations were computed to account for the climate system response, and for each FaIR realisation 20 General Circulation Models or Earth System Models (hereafter 'climate models') were emulated by MESMER. Computing glacier projections for 10×1000×20 experiments was not computationally feasible, nor could we store the input and output data. Instead, we generated quantiles (the 5th, 25th, 50th, 75th and 95th percentiles) from the MESMER-FaIR ensemble to build a smaller ensemble of 100 realisations per scenario (20 climate models × 5 quantiles). The quantiles were chosen to accurately represent the spread of the ensemble for each glacier region. All climate projections were bias-corrected to the W5E5v2.0 dataset (Lange et al., 2021) over the period 2000-2019 and then used to simulate future glacier projections for all glaciers individually. The projections are available for all mountain glaciers of the world except those in the Subantarctic and Antarctic Islands (where MESMER data were not available).
Data processing
General
For the dashboard, we analyse the glacier projections for various geographies, including the country level. For the aggregation, each glacier is assigned to a geography based on the glacier’s terminus position. For each geography and each climate change scenario, we processed the corresponding data to generate three distinct plots, as described below.
The uncertainty in the projections originates from the global climate system response and climate variability as well as the emulated local climate. The uncertainty range is computed from the full ensemble for each scenario, taking the respective weight of each quantile into account. Glacier model uncertainty is not taken into account here.
The glacier variables shown in the dashboard consist of volume and area, as percentages relative to the geography's total value in 2020, and the thinning rate in metres of water equivalent (m w.e.) per year. A thinning rate of 1 means that, on average, the glaciers lost the equivalent of 1 metre of water across the entire glacierised area in that year. The thinning rate is calculated by dividing the annual volume change by the average area and adjusting for the densities of ice and water.
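A worked example of this definition with illustrative numbers:

```python
# Thinning rate: annual ice volume change divided by the average glacierised
# area, converted to water equivalent with the ice/water density ratio.
# All values below are illustrative.
RHO_ICE, RHO_WATER = 900.0, 1000.0       # kg m-3

volume_change = -0.5e9                   # m3 of ice lost in one year
mean_area = 450.0e6                      # m2, average glacierised area

thinning_rate = -(volume_change / mean_area) * (RHO_ICE / RHO_WATER)
print(f"{thinning_rate:.2f} m w.e. per year")   # -> 1.00 m w.e. per year
```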
For more details on our data aggregation methods, please refer to our GitHub repository at https://github.com/OGGM/provide, where you can find the actual code used for the creation of the glacier data displayed on the dashboard.
Time series plots
For the time series plots, we aggregate over all glaciers assigned to each geography. Using the uncertainty-propagation method described above, we then generate quantiles for each scenario. The plots show the median and the 5th to 95th percentile interval (uncertainty range) at five-year steps from 2020 to 2100 for volume, area and the thinning rate.
Maps
For mapping data onto regular grids, we defined global grids with a resolution of 1° and summed the data from glaciers intersecting with each grid point within a geography. Note that even if a grid cell extends beyond the geography's (e.g. a country's) borders, only glaciers within that geography are considered. For each scenario and grid point, we generated glacier quantiles as previously described. The values on the maps represent the median for each grid point for the selected year; volume, area and the thinning rate are shown.
Unavoidable risk graphs
In these graphs, we defined thresholds for country-wide glacier volume and area as percentages relative to their levels in 2020. For instance, a volume threshold of 30% indicates that 30% of the glacier volume from 2020 is projected to melt away by 2100. To calculate the probability of exceeding these thresholds, we count the number of climate change experiments of each scenario that surpass the threshold and divide this by the total number of experiments per scenario. A probability value of 0 means that no experiment of that scenario exceeded the threshold, whereas a value of 100 indicates that all experiments surpassed the threshold. These probabilities are assessed separately for each year.
Biodiversity
Models
The Wallace Initiative
The biodiversity information presented here comes from the Wallace Initiative. The base Wallace Initiative individually modelled ~135,000 species of terrestrial fungi, plants, invertebrates and vertebrates at warming levels ranging from 1.5°C to 6°C, across 21 CMIP5 climate model patterns, at a spatial resolution of ~20 km × 20 km, based on occurrence data obtained from GBIF. More information on the overall project, results, modelling methodology, caveats and uses can be found in a series of publications (Warren et al. 2013; Warren et al. 2018a, b; Smith et al. 2018; Jenkins et al. 2021; Saunders et al. 2023; Price et al. 2024a). The data were also used for several figures and tables in Working Group II of the IPCC Sixth Assessment Report (AR6).
The individual species data were then aggregated into a metric called species richness remaining. This was calculated by dividing the number of species modelled as present in a cell based on future climate suitability by the number of species in that cell according to current climate suitability. The large number of species modelled makes this metric a reasonable proxy for biodiversity as a whole. To standardise against specific warming levels, curves were generated for each cell, allowing interpolation between modelled temperatures to calculate the percentage of species remaining at 0.1°C intervals between 1.5°C and 4°C.
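A minimal sketch of the per-cell interpolation, with illustrative values for a single grid cell:

```python
import numpy as np

# Richness remaining is computed at the modelled warming levels and then
# interpolated to 0.1 degC steps. The values below are illustrative.
modelled_gmt = np.array([1.5, 2.0, 2.5, 3.0, 3.5, 4.0])      # degC warming
richness = np.array([0.92, 0.84, 0.75, 0.66, 0.58, 0.50])    # fraction remaining

fine_gmt = np.arange(1.5, 4.01, 0.1)
richness_curve = np.interp(fine_gmt, modelled_gmt, richness)
```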
The data were then elevationally downscaled to ~1 km × 1 km (Saunders et al. 2023; Price 2024b) to better understand which areas of each modelled 20 km cell might be lost sooner or persist longer. In short, the temperature of a given 50 km or 20 km cell is an average over all elevations within that cell (i.e., at the average elevation). In areas with varied terrain, some locations will be warmer than this average and some cooler. Species in areas warmer than the average would be expected to be more susceptible (exposed) to warming, while those in cooler areas would be expected to be less susceptible (or able to shift into these areas if they are currently too cool). Species within the cooler parts of a climate ‘cell’ or ‘pixel’ would therefore be expected to persist there longer.
Wallace Initiative Species Richness Remaining Emulator
The Wallace Initiative Species Richness Remaining Emulator (WISRRE) is a computationally efficient tool that replicates the information on species richness remaining generated by the Wallace Initiative without the need to re-run ~135,000 species distribution models. WISRRE generates estimates of species richness remaining relative to 1950-2000 for given temperature changes, based on relationships derived from the Wallace Initiative.
Model simulation
WISRRE emulations
WISRRE was used to derive projections of species richness remaining for a combination of 10 scenarios and 100 realisations of emulated temperature fields obtained with MESMER (see the methodology for the 'Terrestrial climate' sector). For each decade between 2020 and 2100, long-term mean temperature values were derived using a 30-year window starting 14 years before the central year of the decade and ending 15 years after it (e.g. for 2020, the start year is 2006 and the end year is 2035). The resulting temperature fields were then used to drive WISRRE, leading to 9 × 10 × 100 = 9000 decadal estimates of species richness remaining.
Data processing
Time series plots
The time series plots presented on the risk dashboard depict the evolution over time, within each country and for each scenario, of the proportion of species at risk from local extinction (i.e., the proportion of species for which local climate conditions are projected to become unsuitable). To derive these time series, the species richness remaining relative to 1950-2000 (the baseline, assumed to be 100% based solely on climate suitability) from the WISRRE runs was first converted to the proportion of species at risk from local extinction and then spatially averaged over the country areas. For the uncertainty ranges at the right-hand side of the time series, the 90% confidence interval was calculated across the 100 model realisations in 2100.
Maps
The dashboard also includes maps showing the proportion of species at risk from local extinction (loss of climate suitability) in the decades 2030, 2050 and 2100 relative to 1950-2000. In line with the time series, the maps are derived directly from the converted emulation output, but show the individual cells rather than the country average. Maps can be displayed side by side for comparison, or as a difference map to allow easier investigation of spatial differences.
Unavoidable risk graphs
The final graph on the dashboard presents the (un)avoidable risk from changes in the proportion of species at risk from local extinction. For each selected scenario, it shows the risk that this proportion exceeds a pre-defined impact level at present (the 2020 decade) and in the decades 2030, 2050 and 2100. This risk was determined as the proportion of realisations exceeding the selected impact level.
Global Carbon Cycle
Models
OSCAR
OSCAR is a reduced-complexity model that describes the interactions between large-scale components of the Earth system that relate to anthropogenic climate change. Its modules are calibrated to emulate the behaviour of complex process-based models. OSCAR describes the temporal dynamics of key physical quantities through mass and energy balance. Among the models of its category, OSCAR is one of the most complex yet highly flexible, as sub-models can easily be isolated or new equations plugged in.
As input, OSCAR takes annual time series of emissions of anthropogenic greenhouse gases and other climatically active species (e.g. aerosol precursors), as well as time series of land use and land cover change drivers. As output, the model produces annual time series of global temperature change (without stochasticity) and of any intermediate variable of the system (e.g. greenhouse gas concentrations). The outputs are constrained to match historical observations. While OSCAR is primarily designed to provide a global perspective, a number of processes are differentiated at the regional scale (most notably the land carbon cycle). The current OSCAR analyses are based on OSCAR v3.1. Future peatland carbon emissions are projected by coupling OSCAR to a newly developed peatland carbon emulator, OSCAR-peat.
OSCAR-peat
A northern peatland (>30°N) carbon module (OSCAR-peat) was developed to emulate peatland processes from five state-of-the-art process-based land surface models: LPJ-MPI, ORCHIDEE-PEAT, LPX-Bern, LPJ-GUESS and LPJ-GUESS_dynP (‘dynP’ for dynamic multi-peat layers). Each of these simulates the complex peat ecosystem differently, incorporating distinct approaches to representing peat hydrology, biogeochemistry, vegetation and soil thermal dynamics, which are critical drivers of feedback uncertainty. OSCAR-peat is forced by global temperature and CO2 emission anomalies, and simulates changes in the northern peatland carbon cycle, including peatland CO2 emissions, CH4 emissions, carbon stock changes and other carbon cycle related processes.
Model simulation
OSCAR-peat emulations
The OSCAR-peat emulator was coupled to the OSCAR model to simulate the response of the northern peatland carbon cycle to the emission pathways of the PROVIDE scenarios. In total, 2000 configurations of OSCAR were used, drawn randomly from the pool of all possible parameter values (excluding the peatland parameters) in a Monte Carlo setup. Each of the five sets of peatland emulator parameters was combined with these 2000 configurations, in addition to one set of simulations with the peatland module turned off. In total, 12,000 different configurations were run for every scenario.
The global temperature rise and global atmospheric CO2 concentration were generated with OSCAR, then used as forcing data to simulate the carbon cycle change in OSCAR-peat. The carbon dynamics of peatlands were simulated and expressed in terms of CO2 and CH4 emissions and changes in the carbon pool. The simulated peatland carbon fluxes were then integrated back into OSCAR's global carbon fluxes, allowing the climate effects of these fluxes to be accounted for. The net impact of peatlands on climate change and on global biogeochemical cycles is quantified as the difference between the experiments with the peatland module turned on and off.
Data processing
Time series plots
The climate risk dashboard presents time series plots that illustrate the median trends for each carbon cycle indicator across different scenarios, along with the 90% confidence intervals. These time series are generated by pooling the simulations from running 2,000 OSCAR model configurations for each of the five peat modules, which represent different land surface models. The results are then adjusted by subtracting the corresponding OSCAR simulations with the peat module deactivated. Data are extracted for each timestep displayed in the tool, covering the period from 2020 to either 2100 or 2300, depending on the scenario. Additionally, values are calculated relative to two reference periods (2011–2020 and 1850–1900). This process yields 10,000 simulations for each carbon cycle indicator, from which we derive the median value, as well as the 5th and 95th percentiles. When deriving the quantiles and median, we weight each simulation by how well it compares to historical observations.
Macroeconomy
Model
Economic Damage Function by Burke et al. (2015)
The macroeconomic damage function from Burke et al. (2015) is an empirical relationship between changes in country-level temperatures and economic growth. Burke et al. (2015) establish a quadratic dependency of long-term GDP growth on temperature across different countries, with productivity maximised at an annual average temperature of 13°C.
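A hedged sketch of this response shape (the coefficients below are chosen only to reproduce the 13°C optimum, not Burke et al.'s published point estimates):

```python
# Quadratic growth-temperature response: annual growth peaks near 13 degC.
B1, B2 = 0.013, -0.0005           # illustrative; optimum = -B1 / (2 * B2) = 13

def growth_effect(temp_c):
    """Contribution of annual mean temperature to GDP growth (fraction)."""
    return B1 * temp_c + B2 * temp_c ** 2

# Warming hurts a warm country and (initially) helps a cold one:
print(growth_effect(26.0) - growth_effect(25.0))   # negative
print(growth_effect(6.0) - growth_effect(5.0))     # positive
```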
Model simulation
Climate Change Impact on GDP in the Dashboard
To derive the Climate Change Impact on GDP indicator, the macroeconomic damage function was forced with country-level annual temperature time series from MESMER. For each scenario, we used random subsampling to select 100 spatially resolved temperature trajectories. These trajectories were aggregated by country and used as input data, with SSP2 serving as the baseline growth rate. The resulting dataset contains 100 GDP estimates across 157 countries for 10 distinct scenarios.
Data processing
Time Series
The Climate Change Impact on GDP time series in the dashboard shows GDP in a given year under the influence of climate change compared to what GDP would have been in the same year without climate change. The baseline growth rate follows SSP2. The median, 5th and 95th percentiles are derived from the 100 GDP estimates.