Climate predictions: the influence of nonlinearity and randomness

J. Michael T. Thompson, Jan Sieber

Abstract

The current threat of global warming and the public demand for confident projections of climate change pose the ultimate challenge to science: predicting the future behaviour of a system of such overwhelming complexity as the Earth's climate. This Theme Issue addresses two practical problems that make even prediction of the statistical properties of the climate, when treated as the attractor of a chaotic system (the weather), so challenging. The first is that even for the most detailed models, these statistical properties of the attractor show systematic biases. The second is that the attractor may undergo sudden large-scale changes on a time scale that is fast compared with the gradual change of the forcing (the so-called climate tipping).

The current threat of global warming and the public demand for confident projections of climate change pose the ultimate challenge to science: predicting the future behaviour of a system of such overwhelming complexity as the Earth's climate.

In principle, the Earth's climate could be viewed as a deterministic dynamical system. The laws of motion (such as the fluid dynamics of ocean and atmosphere) and the time-dependent forcing (mostly insolation, but also interactions, for example, with the biosphere or geothermal sources) are known, so that one ends up with a large system of differential equations that determines the future state entirely from the current state as initial condition. Unfortunately, as Lorenz [1] demonstrated with a simple model for convection, this statement, even though true in principle, is not applicable in practice. In chaotic systems, trajectories from nearby initial conditions diverge from each other at an exponential rate, such that the future state becomes unpredictable beyond a time horizon determined by the divergence rate. In the face of this problem, the intuitive justification for attempting long-term climate prediction is that, while the weather is clearly chaotic, the climate represents the attractor of this chaotic system. If the chaotic attractor is well behaved and the model simulations are unbiased, then ensemble runs of these simulations can reveal the statistical properties of the attractor (for example, mean, variability and frequency of extreme events). Moreover, one may hope that gradual changes in the forcing or system parameters lead to a gradual change of the attractor and its statistical properties. This approach treats the short-term chaos as noise, making predictions about the statistical properties of realizations of this noise on long time scales. Gradual changes of the forcing can be either man-made (for example, in an emission scenario) or external (for example, astronomical, if one learns from the behaviour of the palaeoclimate).
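
As a concrete illustration of this dichotomy between unpredictable individual trajectories and reproducible attractor statistics, the following Python sketch integrates the classic Lorenz convection model [1] with its standard parameter values. The initial conditions, perturbation sizes and averaging window are illustrative choices only, not taken from any model discussed in this issue.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Classic Lorenz (1963) convection model with the standard parameters.
def lorenz(t, v, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = v
    return [sigma * (y - x), x * (rho - y) - z, x * y - beta * z]

# Two trajectories from initial conditions differing by 1e-8:
# individual states become unpredictable after a finite horizon.
t_span, t_eval = (0.0, 40.0), np.linspace(0.0, 40.0, 4001)
a = solve_ivp(lorenz, t_span, [1.0, 1.0, 1.0], t_eval=t_eval, rtol=1e-9, atol=1e-12)
b = solve_ivp(lorenz, t_span, [1.0, 1.0, 1.0 + 1e-8], t_eval=t_eval, rtol=1e-9, atol=1e-12)
separation = np.linalg.norm(a.y - b.y, axis=0)
print("separation at t = 10, 20, 40:", separation[[1000, 2000, 4000]])

# An ensemble of perturbed initial conditions: the long-run statistics of z
# (a proxy for 'climate') agree across members even though the individual
# trajectories ('weather') do not.
rng = np.random.default_rng(0)
means = []
for _ in range(20):
    ic = [1.0, 1.0, 1.0] + 1e-3 * rng.standard_normal(3)
    sol = solve_ivp(lorenz, (0.0, 200.0), ic, t_eval=np.linspace(50.0, 200.0, 3000))
    means.append(sol.y[2].mean())
print("ensemble mean of z:", np.mean(means), "+/-", np.std(means))
```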

This Theme Issue addresses two practical problems that make even this type of statistical prediction so challenging. The first is that even for the most detailed models, the statistical properties of the attractor show systematic biases. The second is that the attractor may undergo sudden large-scale changes (noticeable in its statistical properties) on a time scale that is fast compared with the gradual change of the forcing (the so-called climate tipping).

A recent Theme Issue ‘Stochastic physics and climate modelling’ was devoted to reducing the bias of climate models using methods of stochastic modelling [2]. Three contributions to the current issue add further to the theme of reducing the bias in large-scale climate models. As it is unrealistic to resolve the finest relevant scales in Earth system models, modellers have to close their model at some scale, treating subgrid-scale processes as a parametric influence. The traditional assumption made at this stage is that the subgrid-scale processes rapidly reach equilibrium (in a stochastic sense) subject to a slow forcing from the large-scale (modelled) processes. Thus, subgrid processes would act back on the model only through their equilibrium state. One case where this assumption is known to be problematic is cumulus cloud modelling. These clouds occur on a subgrid scale, yet the assumptions of ‘infinitely many plumes per grid box’ (which allows one to take averages) and ‘convection is in equilibrium with a slowly varying large-scale forcing’ are problematic [3]. Plant [4] shows how ideas from the statistical mechanics of birth–death processes can be transferred to statistical cumulus dynamics.
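
To indicate the kind of finite-size fluctuations at stake, here is a minimal, hypothetical sketch of a birth–death process for the number of convective plumes in a single grid box. The rates and the Gillespie-type simulation are illustrative assumptions only and do not reproduce Plant's actual scheme.

```python
import numpy as np

def simulate_plume_count(birth_rate=2.0, death_rate=0.1, t_end=200.0, seed=0):
    """Gillespie simulation of a birth-death process for the number of
    convective plumes in a grid box: plumes appear at a constant rate set by
    the large-scale forcing and decay independently.  With finitely many
    plumes the count fluctuates around the equilibrium mean
    birth_rate / death_rate instead of sitting exactly on it."""
    rng = np.random.default_rng(seed)
    t, n, times, counts = 0.0, 0, [0.0], [0]
    while t < t_end:
        total_rate = birth_rate + death_rate * n
        t += rng.exponential(1.0 / total_rate)
        # choose birth or death in proportion to their rates
        n += 1 if rng.random() < birth_rate / total_rate else -1
        times.append(t)
        counts.append(n)
    return np.array(times), np.array(counts)

times, counts = simulate_plume_count()
print("mean plume count:", counts.mean(), "(equilibrium value: 20)")
```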

On a more conceptual level, Kwasniok [5] shows how the influence of chaotic subgrid-scale subsystems can be incorporated into the large-scale model using a clustering algorithm and statistical modelling of the unresolved variables. The approach is data-based, that is, independent of the physics at the subgrid scale. Kwasniok's demonstration uses the Lorenz '96 model, a set of ordinary differential equations on a ring.
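
For reference, a minimal sketch of the single-level Lorenz '96 system is given below; the number of sites and the forcing F = 8 are the customary choices for a chaotic regime, and Kwasniok's actual test case (with resolved and unresolved levels) is more elaborate.

```python
import numpy as np
from scipy.integrate import solve_ivp

def lorenz96(t, x, F=8.0):
    # Single-level Lorenz '96 model on a ring of K sites:
    # dx_i/dt = (x_{i+1} - x_{i-2}) * x_{i-1} - x_i + F, with cyclic indices.
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

K = 40                                  # number of sites on the ring
x0 = 8.0 * np.ones(K)
x0[0] += 0.01                           # small perturbation to trigger chaos
sol = solve_ivp(lorenz96, (0.0, 30.0), x0, t_eval=np.linspace(0.0, 30.0, 3001))
print("state of the first five sites at the final time:", sol.y[:5, -1])
```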

The question of the scale at which to truncate the model for a simulation (say, a scenario run) is also determined by the trade-off in allocating computational resources between ensemble size and numerical complexity. Ferro et al. [6] devise a simple formula for the idealized case in which model complexity is determined by grid resolution and the ensemble consists of trajectories from perturbed initial conditions. The formula can be used to optimize the grid resolution with respect to performance measures such as the mean-squared error.
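
The flavour of such a trade-off can be conveyed by a toy calculation (emphatically not Ferro et al.'s formula): assume the squared bias shrinks with grid spacing h, the per-member cost grows as a power of 1/h, and the total budget is fixed; then scan h for the spacing that minimizes the error of the ensemble mean. All functional forms and constants below are assumptions chosen only for illustration.

```python
import numpy as np

# Illustrative only: with a fixed computing budget C, an ensemble of N members
# at grid spacing h costs N * c(h).  Assume the squared bias shrinks as
# b0 * h**2 and the per-member cost grows as (1/h)**4 (three space dimensions
# plus the time step).  The ensemble-mean forecast then has
#     MSE(h) = b0 * h**2 + sigma2 / N(h),   with   N(h) = C * h**4.
b0, sigma2, C = 1.0, 4.0, 1.0e6
h = np.linspace(0.02, 1.0, 500)
N = np.maximum(C * h**4, 1.0)           # ensemble size affordable at spacing h
mse = b0 * h**2 + sigma2 / N
best = np.argmin(mse)
print(f"optimal grid spacing ~ {h[best]:.3f}, ensemble size ~ {N[best]:.0f}")
```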

Two of the contributions are concerned with the processing of statistical climate predictions. Jupp et al. [7] consider forecasts of ternary type, namely those that assign probabilities to, for example, ‘below’, ‘normal’ and ‘above’ for a quantity (precipitation, say) at each grid point of a map. They show how these can be visualized and verified with a colouring scheme. The proposed colour scheme is continuous and, thus, conveys the full information of the forecast. Iizumi et al. [8] demonstrate how one can translate forecasts from regional or global climate models into statistically correct weather series: stochastic weather generators use climate projections from a multi-model ensemble of global climate models to generate local-scale daily weather datasets. These datasets of climatic variables (such as daily maximum and minimum temperature, precipitation, etc.) have to reproduce the correct statistical features, such as the distribution of wet and dry spells. These features enter, for example, crop models used to assess the impact of climate change on particular regions: Iizumi et al. [8] test datasets for Japan.
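
The idea of a continuous colour scheme can be illustrated by mixing three base colours in proportion to the forecast probabilities, as in the hypothetical sketch below; the actual scheme of Jupp et al. [7] is constructed differently, and the base colours here are arbitrary choices.

```python
import numpy as np

def ternary_colour(p_below, p_normal, p_above,
                   base=((0.0, 0.3, 1.0),    # blue for 'below'
                         (0.8, 0.8, 0.8),    # grey for 'normal'
                         (1.0, 0.2, 0.0))):  # red  for 'above'
    """Map a ternary probability forecast to an RGB colour by mixing three
    base colours in proportion to the probabilities.  Because the mixing is
    continuous, nearby forecasts receive nearby colours and the full
    probability triple (not just the most likely category) is conveyed."""
    p = np.array([p_below, p_normal, p_above], dtype=float)
    p = p / p.sum()                       # guard against rounding
    return tuple(p @ np.array(base))

# e.g. a grid point with a 60% chance of 'above normal' precipitation
print(ternary_colour(0.1, 0.3, 0.6))
```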

The other strand of this Theme Issue discusses the problem of modelling, predicting and detecting climate tipping. On the phenomenological level, tipping can be described as a strongly nonlinear response of a dynamical system to a change of a system parameter that is itself gradual (slow) over time [9,10]. Assume that, for the initial parameter value, the system has an attractor that can be viewed, on a coarse level of modelling, as a stable equilibrium perturbed by noise. One expects that, as the parameter changes gradually over time, the location of this equilibrium changes more or less proportionally. However, if positive feedback mechanisms are present in the system, they can give rise to threshold (critical) parameter values at which the positive feedback causes an abrupt change of the dynamics. One common scenario is that the stable equilibrium collides (owing to the parameter change) with an unstable equilibrium in a fold, such that the equilibrium disappears and the system rapidly jumps towards a distant attractor.
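
A minimal caricature of this fold-induced scenario is the noisy double-well system sketched below; the equation, the drift rate and the noise level are illustrative assumptions, not a model of any particular climate subsystem.

```python
import numpy as np

# Minimal illustration: a double-well system dx/dt = mu(t) + x - x**3 with a
# slowly drifting parameter mu.  At mu_c = 2/(3*sqrt(3)) ~ 0.385 the lower
# stable equilibrium collides with an unstable one in a fold and disappears;
# the state then jumps rapidly to the distant upper equilibrium even though
# mu itself changes only gradually.
rng = np.random.default_rng(1)
dt, n_steps, drift, noise = 0.01, 80_000, 1.25e-3, 0.02
x, mu = -1.0, -0.4                      # start on the lower stable branch
trajectory = np.empty(n_steps)
for i in range(n_steps):
    mu += drift * dt                    # gradual (slow) change of the forcing
    x += (mu + x - x**3) * dt + noise * np.sqrt(dt) * rng.standard_normal()
    trajectory[i] = x
print("state before / after the fold:", trajectory[0], trajectory[-1])
```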

Crucifix [11] gives a review of conceptual oscillator models that are used to explain glacial–interglacial cycles. Without forcing, most of these models exhibit self-sustained relaxation oscillations (periodic motion with fast and slow phases, where the transition to the fast phase can be considered a tipping point). A large part of Crucifix's review is devoted to the effects that stochastic and astronomical forcing have on these self-sustained oscillations. An alternative, also reviewed by Crucifix, is that the forcing merely induces coherent hopping between the wells of a double-well potential (so-called noise-induced tipping). Ashwin et al. [12] analyse an alternative, less conventional, mechanism for tipping. It is possible that the threshold for tipping is not a critical value of the gradually changing system parameter, but a critical rate of parameter change. This phenomenon was observed in a simple model for the so-called compost bomb instability [13].
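
A standard prototype of such rate-induced tipping (of the type analysed by Ashwin et al. [12], although the specific equation below is only an illustrative choice) is a scalar system whose stable equilibrium exists for every fixed parameter value, yet is effectively lost when the parameter ramps faster than a critical rate.

```python
import numpy as np

def ramp_response(r, x0=-2.0, lam0=-1.0, dt=1e-3, t_end=20.0):
    """Prototype of rate-induced tipping: dx/dt = (x + lam)**2 - 1 with a
    parameter ramp dlam/dt = r.  For every fixed lam the system has a stable
    equilibrium at x = -lam - 1, so no critical parameter value is ever
    crossed; yet if the ramp rate r exceeds 1 the state can no longer track
    the moving equilibrium and runs away (tips)."""
    x, lam = x0, lam0
    for _ in range(int(t_end / dt)):
        x += ((x + lam)**2 - 1.0) * dt
        lam += r * dt
        if x + lam > 10.0:               # far above the unstable branch: tipped
            return "tipped"
    return "tracked"

for r in (0.5, 0.9, 1.1, 2.0):
    print(f"ramp rate r = {r}: {ramp_response(r)}")
```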

The fold-induced tipping scenario as described by Rahmstorf [9] and Lenton et al. [10] promises to be detectable before it happens owing to early-warning signs: before a gradually changing parameter crosses a critical value at which positive feedback causes the equilibrium to disappear in a fold, the attractivity of the equilibrium weakens. Thus, one should be able to discern early-warning signals from time series of measurements in the form of increased autocorrelation and variance. Lenton et al. [14] compare these indicators with their newly developed indicator based on detrended fluctuation analysis. They extract these indicators from palaeoclimate time series that show apparent tipping, as well as from model outputs, and test their robustness with respect to method parameters (such as filtering bandwidth or length of the sliding window). For a system fluctuating around an equilibrium close to a fold, the probability of noise-induced tipping depends on the leading nonlinear term in (what is often called) the right-hand side of the equations. Sieber & Thompson [15] seek to extract the presence of this non-zero nonlinearity in the right-hand side from time series. Beaulieu et al. [16] review statistical techniques that detect whether a time series contains a change point (an abrupt change in the mean, the variance, or the parameters of an underlying regression model). The paper extends change-point detection criteria based on the Schwarz Information Criterion to time series with autocorrelation (a typical feature of climate time series). The methods are illustrated on the Mauna Loa CO2 concentration record and on Δ14C measurements (a tracer for past climate change). Franzke et al. [17] compare estimators of the Hurst exponent of a time series, a measure of self-similarity and, thus, of long-range dependence; in contrast, the methods employed by Beaulieu et al. and Sieber & Thompson assume correlation only between nearby elements of the time series.
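
The two classical early-warning indicators, rising variance and rising lag-1 autocorrelation in a sliding window, can be sketched in a few lines; the synthetic AR(1) test series and the window length below are illustrative choices and do not reproduce the specific analyses of Lenton et al. [14].

```python
import numpy as np

def early_warning_indicators(series, window=200):
    """Sliding-window variance and lag-1 autocorrelation, the two classical
    early-warning indicators: as a fold is approached, the equilibrium's
    attraction weakens, so fluctuations become larger (variance) and decay
    more slowly (autocorrelation).  In practice the series is detrended
    first; here the window mean is simply removed."""
    variances, autocorrs = [], []
    for start in range(len(series) - window):
        w = series[start:start + window] - series[start:start + window].mean()
        variances.append(w.var())
        autocorrs.append(np.corrcoef(w[:-1], w[1:])[0, 1])
    return np.array(variances), np.array(autocorrs)

# Synthetic example: an AR(1) process whose decay rate weakens over time,
# mimicking the approach to a fold.
rng = np.random.default_rng(2)
n = 2000
phi = np.linspace(0.3, 0.97, n)          # autocorrelation creeping towards 1
x = np.zeros(n)
for i in range(1, n):
    x[i] = phi[i] * x[i - 1] + 0.1 * rng.standard_normal()
var, ac = early_warning_indicators(x)
print("variance   (early vs late):", var[0].round(4), var[-1].round(4))
print("lag-1 a.c. (early vs late):", ac[0].round(3), ac[-1].round(3))
```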

The issue starts off, very appropriately, with a review of the thermodynamic big picture by Kleidon [18]. Kleidon looks at how the Earth maintains its thermodynamic disequilibrium. The principles of non-equilibrium thermodynamics lead to upper bounds on the available free energy generated and consumed by various processes on Earth. Human activity (roughly, the demand for food and for industrial consumption) appropriates a substantial amount of free energy, of the order of 50 TW, which is already of the same order of magnitude as the free-energy generation by abiotic means (approx. 40 TW), and is a large fraction of the biotic free-energy generation (approx. 215 TW). As human appropriation will increase in the near future, a central question when predicting climate change is whether human activity will degrade the ability of the Earth system to generate free energy.

The review by Kleidon also serves as an excellent introduction, giving a sense of the scale of the challenge posed by climate change and listing direct implications for various forms of renewable energy and planetary engineering.
