
Modelling climate change: the role of unresolved processes

Paul D Williams

Abstract

Our understanding of the climate system has been revolutionized recently by the development of sophisticated computer models. The predictions of such models are used to formulate international protocols, intended to mitigate the severity of global warming and its impacts. Yet these models are not perfect representations of reality, because they remove from explicit consideration many physical processes that are known to be key aspects of the climate system but are too small or fast to be modelled. The purpose of this paper is to give a personal perspective on the current state of knowledge regarding the problem of unresolved scales in climate models. A recent novel solution to the problem is discussed, in which it is proposed, somewhat counter-intuitively, that the performance of models may be improved by adding random noise to represent the unresolved processes.


1. Introduction

It is difficult to think of a more complicated physical system than Earth's climate. Governed by a combination of the laws of fluid dynamics, thermodynamics, radiative energy transfer and chemistry, the climate system is composed of the atmosphere, the oceans, ice sheets and land. Each of these four subsystems is coupled to each of the other three, through the exchange of immense quantities of energy, momentum and matter (Peixóto & Oort 1984). Nonlinear interactions occur on a dizzying range of spatial and temporal scales, both within and between the subsystems, leading to an intricate and delicate network of feedback loops. But climate modellers must not be dismayed by the enormity of the challenge facing them, for, though it is difficult to think of a more complicated physical system, it is equally difficult to think of one that has a greater impact on all the people of the world.

The major difficulty of climate modelling stems from the coexistence of climatological phenomena on a vast range of scales. As an example of this, figure 1 shows that the atmosphere exhibits a quasi-continuous energy spectrum on all observable length scales, from the planetary scale down to just a few kilometres. Describing the ability of the atmosphere to sustain such a wide spectrum of oscillations, Jule Charney, one of the pioneers of atmospheric modelling, eloquently wrote in 1947 that ‘the atmosphere is a musical instrument on which one can play many tunes’ (Daley 1993). Furthermore, a recent analysis of satellite images (Lovejoy et al. 2001) has shown that the atmosphere is scale invariant, i.e. that it looks the same at all magnifications (at least down to 1 km). This fractal-like behaviour is to be expected for a turbulent fluid such as the atmosphere.

Figure 1

Spectrum of atmospheric winds, in the wavelength range 3–10 000 km, as measured by commercial aircraft during the NASA Global Atmospheric Sampling Programme. The zonal (i.e. west–east) and meridional (i.e. north–south) components of the wind are shown separately. For clarity, the meridional component has been shifted to the right by a factor of 10. Reproduced with permission from Gage & Nastrom (1986).

In recent decades, our understanding of the climate has been revolutionized by the development of sophisticated computer models, known as general circulation models (GCMs). GCMs represent the physical laws stated above in a form suitable for solution on fast supercomputers. They work by dividing the components of the climate system into large boxes, known as grid boxes, each measuring around 100 km by 100 km horizontally. Crucially, the many important processes and mechanisms that take place on smaller spatial scales than this are too small to be explicitly modelled. A consequence of this, as noted by Palmer (2001), is that the scale invariance referred to above is destroyed.

The most important sub-grid-scale features in the ocean are eddies, which are vortices with diameters in the approximate range 1–100 km. Eddies transport heat, salt and momentum over large distances (McDonald 1999), and it is believed that they contain perhaps as much as 99% of the kinetic energy of the ocean (Open University 2001). In the atmosphere, important unresolved features include gravity waves, which have wavelengths of around 1–10 km and are often visible as stripes in clouds (figure 2). Gravity waves are particularly ubiquitous in the upper troposphere and lower stratosphere (i.e. between around 5 and 20 km above the Earth's surface), and it has recently been shown that they can affect the large-scale atmospheric circulation (Williams et al. 2003). Other important sub-grid-scale features in the atmosphere include convection, convective clouds and small-scale turbulence in the boundary layer (i.e. the part of the atmosphere that is directly influenced by contact with the Earth). All of these features are known to be key aspects of the climate system, owing to their nonlinear interactions with the resolved scales, and yet they are too small to be explicitly modelled. The presence of such critical unresolved processes must surely be one of the most disheartening aspects of climate modelling.

Figure 2

Gravity waves in noctilucent clouds, photographed over Kiruna, Sweden, in August 2000. The wavelength is a few kilometres, and so these waves would be located at the short-wavelength end of the spectrum in figure 1. Reproduced with permission from Dalin et al. (2004).

It has been suggested that random noise should be added to climate models, in an attempt to mimic the impacts of the unresolved processes (Hasselmann 1976). The theoretical justification for these stochastic climate models will be described in detail in §2. For the moment, I simply note that it is truly remarkable that random noise—the very epitome of the unknown and the unpredictable—can actually increase the performance of models. But the list of success stories is rapidly growing: random noise has demonstrated considerable skill in improving weather forecasts (Buizza et al. 1999), in modelling El Niño events (Zavala-Garay et al. 2003), in the study of the atmospheric quasi-biennial oscillation (Piani et al. 2004), in modelling atmospheric convection (Lin & Neelin 2002), in enhancing ocean sea-surface temperature predictability (Scott 2003) and in modelling the impacts of ocean eddies (Berloff 2005).

The purpose of this paper is to present a general review of the problem of unresolved scales in climate models and then to describe some examples of the most recent cutting-edge research using stochastic climate models. In §2, the basic approach to computational climate modelling is briefly outlined. An analogy between unresolved and resolved scales in climate models and the microscale and macroscale in fluid dynamics is used to demonstrate the inadequacy of conventional approaches to unresolved scales and to motivate the need for a stochastic solution. In §3, examples are given of recent stochastic studies of mid-latitude weather systems, El Niño events and the ocean thermohaline circulation (THC). Finally, in §4, I look forward to future developments in the field of climate modelling, by speculating that the stochastic techniques described herein may need to be used widely in the next generation of climate models.

2. Unresolved scales in climate models

Mindful of the crucial role played by sub-grid-scale processes in climate, the development of GCMs has been accompanied by the development of approximate techniques for representing, or parameterizing, their impacts on the resolved flow. The key assumption of this approach is that the number of sub-grid-scale events within each grid box is sufficiently large that a meaningful statistical equilibrium can be defined. Despite the unquestioned partial success of this technique, it cannot be rigorously justified or derived from first principles, and Sardeshmukh et al. (2001) have shown that it contributes towards systematic climatological errors. In order to understand the limitations of this conventional approach to the problem of unresolved scales, we shall now scrutinize the rationale behind the assumption of a statistical equilibrium of sub-grid-scale events.

(a) Computer models of the climate system

The differential equations governing the climate system are well known. In a common approach to solving these equations, errors arise owing to the replacement of the exact derivative terms with finite differences (see footnote 1), in order to permit a computational solution. To illustrate this, suppose that a climate variable, e.g. temperature, T, varies with horizontal position, x, as shown in figure 3. Substitutions such as the following are made in the exact equations:

∂T/∂x ≈ ΔT/Δx. (2.1)

These approximations replace the exact gradient with a mean gradient, which has the effect of applying a low-pass filter. Structures with length scales smaller than Δx are lost, and only structures with length scales larger than Δx remain. The discretization in time, t, employs the same approximation,

∂T/∂t ≈ ΔT/Δt, (2.2)

removing structures with time-scales shorter than Δt. In present-day GCMs, used for long-range climate prediction, the grid spacing and time-step are constrained by computational power to be around Δx = 100 km and Δt = 30 min. There are many active climatological processes with length scales smaller than 100 km (figure 1) and time-scales shorter than 30 min, as indicated schematically in figure 3. The important task of representing the impacts of such processes on the resolved flow is based upon an assumption that we shall now scrutinize.

Figure 3

Schematic diagram to show how some climate variable, T, might vary with position, x, in the real world (thin line) and in a GCM with grid spacing Δx (thick line). The wiggles in the real-world curve could, for example, be exaggerated versions of the gravity waves shown in figure 2.
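To make the filtering effect concrete, the following minimal Python sketch (with invented field and grid parameters, not taken from any GCM) samples a temperature field containing a small-scale wave onto a 100 km grid; the finite-difference gradient of equation (2.1) sees only the large-scale structure.

```python
import numpy as np

# Illustrative field: a large-scale temperature trend plus a small-scale
# 'gravity-wave' wiggle, loosely in the spirit of figure 3.
L = 1000.0                                   # domain length (km), invented
x_fine = np.linspace(0.0, L, 10001)          # fine sampling (0.1 km)
T = 280.0 + 5.0*np.sin(2*np.pi*x_fine/L) + 0.5*np.sin(2*np.pi*x_fine/5.0)

# Sample onto a coarse model grid with spacing dx = 100 km.
dx = 100.0
x_grid = np.arange(0.0, L + dx, dx)
T_grid = np.interp(x_grid, x_fine, T)

# Finite-difference gradient on the model grid (equation (2.1)) versus the
# true gradient: the 5 km wave is invisible to the coarse grid.
dTdx_model = np.diff(T_grid)/dx
dTdx_true = np.gradient(T, x_fine)
print("model gradient range:", dTdx_model.min(), dTdx_model.max())
print("true gradient range :", dTdx_true.min(), dTdx_true.max())
```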

(b) The law of large numbers

Imagine tossing an unbiased coin 10 times. Though the most likely outcome is that you will obtain five heads (25% probability), this outcome is far from guaranteed, as shown in figure 4a. The chance of obtaining four or six heads is almost as large (21% each), and there is even a reasonable chance of obtaining three or seven heads (12% each). This wide spread in possible outcomes occurs because statistical fluctuations are relatively large when the number of tosses is only 10. Now imagine tossing the same coin 100 times. As shown in figure 4b, the statistical fluctuations are much smaller in this case, and the probability distribution is narrower. Figure 4c shows that the probability distribution is narrower still when the coin is tossed 1000 times.

Figure 4

Probability distributions to show the number of heads resulting from (a) 10, (b) 100 and (c) 1000 tosses of a fair coin. These are curves of the binomial distribution, B(N, p), with N=10, 100, 1000 and p=1/2. The width of the binomial distribution, relative to the mean, is proportional to 1/√N.

The probability that at least 60% of the tosses will result in heads is 38% for 10 tosses, 3% for 100 tosses and 0.000 000 01% for 1000 tosses! To use an analogy, if there were two equally matched football teams, A and B, it would not be so surprising if team A managed to win at least 6 out of 10 games played, since 10 games is not very many at all. On the other hand, it would be quite surprising if team A won at least 60 out of 100 games and absolutely astonishing if it won at least 600 out of 1000.
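The probabilities quoted above are straightforward to verify. The following short Python sketch (not part of the original article) evaluates the binomial probability of obtaining at least 60% heads for each of the three cases.

```python
from math import comb

def prob_at_least(k_min, n, p=0.5):
    """Probability of at least k_min heads in n tosses of a coin with P(heads) = p."""
    return sum(comb(n, k) * p**k * (1.0 - p)**(n - k) for k in range(k_min, n + 1))

# Chance that at least 60% of the tosses come up heads, as quoted in the text.
for n in (10, 100, 1000):
    print(f"{n} tosses: {prob_at_least(round(0.6*n), n):.10%}")
```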

This finding—that the larger the number of random events under consideration, the more predictable the outcome—is called the law of large numbers. The classic application is to the theory of liquids and gases, in which the integrated effect of random molecular motion on the microscale produces predictable dynamics on the macroscale, allowing the thermodynamic quantities (e.g. temperature) to be defined. This is possible only because, during a typical temperature measurement, there are many billions of billions of collisions between molecules and the measuring device.

In climate models, too, it is tacitly assumed that the effectively random sub-grid-scale events are so large in number that their integrated effect on the resolved scales is predictable, allowing it to be included in models. However, in fluids there is an enormous separation of scales between the microscale and the macroscale. There is no such ‘thermodynamic limit’ in the climate system, as suggested by figure 1. Phrased differently, if there were a billion clouds, gravity waves or ocean eddies in a GCM grid box then their impacts on the resolved flow would be predictable, like the temperature of a gas, and the current treatment of unresolved scales in climate models would be defensible. But such a separation of scales between the resolved and unresolved dynamics simply does not exist. The number of sub-grid-scale events per grid box is not large enough to permit the existence of a meaningful statistical equilibrium. This raises questions about the applicability of conventional parameterization techniques, which are founded upon the assumption of the existence of such an equilibrium. These techniques, therefore, do not capture the variability of the sub-grid-scale features, which arises because of departures from the strict validity of the law of large numbers.

Evidence in support of the above assertion was recently presented by Shutts & Palmer (2004). They analysed the statistics of atmospheric convective events that would be sub-grid scale in a standard weather prediction model (of resolution 64 km by 80 km) by running an ultra-high-resolution cloud-resolving model (of resolution 1 km by 40 km). There are 128 high-resolution grid boxes per standard-resolution grid box. The probability distribution of the sub-grid-scale temperature tendencies can, therefore, be estimated and is shown in figure 5. The population of convective events within a standard model grid box is observed to be low, and, correspondingly, in terms of the width relative to the mean, the probability distribution most closely resembles that shown in figure 4a. This means that the convective fluxes through a standard-resolution grid box at any given instant will not necessarily equal the long-term mean, as is assumed in conventional deterministic parameterization schemes (see footnote 2).

Figure 5

Probability distribution for the sub-grid-scale rate of change of temperature (K d⁻¹) at a height of 1 km in the tropical atmosphere. Reproduced with permission from Shutts & Palmer (2004).
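The coarse-graining idea behind figure 5 can be sketched with synthetic data (an illustration only; the numbers below are invented and are not the Shutts & Palmer model output). Each coarse box contains 128 fine cells, only a handful of which are convecting at any instant, so the instantaneous box-mean heating departs substantially from the long-term mean.

```python
import numpy as np

rng = np.random.default_rng(0)

n_boxes = 10000            # coarse grid boxes sampled
n_fine = 128               # fine cells per coarse box, as in the text
p_convect = 0.02           # fraction of fine cells convecting (invented)

# Heating rate (K per day) in each fine cell: zero unless convecting.
convecting = rng.random((n_boxes, n_fine)) < p_convect
heating = np.where(convecting, rng.exponential(20.0, (n_boxes, n_fine)), 0.0)

# Instantaneous sub-grid tendency felt by each coarse box.
box_mean = heating.mean(axis=1)
print("long-term mean tendency (K/day):", round(box_mean.mean(), 2))
print("spread across boxes (K/day)    :", round(box_mean.std(), 2))
# The spread is comparable to the mean, echoing the broad distribution of figure 5.
```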

(c) A new paradigm for climate models

It is tempting to believe that the problem of unresolved scales will be ameliorated, and perhaps eventually eliminated altogether, by the development of higher-resolution models running on faster computers. But how long must we wait until the currently unresolved processes can be resolved? Suppose that we require an increase in resolution by a factor of 10, i.e. that the grid boxes are each reduced in size by a factor of 10 in each of their three dimensions (but see the following paragraph). In order to satisfy a numerical stability criterion, we must also reduce the time-step by a factor of 10, and so we would need a computer that was 10⁴ times as fast (assuming that the computational expense scales linearly with the number of grid points and time-steps). This corresponds to around 13 doublings of the computational speed, and, since such speeds have historically doubled once every 18 months (Moore's law), it is not expected to be achieved for another 20 years.
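The arithmetic behind this estimate is summarized below (a sketch using the factors quoted in the text and the 18-month doubling time).

```python
import math

space_factor = 10**3        # factor of 10 in each of three spatial dimensions
time_factor = 10            # factor of 10 in the time-step
speedup = space_factor * time_factor

doublings = math.log2(speedup)          # about 13.3 doublings
years = doublings * 1.5                 # Moore's law: one doubling per 18 months
print(f"required speed-up: {speedup}x -> {doublings:.1f} doublings -> about {years:.0f} years")
```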

There are many reasons why we must not simply wait until 2025, in the hope that the problem of unresolved scales will naturally heal itself, as described above. Reliable climate predictions are needed urgently, since after another 20 years of greenhouse gas emissions at current rates it may be too late to take the required mitigating action. Furthermore, even after an increase in model resolution by a factor of 10, there will still be many unresolved processes (figure 1). The most energetic of these will be those that are only just below the grid scale and that are, therefore, not amenable to conventional parameterization because of their low population within a grid box (§2b). Indeed, it seems highly unlikely that any GCM will ever be capable of modelling, deterministically, all of the dynamically important components on all of the relevant time-scales. Palmer (2001) has shown that the impact of unresolved scales cannot be made arbitrarily small simply by increasing the resolution, and Nicolis (2004) has shown that the mean-square error resulting from the neglect of unresolved scales generally grows initially as t², whatever the resolution. These results perhaps explain why certain model systematic errors have remained, despite many previous resolution increases.

Is there a better solution to the problem of unresolved scales, one that can be enacted now? There has recently been renewed interest in the idea that, since the sub-grid-scale events are effectively random, they should be explicitly modelled as such by adding random noise to models (e.g. Palmer 2001). This corresponds to an acceptance that the substitutions in equations (2.1) and (2.2) are approximations. Using the spatial discretization as an example, the difference between ∂T/∂x and ΔT/Δx is expected to fluctuate rapidly in space and time, suggesting that the approximation may be improved by adding a noise term, σ, to give

∂T/∂x ≈ ΔT/Δx + σ. (2.3)

The amplitude of σ may be determined from probability distributions such as that in figure 5 (Shutts & Palmer 2004). The introduction of noise to climate models is intended to re-inject the unresolved variability that is present in the real climate but lost in the model owing to the somewhat inappropriate application of the law of large numbers.
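A minimal sketch of equation (2.3) is given below (illustrative only: the grid values and the Gaussian noise amplitude are invented, rather than being drawn from a measured sub-grid distribution such as figure 5).

```python
import numpy as np

rng = np.random.default_rng(42)

def stochastic_gradient(T_grid, dx, sigma_amplitude):
    """Finite-difference gradient (equation (2.1)) plus a random term
    playing the role of sigma in equation (2.3)."""
    deterministic = np.diff(T_grid) / dx
    noise = sigma_amplitude * rng.standard_normal(deterministic.shape)
    return deterministic + noise

# Invented temperature values (K) on a 100 km grid and an invented noise amplitude.
T_grid = np.array([280.0, 281.2, 283.0, 284.1, 284.9])
print(stochastic_gradient(T_grid, dx=100.0, sigma_amplitude=0.005))
```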

This stochastic approach represents a new paradigm for climate models. Examples of the latest research are described in the following section.

3. Stochastic climate models

Recent studies in which random noise has been added to computer models of mid-latitude weather systems, El Niño events and the ocean THC are now described. These examples illustrate some of the phenomena that may be exhibited by stochastically forced climate models, including noise-induced transitions between different regimes.

(a) Mid-latitude weather systems

The mid-latitude atmosphere exhibits large-scale baroclinic waves, which travel slowly around the globe from west to east. The behaviour of baroclinic waves in the presence of noise has been studied by Williams et al. (2004) in a model of a rotating two-layer fluid. Models of such systems are less sophisticated than full GCMs, since they omit unnecessary details (e.g. topography) in order to expose the fundamental processes at work. The motion of a baroclinic wave of wavenumber 2 (i.e. such that two complete wavelengths fit around the globe) is shown in figure 6. Without noise, the wave propagates around the globe unmodified (with a greater regularity than observed in the real atmosphere, owing to the model's simplifications). If (and only if) random noise is added, then the wave undergoes a rapid transition to a baroclinic wave with wavenumber 1.

Figure 6

The propagation of a baroclinic wave around a latitude circle, as simulated by a two-layer quasi-geostrophic model in (a) the absence and (b) the presence of random noise. The quantity shown is the perturbation potential vorticity in the lower layer. Adapted from Williams et al. (2004).

The transition may be explained by an analogy with a particle moving in the potential well shown in figure 7. Without noise, the system has a tendency to remain in the wavenumber 2 state. When noise is added, the random perturbations increase the likelihood of the particle overcoming the potential barrier and moving to the wavenumber 1 state. This is called a noise-induced transition, and the concept has been used to explain transitions between glacial and interglacial conditions in an energy balance climate model (Nicolis 1993), El Niño and La Niña in a delayed-oscillator model (Stone et al. 1998), multiple decadal-scale El Niño regimes in an intermediate complexity climate model (Flügel & Chang 1999), multiple wind-driven ocean circulation regimes in a double-gyre model (Sura et al. 2001) and multiple ocean THC regimes (§3c).

Figure 7

Particle moving in a double-well potential with two stable equilibria, corresponding to mid-latitude baroclinic waves of wavenumbers 1 and 2.
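The particle analogy is easy to simulate. The sketch below (with an invented potential and parameter values, not the Williams et al. model) integrates an overdamped particle in a double-well potential using the Euler–Maruyama method: without noise the particle stays in its initial well, whereas with noise it hops between the two wells.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(noise_amplitude, n_steps=100000, dt=0.01):
    """Overdamped particle in V(y) = y**4/4 - y**2/2, integrated by Euler-Maruyama."""
    y = np.empty(n_steps)
    y[0] = -1.0                                  # start in the left-hand well
    for i in range(1, n_steps):
        drift = y[i-1] - y[i-1]**3               # -dV/dy
        y[i] = y[i-1] + drift*dt + noise_amplitude*np.sqrt(dt)*rng.standard_normal()
    return y

# Fraction of time spent in the right-hand well: zero without noise,
# substantial with noise, analogous to the transition in figure 6.
for eps in (0.0, 0.5):
    y = simulate(eps)
    print(f"noise amplitude {eps}: fraction of time with y > 0 = {(y > 0).mean():.2f}")
```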

(b) El Niño events

El Niño, which was first noticed (and given its name) by South American fishermen, refers to the appearance of unusually warm surface water every 3–7 years in the eastern equatorial Pacific Ocean. It is now known that El Niño and its cold-episode relative, La Niña, cause the strongest year-to-year climate signal on the planet, with impacts on temperatures, rainfall and storms around the globe. Effects attributed to recent El Niño events include fresh-water shortages in India, drought conditions and forest fires in Australia, increased rainfall and flooding in Peru and Ecuador and a greater incidence of hurricanes in Hawaii and Tahiti.

Not least because of the economic impacts—global damage estimates can reach £20 billion (Saunders 1999)—it is crucial to be able to predict El Niño events as far in advance as possible. Evidence that the addition of random noise can affect predictability has been presented by Flügel & Chang (1996). They used a coupled ocean–atmosphere model of intermediate complexity to study error growth in an ensemble of runs, i.e. a large number of simulations, which are identical apart from the use of slightly different initial conditions. The purpose of this approach is to represent uncertainties that arise because the initial state can never be perfectly observed. It is a general feature of models of chaotic nonlinear systems that initial-condition errors grow exponentially in time, demonstrating extreme sensitivity to the initial conditions. But the exponential error growth is replaced with square-root error growth when random noise is added, as shown in figure 8. This suggests that ensemble simulations with stochastic physics may lead to better (i.e. larger) estimates of the uncertainty in forecasts of El Niño, since uncertainty is systematically underestimated in conventional (non-stochastic) simulations.

Figure 8

Error growth in eastern equatorial Pacific sea-surface temperature (K), derived from ensemble runs of a coupled atmosphere–ocean model. The dashed lines correspond to the deterministic model and an exponential fit; the solid lines correspond to the inclusion of uncorrelated random noise in the atmosphere–ocean heat flux and a square-root fit. Reproduced with permission from Flügel & Chang (1996).
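The two growth laws can be contrasted with a toy calculation (a sketch, not the Flügel & Chang model): the separation of two nearby trajectories of a chaotic map grows roughly exponentially, whereas the spread of an ensemble driven by independent random forcing grows roughly as the square root of time.

```python
import numpy as np

rng = np.random.default_rng(3)

# Deterministic chaos: two logistic-map trajectories from nearly identical states.
a, b = 0.4, 0.4 + 1e-8
for step in range(1, 31):
    a, b = 3.9*a*(1.0 - a), 3.9*b*(1.0 - b)
    if step in (10, 20, 30):
        print(f"chaotic error after {step} steps: {abs(a - b):.2e}")

# Noise-dominated growth: ensemble of independent random walks; the spread
# (standard deviation across members) grows like sqrt(number of steps).
walks = rng.standard_normal((500, 30)).cumsum(axis=1)
spread = walks.std(axis=0)
print("stochastic spread ratio, step 30 / step 1 "
      f"(compare sqrt(30) = {np.sqrt(30):.1f}): {spread[-1]/spread[0]:.1f}")
```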

(c) The ocean thermohaline circulation

The component of the global ocean circulation that arises owing to density gradients is known as the thermohaline circulation (THC), since the density of sea water depends upon both its temperature (thermo) and salinity (haline). In the Atlantic Ocean, the THC brings warm equatorial surface waters northwards to high latitudes, partly via the Gulf Stream. Enormous quantities of heat are transported, equivalent to the output of one million power stations, helping to keep Western Europe many degrees warmer than it would otherwise be. Owing to the relatively high surface temperatures, evaporation of water to the atmosphere is large, leaving behind salty, and therefore dense, surface waters. The waters eventually cool and become so dense that they sink near Greenland, returning southwards near the ocean floor. The sinking water is replaced by yet more warm surface water from the tropics.

At least, that is how the THC operates today, for observational evidence suggests that it may in fact have two stable modes of operation. The palaeoclimate record obtained from long Greenland ice cores reveals several strong climatic oscillations, which Broecker et al. (1985) attribute to transitions between two THC states. The two modes of operation have also been identified in GCMs (e.g. Manabe & Stouffer 1988). One state corresponds to the ‘active’ THC we observe today; the other corresponds to an ‘inactive’ THC in which the aforementioned heat transport is switched off. The possibility of a future transition from the active state to the inactive state is a vital concern for scientists, policy makers and wider society, owing to its likely impacts on climate (Vellinga & Wood 2002).

Detailed GCM investigations of the effects of random noise on the THC have only just begun, but a foretaste of what they may reveal has been provided by Monahan (2002), who studied transitions between the two THC states in a simple ocean model. Starting from the present-day THC, he introduced increasing amounts of fresh water to the North Atlantic Ocean. This fresh water could represent the effects of melting ice sheets or increases in precipitation, both of which are predicted consequences of anthropogenic climate change. At some point, the THC undergoes a transition to the inactive state. The fresh water is then gradually taken out of the North Atlantic, and at some point the THC is re-established. The resulting hysteresis curves are shown in figure 9, in which the amount of added fresh water is denoted by the non-dimensional parameter μ. Without random noise, the THC undergoes a transition to the inactive state at μ=0.25, but when random noise is added, the transition occurs earlier, at μ=0.20. The reverse transition also occurs earlier when noise is added, making the hysteresis curve substantially narrower. These are further examples of noise-induced transitions (§3a).

Figure 9

Thermohaline circulation hysteresis curves obtained from a conceptual model without (thick line) and with (thin line) stochastic forcing. In non-dimensional units, the control variable μ is a high-latitude surface fresh-water flux into the ocean and the thermohaline circulation strength is 1−y. The curves are traced out in the anticlockwise sense. Reproduced with permission from Monahan (2002).
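The effect of noise on the transition point can be reproduced qualitatively with a toy bistable model (a sketch, not Monahan's model: the equation and parameter values below are invented). The control parameter μ is ramped slowly upwards; without noise the 'active' branch survives until its deterministic bifurcation point, whereas with noise the jump to the 'inactive' branch occurs earlier, as in figure 9.

```python
import numpy as np

rng = np.random.default_rng(7)

def transition_point(noise_amplitude, dt=0.01, ramp_rate=1e-3):
    """Ramp mu upwards in the toy model dy/dt = mu - (y**3 - y) + noise
    and return the value of mu at which the state jumps branches."""
    y, mu = -1.0, 0.0                        # start on the y ~ -1 ('active') branch
    while mu < 1.0:
        drift = mu - (y**3 - y)
        y += drift*dt + noise_amplitude*np.sqrt(dt)*rng.standard_normal()
        mu += ramp_rate*dt
        if y > 0.5:                          # jumped to the y ~ +1 ('inactive') branch
            return mu
    return float("nan")

for eps in (0.0, 0.2):
    print(f"noise amplitude {eps}: transition at mu = {transition_point(eps):.2f}")
# Ramping mu back down would trace the return transition, closing a hysteresis
# loop that is narrower in the noisy case.
```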

Rahmstorf (1995) located the position of the present climate system on a deterministic hysteresis curve similar to that in figure 9, but it is clear that this may overestimate the stability of the THC, owing to the concomitant neglect of sub-grid-scale variability. These results raise questions about the use of present-day climate models to predict abrupt shifts in THC regimes, as even the most sophisticated GCMs, including those used for policy formation (Houghton et al. 2001), generally display a bias towards low internal variance.

4. Discussion

A general review of the problem of unresolved scales in climate models has been presented. Important unresolved features include ocean eddies, gravity waves, atmospheric convection, clouds and small-scale turbulence, all of which are known to be key aspects of the climate system and yet are too small to be explicitly modelled. The law of large numbers and an analogy with the microscale and macroscale in fluids have served to demonstrate the inadequacy of conventional approaches to unresolved scales. The alternative stochastic approach, proposed relatively recently, holds that a noise-based solution may be more appropriate. Examples have been given of stochastic studies of mid-latitude weather systems, El Niño events and the ocean THC.

Noise-induced transitions between different stable states (§3a,c) are poorly understood at present, but they may play a crucial role in meteorology, oceanography and climate. Indeed, one of the most important metrics with which to assess the reliability of climate models must surely be their ability to predict the probabilities of such rapid transitions accurately, since these are arguably the climatological phenomena that threaten us most. Transition probabilities are known to depend sensitively on noise levels, and yet we have seen that the sub-grid-scale noise is filtered out of climate models as a necessity. Given that the full spectrum of spatial and temporal scales exhibited by the climate system will not be resolvable by models for decades, if ever, stochastic techniques offer an immediate, convenient and computationally cheap solution. Yet much is still unknown about the potential of stochastic physics to improve climate models, even though it is 30 years since Hasselmann (1976) first raised this possibility.

So strong is the evidence that weather forecasts are improved by random noise that it is now routinely added at the European Centre for Medium-range Weather Forecasts (Buizza et al. 1999). Furthermore, a team at the UK Met Office is currently testing various stochastic physics schemes in their weather forecasting model (Glenn Shutts 2005, personal communication). But, if you look at the contents of any climate journal, you will find that almost none of the modelling studies include noise. In the guidelines for preparing articles for this special issue, authors were encouraged to be more speculative, and perhaps more provocative, than they would normally be in a review article. In response to this instruction, I speculate that climatologists may be missing out on the benefits of noise that are currently being enjoyed by meteorologists. Given that the need for reliable climate predictions is more urgent than ever, this situation must not be allowed to continue. The case for including random noise in the next generation of climate models is strong, and it is my hope that this paper will serve as part of the manifesto for change.

Acknowledgments

I am grateful to Peter Dalin of the Institutet för rymdfysik (Swedish Institute of Space Physics) for supplying the photograph shown in figure 2, and to four referees for their helpful comments.

Footnotes

  • One contribution of 17 to a Triennial Issue ‘Astronomy and earth science’.

  1. Spectral approaches, which involve a truncated projection onto a finite set of basis functions, are sometimes used instead of the discretization approach (especially in atmosphere GCMs), but the filtering out of small scales described here still occurs.

  2. The use of a trigger function in deterministic convective cloud parameterizations may have the effect of naturally introducing fluctuations about the long-term mean, but such fluctuations are likely to be too small to reproduce the probability distribution of figure 5.

References
