Developing the next-generation climate system models: challenges and achievements

Julia Slingo, Kevin Bates, Nikos Nikiforakis, Matthew Piggott, Malcolm Roberts, Len Shaffrey, Ian Stevens, Pier Luigi Vidale, Hilary Weller

Abstract

Although climate models have been improving in accuracy and efficiency over the past few decades, it now seems that these incremental improvements may be slowing. As tera/petascale computing becomes massively parallel, our legacy codes are less suitable, and even with the increased resolution that we are now beginning to use, these models cannot represent the multiscale nature of the climate system. This paper argues that it may be time to reconsider the use of adaptive mesh refinement for weather and climate forecasting in order to achieve good scaling and representation of the wide range of spatial scales in the atmosphere and ocean. Furthermore, the challenge of introducing living organisms and human responses into climate system models is only just beginning to be tackled. We do not yet have a clear framework in which to approach the problem, but it is likely to cover such a huge number of different scales and processes that radically different methods may have to be considered. The challenges of multiscale modelling and petascale computing provide an opportunity to consider a fresh approach to numerical modelling of the climate (or Earth) system, which takes advantage of the computational fluid dynamics developments in other fields and brings new perspectives on how to incorporate Earth system processes. This paper reviews some of the current issues in climate (and, by implication, Earth) system modelling, and asks whether a new generation of models is needed to tackle these problems.

1. Introduction

Climate models are widely used to understand and predict the evolution of the climate system, and, in recent years, have been at the heart of attributing recent changes in climate to human activities. They are therefore of fundamental importance to an increasingly diverse body of users, and so there is an urgent need to improve both their fidelity and their capabilities to provide more confident assessments at regional and local levels of the future risks associated with both natural climate variability and human-induced climate change.

Climate modelling grew out of weather forecasting more than 40 years ago, and is based on the same fundamental sets of equations, solved on similar grids using similar numerical algorithms. Those basic numerical methodologies have barely changed over subsequent decades, except for refinements to the systems of equations to provide more accurate and conservative solutions, and the development of more computationally efficient codes. Initially, climate models were simply atmospheric models forced with observed ocean temperatures, but over the subsequent decades, increasingly complex models of the other components of the climate system—land, ocean and cryosphere—have been added. As a result, unlike numerical weather prediction, where there has been a continuous drive to higher and higher model resolution (i.e. finer grid spacing),1 climate modelling has changed little in structure from its beginnings, with regular grids (based around grid-point or spectral methods) of the order of 200–300 km in the atmosphere (table 1). Essentially, this means that, until very recently, climate models have been integrated at resolutions that cannot adequately resolve weather systems such as mid-latitude fronts and tropical cyclones.

Table 1

Progression of UK climate models. (Standard versions are given in ordinary type. HadCM2, HadCM3 and HadGEM1 have contributed to the IPCC Assessment Reports. Those highlighted in bold are recent research developments to investigate the importance of resolution in the coupled ocean–atmosphere system.)

This lack of an increase in resolution is typical across most of the world's major climate modelling centres, with the result that the typical resolution of the models used in the latest Fourth Assessment Report (AR4) of the Intergovernmental Panel on Climate Change (IPCC 2007) was still 150 km or more. This is despite an increase in available computing power of several orders of magnitude over the last few decades. The reason for this divergence in the emphasis on resolution between weather and climate prediction can be understood in terms of the competing demands of: (i) increasing complexity of climate models with the incorporation of Earth system components (ocean, land and cryosphere), (ii) increasing length of the simulations (decadal to centennial), and (iii) most recently, the emerging need for ensembles of simulations to provide probabilistic estimates of future changes.

Following the Stern Review on the Economics of Climate Change (Stern 2006) and the IPCC AR4 (IPCC 2007), there is now a consensus that climate change is happening. Consequently, the research agenda is moving to the development of prediction systems that can steer the ways in which the public, government and business sectors will adapt to and mitigate the regional and local effects of climate change, especially those associated with hazardous weather. This means that there is an urgent need to produce probabilistic global climate predictions at weather-resolving scales (below 50 km) with lead times of several decades. This is hugely demanding, and while the scientific expertise may be up to the task, the modelling and computational infrastructures are not. A substantial increase in computer power is required, and, although this is gradually being realized in systems such as the Research Councils UK (RCUK) new supercomputer, HECToR,2 it is evident that this increase in power will only be achieved through the use of many tens of thousands of processors, rather than through significantly faster processors.

It is also becoming clear that the legacy codes, which form the basis of the current generation of climate models, are ill-equipped to deal with the massively parallel architectures of the next generation of supercomputers. In the future, increases in computer power will be achieved not through faster processors, but through combining many more processors and through using multi-core processors. Whereas current climate model codes operate over only tens to hundreds of processors, in the future, it will be necessary to use tens of thousands of processors to achieve the increase in performance required to run at significantly higher model resolutions. Also, the development of multi-core processors introduces new challenges in terms of memory management. Consequently, a significant level of recoding will be required, and, for this reason alone, it is timely to consider whether the current approach to modelling the climate system should be reviewed.

Another important factor in this argument has been the increased understanding of the multiscale nature of the climate system, in which small-scale, high-frequency (hours to days) variations appear to play a key role in determining the large-scale, low-frequency (months to years) evolution of the physical climate system (e.g. Slingo et al. 2003; Lengaigne et al. 2004), especially in the tropics. A good example of this is the multiscale nature of tropical convection, which is frequently organized into coherent eastward and westward propagating disturbances associated with the Madden–Julian Oscillation, tropical cyclones and theoretical equatorial wave structures (e.g. Wheeler & Kiladis 1999; Yang et al. 2003). Climate models are notoriously poor at capturing this organization (e.g. Lin et al. 2006), and this has been variously attributed to the inability of current parametrizations of cumulus convection to represent the upscale energy cascade from the cloud and mesoscale to the synoptic and planetary scales. There are many other examples within the climate system where detailed features (e.g. ocean eddies, marginal sea overflows or orographic effects) may need to be resolved adequately. This paper reports on recent developments in high-resolution climate modelling in the UK, and considers the implications of the results for the next generation of models.

Finally, it is clear that the response of ecosystems and the chemical environment to climate change needs to be factored into the projections for the coming century and beyond. In other words, climate models need to evolve further into Earth system models. This is also immensely challenging because a proper framework does not yet exist for representing living organisms in global models, and traditional approaches using bulk parametrizations may not be appropriate. As with the physical climate system, many of the processes and interactions potentially operate on a very diverse range of temporal and spatial scales. This paper considers how these challenges might be approached and suggests ways in which a new approach to modelling the climate system might be fostered.

2. Lessons from current weather-resolving climate models

Recent developments in high-resolution climate modelling within the UK-HiGEM programme,3 especially in collaboration with Japan at the Earth Simulator,4 have delivered multi-decadal simulations that have provided new insights into important coupled processes operating at weather and ocean eddy scales. Unlike most resolution studies, which have been largely based on atmosphere-only or ocean-only models, or on significantly increased resolution in one component only (e.g. Roberts et al. 2004), this is one of the first times that the resolution of both components has been increased simultaneously to give weather-resolving resolution in the atmosphere and eddy-permitting resolution in the ocean.

One of the most striking results of this study is that the high-resolution atmosphere is able to respond to the fine-scale detail in the ocean-surface temperature fields in a coherent way, with important implications for the mean climate and its variability. For example, it has been shown that oceanic tropical instability waves and the response of the near-surface winds to them (figure 1), both now well resolved in HiGEM, have a fundamental effect on the mean state of the equatorial Pacific Ocean, and hence on the global mean climate and the representation of the El Niño Southern Oscillation (ENSO), which is strikingly improved (Shaffrey et al. in press). Further examples include more realistic representations of tropical cyclones and their influence on the upper-ocean heat budget, as well as mid-latitude frontal systems and their effects on regional precipitation statistics.

Figure 1

Instantaneous fields of surface wind stress divergence (colours) and sea-surface temperatures (contours) from (a) HiGEM1 and (b) HadGEM1, showing the presence of breaking tropical instability waves in HiGEM1, but not in HadGEM1. The coherent atmospheric response to these waves in the ocean is also much stronger in HiGEM1 and comparable with that seen in satellite observations. Adapted from Shaffrey et al. (in press).

The existence of coherent coupling between the ocean and atmosphere on fine spatial scales and on relatively short time scales has challenged the conventional approach to climate modelling, which assumes that sub-gridscale processes can be parametrized within a single component of the system. The results from HiGEM suggest that there may be important scales of coupled behaviour that cannot be parametrized and that will therefore need to be resolved adequately. The implication of this is that there may be a minimum resolution for modelling the coupled system that may be higher than, or at least different from, that for the individual components. It also means that it may be important for the atmosphere and ocean components of coupled models to have resolutions that are roughly equivalent, so that the atmosphere can respond to, and in turn force, the ocean on commensurate temporal and spatial scales.

The multiscale nature of the coupled processes demonstrated by HiGEM also has wider implications for the extension of physical climate models to full Earth system models, through the incorporation of biological and chemical processes. As a demonstration of this point, figure 2 compares the sea-surface height variability observed by TOPEX/Poseidon satellite altimetry and simulated by HadGEM1 and HiGEM1, along with the curl of the wind stress from HiGEM1. The sea-surface height variability delineates regions where ocean eddies are active, and the wind stress curl shows where the atmosphere is instrumental in forcing upwelling in the ocean, both of which are critical for biological production and carbon uptake by the oceans.

Figure 2

Climatologies of sea-surface height variability (cm) from (a) TOPEX/Poseidon, (b) HiGEM1 and (c) HadGEM1. (d) The climatology of the wind stress curl from HiGEM1. Adapted from Shaffrey et al. (in press).

Only HiGEM is able to capture the sea-surface height variability (i.e. ocean eddy activity) observed along the Southern Antarctic Circumpolar Current Front, the Agulhas and Kuroshio currents and the Gulf Stream, with implications for the energy, heat and salinity budgets. Even at 1/3° resolution, HiGEM's ocean is still deficient in these regions, and even higher resolution is likely to be necessary to properly capture the structure of the Gulf Stream, for example. Similarly, only HiGEM can produce the detailed structures in the wind stress curl observed in the QuikSCAT observations (Chelton et al. 2004; Risien & Chelton 2008) along coasts, around islands and in association with strong sea-surface temperature gradients (e.g. the Gulf Stream) and large ocean eddy activity (e.g. the Southern Ocean). It is these regions that tend to support the largest biological activity in the global oceans, owing to the upwelling of nutrients forced by these detailed wind structures and associated with ocean eddy activity and mixing. It is not yet known how critical detailed atmospheric and oceanic structures of the sort produced by HiGEM may be when a climate model is coupled to ocean biology, for example.

The computational resource required for models of the resolution of HiGEM is still at the limit of what is feasible for the long production runs required for climate change projections. There is also a technological limitation, since, even with considerable optimization, a throughput of only 1–2 years of simulation per day was possible, largely because the code was not designed to scale across many processors. These challenges are not unique to this model, but apply across many groups internationally, and will need to be solved if higher resolution models are to be used more widely to simulate the climate system and predict its future behaviour.

3. Numerical methods for multiscale systems

The multiscale nature of the climate system, which the UK-HiGEM project has demonstrated as potentially fundamental for accurate simulations, points to the need for much higher resolution to capture critical aspects of atmospheric and ocean flows. It also challenges our traditional way of modelling the system, and suggests that a new approach may be worth considering. The use of regular grids, in which resolution is increased everywhere even though some parts of the domain may not require it (a computationally very expensive strategy), should be reviewed in the light of developments in other disciplines, where sophisticated tools that represent the multiscale nature of the problem (e.g. combustion or flow over vehicles) have been developed using innovative computational fluid dynamics (CFD). New grids that avoid singularities, adaptive mesh refinement (AMR) techniques that allow the grid to evolve with the flow and place resolution where the complexity demands it, techniques such as cut cells that can represent complex surfaces (i.e. landscapes), and dynamic load-balancing methods that enable high computational efficiency are all methodologies used elsewhere that could potentially be brought into climate modelling.

(a) Mesh adaptation

The Imperial College Ocean Model (ICOM) is addressing many of these issues (Pain et al. 2005; Piggott et al. 2008). The ICOM is a three-dimensional, non-hydrostatic ocean model, which uses unstructured meshes and anisotropic mesh adaptivity so that a very wide range of coupled solution structures may be accurately and efficiently represented in a single numerical simulation without the need for nested grids. The approach is based upon a finite-element discretization on an unstructured tetrahedral mesh, which is optimized to represent highly complex geometries. Throughout a simulation, the mesh is dynamically adapted in three dimensions to optimize the representation of evolving solution structures (figure 3). Finite-element (and finite-volume) methods on unstructured meshes are gaining popularity in the oceanographic community because they are well suited to handling flows confined in complex ocean basins.
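
To illustrate the flavour of metric-based anisotropic adaptivity, the following sketch builds a metric tensor from the Hessian of a solution field, a common recipe in the adaptive finite-element literature. It is a minimal illustration only, with hypothetical numbers; ICOM's actual error measures and optimization algorithms are considerably more sophisticated.

```python
import numpy as np

def anisotropic_metric(hessian, eps=1e-2, h_min=1e3, h_max=1e6):
    """Metric tensor M such that edges with sqrt(e^T M e) ~ 1
    equidistribute interpolation error: M ~ |H| / eps, with the
    implied edge lengths clipped to [h_min, h_max] (metres)."""
    H = 0.5 * (hessian + hessian.T)              # symmetrize
    eigval, eigvec = np.linalg.eigh(H)
    lam = np.clip(np.abs(eigval) / eps, 1.0 / h_max**2, 1.0 / h_min**2)
    return eigvec @ np.diag(lam) @ eigvec.T

# A boundary-current-like field varying rapidly across (x) and slowly
# along (y) the current; the Hessian entries are purely illustrative.
H = np.array([[4e-8, 0.0],
              [0.0, 1e-12]])
M = anisotropic_metric(H)
h = 1.0 / np.sqrt(np.linalg.eigvalsh(M))         # desired edge lengths (m)
print(h)   # ~[1e5, 1e3]: long elements along the current, short across it
```

This is how long, thin elements aligned with a boundary current, of the kind shown in figure 3, arise naturally: the metric requests short edges only in the direction in which the solution curves sharply.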

Figure 3

Idealized simulation of a western boundary current using ICOM. The dynamics are generated by an anticyclonic wind field applied to the square domain on a beta-plane. The anisotropic adapted mesh is shown after 1 year of simulated time for (a) a Reynolds number of 625 and (b) 10 000. The mesh can be seen to be resolving the western boundary current, eddies and vorticity filaments in the flow. (c)(i) shows how the number of nodes varies over the simulated year; the spin-up period can be clearly seen. The number of nodes was averaged over the final 6 months of the simulation, and (ii) shows this average, plotted against Reynolds number, as the lower line in the log–log plot. The smallest mesh size used was also averaged over the same period; the number of nodes that a uniform fixed structured mesh would require to achieve this resolution is the upper line in the log–log plot. An interesting result is not only the large difference in magnitude, but also the lower scaling with the adaptive mesh. This demonstrates how the anisotropic approach is able to refine in preferential directions only, e.g. orthogonally to the boundary as the boundary layer narrows with increasing Reynolds number.

The non-hydrostatic nature of the code enables it to cope with steep topography, which is essential for handling overflows, for example. The adaptive algorithm also makes use of anisotropic measures of solution complexity and a load-balanced parallel mesh optimization algorithm to vary resolution and allow long, thin elements to align with features such as boundary currents. The ICOM also takes advantage of load-balanced domain decomposition algorithms so that it can run on parallel machines with distributed memory. The intention is to test the scaling and performance of this modelling approach on thousands of processors on the new RCUK supercomputer, HECToR.
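
The load-balanced domain decomposition mentioned above can be illustrated with a toy heuristic. The sketch below greedily assigns mesh partitions of unequal cost to processors, always giving the next-largest partition to the least-loaded processor. Production models rely on dedicated parallel graph partitioners rather than anything this simple, so this is purely illustrative, and all costs are hypothetical.

```python
import heapq

def balance(costs, nproc):
    """Greedy longest-processing-time assignment: give each partition,
    largest first, to the currently least-loaded processor."""
    heap = [(0.0, p, []) for p in range(nproc)]      # (load, processor, partitions)
    heapq.heapify(heap)
    for part, cost in sorted(enumerate(costs), key=lambda pc: -pc[1]):
        load, p, parts = heapq.heappop(heap)         # least-loaded processor
        heapq.heappush(heap, (load + cost, p, parts + [part]))
    return sorted(heap, key=lambda t: t[1])

costs = [9.0, 7.0, 6.0, 5.0, 4.0, 3.0, 2.0]          # hypothetical partition costs
for load, p, parts in balance(costs, 3):
    print(f"processor {p}: partitions {parts}, load {load}")
```

The point of such balancing is that, as the mesh adapts and partition costs change, work can be redistributed so that no processor sits idle waiting for the most heavily loaded one.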

The motivation behind the development of ICOM is the recognition that there are a wide range of critical spatial and temporal scales, which must be covered to resolve important physical processes coupled to coastal- and basin-scale ocean flows simultaneously. Furthermore, the fundamentally important role that the ocean plays in climate variability and change requires that features, such as boundary currents, eddies, dense overflows and deep convection events, must be adequately resolved to answer such questions as to how the Atlantic thermohaline circulation will respond to global warming. As the climate change research agenda moves to addressing impacts and considering how to adapt to them, fully integrated models that can address interactions between global and localized phenomena will need to be developed.

In the atmosphere, static mesh refinement is used operationally for weather forecasts using stretched grids (Fox-Rabinovitz et al. 2006) or one-way nesting (e.g. Capon 2003). Dynamic refinement with two-way nesting of block-structured grids has been studied for use in atmospheric models for some time (e.g. Fiedler & Trapp 1993; Skamarock & Klemp 1993), and structured hierarchical AMR has been used by groups in the UK and USA (e.g. Hubbard & Nikiforakis 2003; Nikiforakis 2005; Jablonowski et al. 2006). A grid of constant resolution completely covers the computational domain, and a hierarchical system of grid levels is built upon it. The grid levels are composed of mesh patches; their number and location are continuously altered (thus refining and de-refining the domain) in response to the evolving flow. Every level has its own, continuously varying time step, optimized according to the local wind speeds and mesh resolution; at the end of every iteration, the solution and time step across the grid hierarchy are synchronized. Mesh refinement can be either static (e.g. to resolve important local features, such as a megacity or a volcano and their emissions) and/or dynamic, to resolve time-dependent aspects of the weather, such as hurricanes, fronts and tropopause folds (figure 4). The hierarchical structure of the computational grid and its inherently dynamic nature make this approach fundamentally different from ‘nested grids’.
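
The per-level time stepping and synchronization described above can be summarized in a few lines. The following sketch is a deliberately skeletal recursion in the style introduced by Berger and Oliger, on which structured hierarchical AMR builds; `step` and `synchronize` are placeholders for a real finite-volume update and the conservative coarse–fine coupling (averaging down and flux correction), and the hierarchy itself would be rebuilt periodically as flow features move.

```python
# Minimal sketch of hierarchical AMR time stepping with subcycling,
# assuming a fixed refinement ratio per level.

def step(patch, dt):
    patch["t"] += dt            # stand-in for one finite-volume update

def synchronize(coarse, fine):
    pass                        # stand-in: average fine data onto coarse patches

def advance(level, dt, hierarchy, ratio=2):
    for patch in hierarchy[level]:
        step(patch, dt)                          # all patches on a level share dt
    if level + 1 < len(hierarchy):
        for _ in range(ratio):                   # finer level subcycles in time...
            advance(level + 1, dt / ratio, hierarchy, ratio)
        synchronize(hierarchy[level], hierarchy[level + 1])  # ...then levels re-align

hierarchy = [[{"t": 0.0}], [{"t": 0.0}, {"t": 0.0}]]   # two levels of patches
advance(0, 600.0, hierarchy)                           # one coarse step of 600 s
print(hierarchy[0][0]["t"], hierarchy[1][0]["t"])      # both levels reach t = 600 s
```

The recursion makes the temporal refinement explicit: fine patches take many small steps for each coarse step, and the levels are reconciled only at the shared synchronization times.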

Figure 4

Example of an off-line simulation of a tropopause-folding event (June 1996) using structured hierarchical AMR on a regular longitude–latitude spherical grid (Nikiforakis 2005). The base grid resolution is uniformly 120×60×8, with two additional levels of refinement (×2 and ×3), giving an effective resolution of 720×360×48. The number of computational cells varied between 2.7×10⁶ and 3.1×10⁶ (as opposed to 12.4×10⁶ for the unadapted grid of equivalent resolution), and the simulation ran overnight on a desktop workstation.
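
The savings quoted in the caption follow directly from the refinement factors; a quick check of the arithmetic (all numbers are taken from the caption):

```python
# Two refinement levels (x2 and x3) over a 120x60x8 base grid give an
# effective resolution of 720x360x48.
base = (120, 60, 8)
factor = 2 * 3
effective = tuple(n * factor for n in base)
uniform_cells = effective[0] * effective[1] * effective[2]
print(effective, uniform_cells)   # (720, 360, 48) 12441600, i.e. ~12.4 million
# The adapted run needed only 2.7-3.1 million cells, roughly a quarter of
# what the equivalent uniform grid would require.
```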

As in the ICOM, mesh adaptation is based on physical criteria or error estimates, and the adaptation is anisotropic and in all three dimensions. However, in the structured AMR case, the hierarchical arrangement of the mesh patches allows temporal as well as spatial refinement (Hubbard & Nikiforakis 2003). The combination of the hierarchical mesh structure and distinct mesh patches makes the approach highly amenable to parallelization, without the need for complex domain decomposition algorithms.

The underlying numerical schemes come from the finite-volume class of methods, and have desirable properties such as conservation and monotonicity. The same combination of numerical schemes and mesh adaptation techniques has been used to study small-scale processes in the atmosphere, e.g. gravity currents (Patterson et al. 2004) and resolved processes in polar stratospheric clouds (Lowe et al. 2003), which demonstrates the potential of these methodologies for an integrated, multiscale approach for atmospheric processes.

Unstructured and polyhedral meshes are also now being considered for adaptive refinement of the atmosphere (e.g. Bacon et al. 2000; Läuter et al. 2007; Weller & Weller 2008). These techniques may have the advantage of allowing gradual refinement. Unstructured meshes of quadrilaterals with non-conforming 2 : 1 refinement patterns are also gaining some popularity (e.g. Fournier et al. 2004).

There are a number of reasons why unstructured adaptive meshes have not yet proved entirely beneficial for atmospheric modelling. Adaptive mesh models often use only first- or second-order accurate numerics (insufficient for global models), and the staggered schemes in which velocity and pressure are stored at alternating locations to improve wave dispersion work best on uniform structured grids. Furthermore, there is an additional cost in storing and navigating the mesh structure, and changes in resolution can reduce accuracy. However, recent developments with a polyhedral model of the global atmosphere suggest that these problems are surmountable (Weller & Weller 2008). A test-case simulation of a barotropically unstable jet (figure 5) shows that the polygonal mesh reproduces the reference solution well, whereas, with the cubed sphere, instabilities in the jet are spuriously triggered at the cube edges and grow unrealistically.

Figure 5

Vorticity of an inviscid, barotropically unstable jet after 6 days simulated using the polyhedral model. The test case is from Galewsky et al. (2004). Each simulation uses a time step of 150 s. (a) Reference solution using a 256×512 reduced latitude–longitude grid (110 210 cells). (b) A non-uniform polygonal mesh (15 255 cells) and (c) its associated solution. (d) A cubed sphere with 2 : 1 refinement patterns (23 280 cells) and (e) its associated solution. Contour interval, 2×10⁻⁵ s⁻¹; negative values in blue.

The polyhedral model uses a new multidimensional quadratic differencing scheme that maintains high accuracy where the mesh is distorted or non-uniform, and gives results as accurate as a spectral model, but with the flexibility of an unstructured mesh. These simulations also use an innovative blended scheme, a combination of co-located and staggered algorithms. This means that high-order accuracy for mass, momentum and energy conservation and accurate wave dispersion can be achieved on a polyhedral mesh without the need for diffusive differencing, viscosity or filtering. Without this blend, either spurious grid-scale oscillations in vorticity are generated, or the momentum of the jet is degraded for this test case.

A full discussion of the application of adaptive methods in atmospheric modelling is provided in Behrens (2006), but the challenges of taking these approaches through to a full climate model remain huge, particularly with regard to representing the phase changes of water in the atmosphere (i.e. clouds and precipitation) and ensuring conservation of energy, momentum and tracers.

(b) Mesh generation

Another potentially substantial benefit of using contemporary CFD techniques relates to representing landscapes more accurately in global and regional climate models. The land surface is highly heterogeneous and interactions between the atmosphere, vegetation, soils, orography and urban environments occur on all scales. Structured and unstructured approaches can be used to represent orography more accurately than in current models, and in such a manner as not to generate spurious waves (e.g. in the presence of steep orography). Embedded boundary (or cut-cell) techniques for meshing complex surfaces are already widely used in other branches of CFD and have potentially important applications for climate modelling (e.g. Gatti-Bono & Colella 2006). This approach is fully compatible with hierarchical AMR and allows arbitrary discretizations of the sphere, in place of the regular longitude–latitude one; for example, figure 6 shows a cubed sphere grid with two levels of static mesh refinement.
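
As a schematic of the cut-cell idea, the sketch below computes, for a single grid cell, the fraction of its area lying above a terrain profile, so that the surface cuts through cells rather than being stair-stepped onto them. Real embedded-boundary schemes also track face apertures and handle very small cut cells (e.g. by merging) to keep time stepping stable; the terrain function and all numbers here are hypothetical.

```python
import numpy as np

def open_fraction(x0, x1, z0, z1, terrain, n=64):
    """Fraction of the cell [x0,x1] x [z0,z1] lying above z = terrain(x),
    estimated by sampling columns across the cell."""
    xs = np.linspace(x0, x1, n)
    free = np.clip(z1 - np.maximum(terrain(xs), z0), 0.0, z1 - z0)
    return free.mean() / (z1 - z0)

def hill(x):
    # Hypothetical Gaussian hill, 400 m high with a 2 km half-width.
    return 400.0 * np.exp(-((x - 5.0e3) / 2.0e3) ** 2)

frac = open_fraction(4.0e3, 5.0e3, 200.0, 400.0, hill)
print(round(frac, 2))   # a partially cut cell: strictly between 0 and 1
```

A finite-volume update then weights each cell by its open fraction, so steep orography is represented without the distorted coordinate surfaces that can generate spurious waves.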

Figure 6

(a) A cubed sphere with two levels of static mesh refinement. Although hierarchical AMR is visually similar to nested grids, the two approaches are fundamentally different (see Hubbard & Nikiforakis 2003). (b) The complex orography of Scandinavia has been captured by an embedded boundary mesh-generation approach. Every mesh patch is integrated at its own time step, thus allowing temporal refinement, which translates to significant savings in CPU time and memory (K. Bates & N. Nikiforakis 2008, unpublished data).

4. Complexity in the climate system

As already discussed, the climate (Earth) system operates on a wide range of spatial and temporal scales with many multiscale interactions. In various respects, the climate could be viewed as a complex system in which self-organization can occur and from which large-scale properties emerge that cannot be inferred from the behaviour of the individual elements (figure 7). Complexity theory is used in many applications (e.g. economics, nervous systems or cell biology), but has not been widely considered as an approach for modelling the climate system, although, of course, the concepts of chaos and nonlinearity are at the core of the predictability of weather and climate (e.g. Lorenz 1965; Palmer 1993). The notion of a complex system, as outlined in figure 7, represents rather well our changing perception of how the climate (and Earth) system may work, with a continuum of interacting scales, and suggests possible new ways in which climate models might be developed in the future.

Figure 7

Schematic of the characteristics of complex systems. Courtesy of Marshall Clemens and New England Complex Systems Institute.

As an example, let us consider one of the major challenges in modelling weather and climate, that of simulating the upscale organization of tropical convection, from the individual clouds to mesoscale (e.g. squall lines), synoptic (e.g. cyclones) and planetary (e.g. Madden–Julian Oscillation) scale structures. Organized tropical convection provides the backbone of tropical weather and climate and is crucial for determining regional rainfall patterns, monsoons and extreme events, such as hurricanes and typhoons. It is now generally accepted that the inability of current climate models to represent organized tropical convection is a major stumbling block in providing confident projections of future changes in climate at the regional and local levels.

Organized convection could be viewed as a complex system in which the individual clouds are represented by the simple systems in figure 7, cloud clusters by composites of simple systems, and mesoscale organized convection by processes of self-organization, leading finally to the emergence of synoptic and planetary scale structures. However, the traditional approach to parametrizing cumulus convection has taken a very top-down view and has sought, empirically, to relate the effects of convection to the structure of the large-scale environment and the resolved-scale forcing. It assumes that there is no coherent coupling between the dynamics (i.e. the winds) and the physics (i.e. the thermodynamics) of convection at the sub-gridscale, and thus effectively eliminates the possibility of self-organization and its emergent properties on the resolved scale. Some progress has been made in applying techniques, such as cellular automata and stochastic physics (e.g. Shutts & Palmer 2007), to represent the upscale organization, but much more research is needed. The emerging capability to perform explicit global simulations at cloud-system-resolving scales (i.e. 1–3 km; e.g. Miura et al. 2007) potentially offers the opportunity to understand how the self-organization of convection works and what controls the emergent large-scale behaviour, and hence to explore new ways to parametrize organized convection based on the mathematics of complex systems.
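
To make the cellular-automaton idea concrete, the following toy model is in the spirit of, though far simpler than, the stochastic schemes cited above: a grid cell's probability of convecting increases with the number of convecting neighbours, so isolated cells tend to decay while coherent clusters emerge and persist. All parameter values are illustrative and are not taken from any published scheme.

```python
import numpy as np

rng = np.random.default_rng(1)
state = (rng.random((64, 64)) < 0.05).astype(int)   # sparse initial convection

def neighbours(s):
    """Count of convecting cells among the 8 neighbours (periodic domain)."""
    return sum(np.roll(np.roll(s, i, axis=0), j, axis=1)
               for i in (-1, 0, 1) for j in (-1, 0, 1)) - s

for _ in range(50):
    n = neighbours(state)
    p_birth = 0.02 + 0.10 * n        # neighbours encourage new convection...
    p_survive = 0.30 + 0.15 * n      # ...and prolong existing convection
    r = rng.random(state.shape)
    state = np.where(state == 1, r < p_survive, r < p_birth).astype(int)

print(state.sum(), "active cells; clustering is visible in `state`")
```

Even this crude rule set produces upscale organization from local interactions, which is precisely the emergent behaviour that a purely column-by-column convection parametrization cannot represent.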

It may also be reasonable to extend the concept of a complex system to the challenge of introducing living organisms into climate (Earth) system models. Theoretical bases for modelling the physical system are much firmer than those for natural ecosystems; we do not yet have a clear framework in which to approach the problem of modelling the biosphere, in its broadest terms, that can represent in a functional form how it is influenced by, and itself influences, the climate (Earth) system. Again, how the biosphere evolves and interacts with the physical climate system covers such a huge number of different scales and processes that a radically different approach to that traditionally used in global climate (Earth) system models may need to be considered. Ecosystem modelling has developed independently, using individual-based models of communities and ecosystems (e.g. Grimm & Railsback 2005) that can be evaluated against measurable properties and local-scale processes. It may be timely to investigate how these approaches might be brought into climate and Earth system modelling to describe the interactions between the physical climate system and the large-scale behaviour of biomes and their related biogeochemical fluxes (e.g. Woods et al. 2005).

Finally, another element of a next-generation climate model may be the incorporation of human responses into the system. As with ecosystems, a different approach will be needed. Agent-based modelling, which describes the actions and interactions of autonomous individuals in a network, is widely used to model social systems and has been incorporated in integrated assessment models (e.g. Moss et al. 2001). The potential to include these methodologies in full climate system models should also be explored.
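
A minimal illustration of the agent-based approach, not drawn from any of the cited integrated-assessment models, might look as follows: agents on a ring network decide whether to adopt a mitigation behaviour under a weak external climate signal plus imitation of their neighbours, so that adoption spreads through the network rather than being imposed uniformly. All names and parameter values are hypothetical.

```python
import random

random.seed(0)
N = 200
adopted = [random.random() < 0.05 for _ in range(N)]   # a few early adopters

def step(adopted, climate_pressure=0.02, imitation=0.5):
    """One decision round: each agent weighs an external signal plus the
    behaviour of its two neighbours on the ring."""
    new = adopted[:]
    for i in range(N):
        peers = (adopted[(i - 1) % N] + adopted[(i + 1) % N]) / 2.0
        p = climate_pressure + imitation * peers       # individual + social drivers
        if not adopted[i] and random.random() < p:
            new[i] = True
    return new

for year in range(30):
    adopted = step(adopted)
print(f"adoption after 30 steps: {sum(adopted)}/{N}")
```

The aggregate adoption curve emerges from local interactions rather than from a prescribed bulk response, which is the property that makes such schemes attractive for coupling human behaviour to a climate model.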

5. Concluding remarks

This paper has considered recent progress in high-resolution modelling of the climate system and used the results to argue that it may be timely to consider a radically different approach to the development of the next-generation models. The following three major challenges have been identified and possible ways forward have been discussed:

  1. how to represent the multiscale nature of the climate system;

  2. how to develop model codes that will exploit future petascale computing architectures; and

  3. how to represent living organisms in climate (Earth) system models.

Numerical methods that draw on developments in other branches of CFD are showing real promise in enabling the multiscale nature of the ocean and atmosphere to be modelled using AMR techniques. Although still in their infancy in climate-related applications, these techniques offer the real possibility of capturing fundamental multiscale aspects of the flow without the prohibitive expense of going to ultra-high resolution globally. At the same time, all climate modelling centres are facing the challenge of how to make their legacy codes fit for use on massively parallel computing systems so that they can exploit the petascale facilities that are already becoming available. Inevitably, this will involve recoding, and so it may be appropriate to consider whether the numerical algorithms should also be changed at the same time. The latest developments based on AMR and new grids, outlined in this paper, are already addressing the need for scalable and efficient algorithms.

As climate system models evolve towards Earth system models, the challenges of including living organisms may demand radically different approaches to parametrization, as well as emphasizing the importance of engaging with other disciplines. Climate is a complex system, and it is increasingly evident that the multiscale nonlinear interactions both within a component and between components of the system may require a fundamentally different approach. Frameworks in which the behaviour of individual elements is scaled up to provide information at the resolved scale of the model may involve complex mathematical and statistical techniques. We propose that complex systems analysis has the potential to contribute new conceptual frameworks for representing multiscale interactions, and hence to play a role in the development of next-generation climate models.

Alongside these challenges, however, are also some new opportunities, which will serve to revolutionize the ways in which we model the climate (Earth) system. With the advent of more powerful computers, it is now possible, for the first time, to model key processes and phenomena at the resolved scale over large domains, thus enabling multiscale interactions to be explored through the use of computational ‘laboratories’. At the same time, new observations of the Earth system, both remotely sensed and surface based, are providing new perspectives on the four-dimensional evolution of the climate system with unprecedented detail (e.g. Haynes & Stephens 2007). The capability to model the system at a comparable resolution, so that these data can be properly assimilated and also fully exploited in model evaluation, will lead to significant advances in understanding and predicting the climate system.

Acknowledgments

The support of the Natural Environment Research Council (NERC) for the UK-HiGEM, UJCC and ICOM projects is gratefully acknowledged, as is the contribution of all members of those consortia to the results reported in this paper.

Footnotes

  • One contribution of 24 to a Discussion Meeting Issue ‘The environmental eScience revolution’.

  1. At the European Centre for Medium-Range Weather Forecasts (ECMWF), for example, global forecasts were made at a resolution of approximately 200 km in the early 1980s, increasing to approximately 60 km in the early 1990s and to 25 km in 2006.

  2. HECToR: high-end computing terascale resource. HECToR is a Cray machine with an initial peak performance of 60 TFlops (see http://www.hector.ac.uk).

  3. UK-HiGEM: UK high-resolution global environment model. A NERC consortium project in collaboration with the Met Office Hadley Centre to develop a high-resolution version of HadGEM1 (60–90 km atmosphere, 1/3° ocean).

  4. UK–Japan Climate Collaboration (UJCC): a jointly funded NERC and Met Office Hadley Centre programme to collaborate with the Earth Simulator Center, Yokohama, and to exploit the Earth Simulator for multi-decadal high-resolution simulations of current and future climates.

