## Abstract

Numerical simulations of the atmosphere are routinely carried out on various scales for purposes ranging from weather forecasts for local areas a few hours ahead to forecasts of climate change over periods of hundreds of years. Almost without exception, these forecasts are made with space/time-averaged versions of the governing Navier–Stokes equations and laws of thermodynamics, together with additional terms representing internal and boundary forcing. The calculations are a form of large eddy modelling, because the subgrid-scale processes have to be modelled. In the global atmospheric models used for long-term predictions, the primary method is implicit large eddy modelling, using discretization to perform the averaging, supplemented by specialized subgrid models, where there is organized small-scale activity, such as in the lower boundary layer and near active convection. Smaller scale models used for local or short-range forecasts can use a much smaller averaging scale. This allows some of the specialized subgrid models to be dropped in favour of direct simulations. In research mode, the same models can be run as a conventional large eddy simulation only a few orders of magnitude away from a direct simulation. These simulations can then be used in the development of the subgrid models for coarser resolution models.

## 1. Introduction

Numerical weather prediction and climate modelling are carried out by assuming that the atmosphere is an ideal compressible gas and obeys the Navier–Stokes equations and the laws of thermodynamics. Additional equations are used to represent phase changes, particularly of moisture. The basic composition of the atmosphere is assumed to be spatially uniform; further equations represent the transport and mixing of constituents that are not well mixed. Equations are also added to the system to represent radiative forcing, which is strongly dependent on local atmospheric composition, and the transfers of heat, momentum and moisture between the surface and the atmosphere.

Observations show that the solutions to these equations are extremely complicated. Figure 1 shows the observed spectra of atmospheric wind and potential temperature variations at a height of approximately 10 km, calculated from aircraft data. Near the Earth's surface, the effects of orography, land–sea contrasts and features such as buildings and vegetation will lead to even greater variability at small scales. A ‘solution’ of the governing equations can thus only be attempted in an averaged sense, and so is an example of large eddy modelling. Indeed, the Smagorinsky subgrid model, often used in turbulence-resolving simulations, was originally formulated for use in global models (Smagorinsky 1963). However, unlike traditional large eddy modelling, the averaging scales that have to be used are many orders of magnitude greater than those that would be required for direct numerical simulation. Combining the evidence shown in figure 1 with the value of the kinematic viscosity of air (10^{−5} m^{2} s^{−1}) suggests that a scale of approximately 1 mm would be required for a direct solution of the governing equations. This is seven orders of magnitude less than the scale used in the finest resolution global models, and six orders of magnitude less than that used in the fine-resolution local models employed for weather forecasting on the scale of the UK.
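The millimetre figure quoted above can be checked against the standard Kolmogorov scaling η = (ν³/ε)^{1/4}. The sketch below is a back-of-envelope estimate only; the dissipation rate ε is an assumed representative tropospheric value, not a number taken from the text.

```python
# Estimate of the Kolmogorov dissipation scale eta = (nu^3 / eps)^(1/4).
# nu is the kinematic viscosity of air quoted in the text; eps is an
# ASSUMED typical tropospheric dissipation rate, not a value from the paper.
nu = 1.0e-5        # kinematic viscosity of air (m^2 s^-1)
eps = 1.0e-4       # assumed energy dissipation rate (m^2 s^-3)

eta = (nu**3 / eps) ** 0.25
print(f"Kolmogorov scale: {eta * 1e3:.1f} mm")   # O(1 mm)

# Ratio to a 25 km global grid: about seven orders of magnitude.
print(f"Orders of magnitude below a 25 km grid: {25e3 / eta:.1e}")
```

Any ε within an order of magnitude of the assumed value leaves the estimate at the millimetre scale, consistent with the text.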

Given these limitations, it is somewhat surprising that effective numerical prediction of the weather is possible. However, current experience is that useful predictions of the large-scale weather patterns can be achieved for a week ahead in most cases, although prediction of local variability, particularly of precipitation, is less successful. This is even more surprising because of the lack of observations to define the initial conditions. Typically, there are approximately 10^{6} pieces of data to initialize models with 10^{7}–10^{8} degrees of freedom.

The first reason for this is the prevalence of large-scale coherent features in the atmosphere, which are only weakly influenced by small-scale dynamics. These are illustrated schematically in figure 2. The phenomena on the diagonal have a ratio of horizontal length scale to time scale typical of an observed wind speed; these are essentially nonlinear flows driven by advection. Other parts of the diagram show viscous, acoustic and gravity-wave processes. There are fairly large separations in scale between the advective processes on the diagonal and the other types of dynamics illustrated, which suggests that the averaging procedures used in numerical weather prediction can be successful.

Recent theoretical work has shown why reasonably accurate prediction of the advection-dominated flows on the diagonal of figure 2 may be possible without explicit solution of the full equations. Under conditions where either the Earth's rotation or the stratification of the flow has a strong controlling effect, the solution of the governing equations can be accurately represented by the transport of a single dynamical scalar (the potential vorticity) and the moisture. Much of this work is reviewed by Cullen (2006). It is exploited in operational weather and climate models by using a ratio of spatial to temporal averaging equivalent to a typical strong wind speed; experience suggests that a value of approximately 50 m s^{−1} gives results that are insensitive to further reductions in the time step. When simulating other types of atmospheric motion, such as gravity waves, this ratio has to be changed to reflect the wave speed. Given the ability to forecast the advection-dominated flows accurately, the limitations of the observed data can then be addressed by blending new observations with a previous forecast to define the initial conditions, which avoids the need to interpolate values between widely spaced observations. This process is called ‘data assimilation’.
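The quoted ratio of spatial to temporal averaging translates directly into an implied time step. The pairing of grid spacings with this ratio below is purely illustrative; the grid spacings are those quoted elsewhere in the paper.

```python
# Illustrative time steps implied by a spatial-to-temporal averaging
# ratio of ~50 m s^-1 (the value quoted in the text).
U_RATIO = 50.0                      # m s^-1

for dx_km in (25.0, 1.5):           # global grid; UK-area grid
    dt = dx_km * 1e3 / U_RATIO      # implied time step (s)
    print(f"dx = {dx_km:5.1f} km -> dt ~ {dt:5.0f} s")
```

A 25 km grid thus pairs with a time step of order 500 s, and a 1.5 km grid with one of order 30 s.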

The phenomena on the upper part of the diagonal in figure 2 are associated with the fronts, depressions and anticyclones, which control day-to-day variations in the weather. These are characterized by an aspect ratio *f*/*N*, where *f* is the Coriolis parameter measuring the effect of the Earth's rotation and *N* is the frequency of vertical buoyancy oscillations measuring the stratification. The typical ratio of horizontal to vertical averaging scales in operational weather prediction models is thus chosen to be comparable with *f*/*N*. Higher vertical resolution is used near the Earth's surface so that the effects of smaller scale turbulent transports can be represented. The phenomena on the lower part of the diagonal in figure 2, such as cumulus clouds, are characterized by an aspect ratio close to unity. This means that a significant change in modelling strategy will be required at these scales.
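As a rough sketch of the aspect-ratio argument, the values of *f* and *N* below are assumed typical mid-latitude magnitudes, not values taken from the paper.

```python
# Aspect ratio f/N with ASSUMED typical mid-latitude magnitudes:
# f ~ 1e-4 s^-1, N ~ 1e-2 s^-1. These standard values are illustrative.
f = 1.0e-4   # Coriolis parameter (s^-1)
N = 1.0e-2   # buoyancy (Brunt-Vaisala) frequency (s^-1)

aspect = f / N                       # ~1e-2
dz = aspect * 25e3                   # vertical spacing paired with a 25 km grid
print(f"f/N ~ {aspect:.0e}; implied vertical spacing for a 25 km grid ~ {dz:.0f} m")
```

With these magnitudes, a 25 km horizontal grid pairs with a vertical spacing of order a few hundred metres, which is representative of operational model configurations.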

The horizontal averaging scale is then chosen based on the computer power available and the constraints on when the results have to be available. At present, the European Centre for Medium-Range Weather Forecasts uses a grid resolution of 25 km, giving an implied averaging scale of approximately 50–100 km for global forecasts. Smaller averaging scales can be used for local forecasts; the Met Office intends to introduce a model with a 1.5 km grid (implying an averaging scale of approximately 5 km) over the UK in 2009. It is then necessary to design a subgrid model to make the solution computable and to represent the statistical effect of the unresolved scales. A more detailed review and bibliography of some of the main issues are given in Cullen (2007). In most large-scale models, the requirement to make the solutions computable is met by using semi-implicit methods of time integration, which filter the wave motions with speeds greater than a typical wind speed. The subgrid modelling for advection can then be achieved by using nonlinear transport schemes that enforce the property that the transported data are bounded by their initial values.
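The boundedness property that such transport schemes enforce can be demonstrated with a minimal monotone scheme. First-order upwind advection is used below purely as an illustration; it is not the scheme used in any operational model.

```python
# Minimal sketch of a bounded (monotone) transport scheme: first-order
# upwind advection on a periodic 1-D grid. The transported field never
# exceeds the min/max of its initial values, the property described in
# the text.
def upwind_step(q, c):
    """One step of upwind advection; Courant number c must lie in (0, 1]."""
    return [q[i] - c * (q[i] - q[i - 1]) for i in range(len(q))]  # periodic via q[-1]

q = [0.0] * 20
q[5:10] = [1.0, 2.0, 3.0, 2.0, 1.0]   # an initial "blob"
q0_min, q0_max = min(q), max(q)

for _ in range(100):
    q = upwind_step(q, 0.5)

assert q0_min <= min(q) and max(q) <= q0_max
print(f"field stays within [{q0_min}, {q0_max}] after 100 steps")
```

The scheme is diffusive, so the blob spreads, but no new extrema are created; the scheme also conserves the total of the transported quantity on the periodic domain.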

This approach is adequate for much of the atmosphere. However, where there is organized motion on scales smaller than the averaging scale, it is necessary to include specialized subgrid models. The main examples are near the Earth's surface, where there are strong turbulent transports that depend on the stratification and the amount of cloud; where there are convective clouds; and where there are organized regions of wave breaking and thus strong wave–mean flow interaction. The need for such models depends on the averaging scale. In §2, we show that the effect of wave breaking when the waves are forced by orography can be successfully partitioned between resolved and unresolved waves. In §3, we illustrate that the 1.5 km grid shortly to be introduced in the UK forecast model will allow larger convective clouds to be resolved, and thus some of the subgrid modelling of these clouds removed. In §4, we show how very-fine-scale models can start explicitly resolving turbulence near the Earth's surface.

## 2. Dependence of the subgrid model on resolution

We illustrate this using the way in which models represent the effect of orography on the large-scale flow. In reality, the tendency to produce high pressure on the upwind side of mountains and low pressure on the downwind side results in a net force on the mountains, and a corresponding decelerating or drag force on the atmospheric flow. This is a significant term in the atmospheric angular momentum budget, and its accurate representation is important in obtaining realistic forecasts of the evolution of the flow and weather systems. However, the ability of a weather prediction model to explicitly represent the effects is strongly limited by its resolution. For example, as shown in figure 3 (and discussed in more detail in Smith *et al*. 2006), at 60 km resolution only the basic outline of the Alps is captured in the model, the height is significantly underestimated (maximum of less than 2000 m) and there is little or no detail. With increasing resolution, more and more detail is explicitly represented, such that by 1 km, the model appears to have a reasonably faithful representation of the Alpine topography, including individual peaks and valleys.

Figure 4 (courtesy of Stuart Webster) shows the drag on a region encompassing the Alps from a series of forecasts for 8 November 1999 as a function of model resolution. Between 60 and 1 km resolution, the resolved orographic drag (that resulting from a correlation between pressure perturbation and orographic slope) in the model increased from approximately 3.3×10^{11} N to 7.8×10^{11} N. Note that, while there is still an issue as to whether a truly converged state has been reached (Smith *et al*. 2006), it is clear that without an additional representation of the effects of the missing orography, the coarser resolution simulations would underestimate the orographic drag on the atmosphere. Accordingly, a parametrization or subgrid model (Webster *et al*. 2003) is used to provide this drag. In common with other such orographic schemes, it takes measures of the amount of missing orography (essentially by differencing the real and resolved orography) and the atmospheric state (stability and wind speed), and predicts the drag due to low-level flow blocking and the production of orographic gravity waves. Figure 4 shows how this parametrized drag decreases as the model resolution improves (primarily because the amount of unresolved orography decreases). Effectively, this subgrid model is automatically switching itself off as the model resolution improves, and it is pleasing to note that, for this case at least, the total drag is almost independent of resolution. Further work is assessing the extent to which this is true across a wider range of cases.
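A hedged sketch of how a drag parametrization of this kind scales with the unresolved orography: the linear gravity-wave surface stress form τ ~ κρNUσ², where σ is the standard deviation of the unresolved orography, is the classic linear-theory formulation, not the specific Webster *et al*. (2003) scheme, and all numerical values below are illustrative assumptions.

```python
# Sketch of linear gravity-wave drag scaling: tau ~ kappa*rho*N*U*sigma^2,
# with sigma the standard deviation of the UNRESOLVED orography. The
# constant kappa and all values below are illustrative, not from the paper.
def gw_stress(rho, N, U, sigma, kappa=2.5e-5):
    """Parametrized gravity-wave stress per unit area (N m^-2)."""
    return kappa * rho * N * U * sigma**2

rho, N, U = 1.0, 1.0e-2, 10.0        # assumed density, stability, wind speed
for dx_km, sigma in [(60, 600.0), (12, 300.0), (1, 50.0)]:   # illustrative sigmas
    tau = gw_stress(rho, N, U, sigma)
    print(f"dx = {dx_km:3d} km  sigma = {sigma:5.0f} m  tau ~ {tau:.4f} N m^-2")
```

Because the stress depends quadratically on the unresolved orographic variance, refining the grid shrinks σ and the parametrized contribution falls away, which is the self-switching-off behaviour seen in figure 4.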

## 3. Elimination of specialized models as resolution increases

We illustrate this with the problem of predicting cumulus convection. On averaging scales greater than 100 km, it is possible to treat cumulus convection statistically, so that the effects on the resolved flow can be treated as an ensemble average (see Smith 1997). Very sophisticated one-dimensional subgrid models are used for this purpose. On smaller averaging scales, the statistical assumptions used to derive these schemes are no longer valid. There is thus a difficult range of averaging scales where a statistical approach is inappropriate, but an explicit simulation is still very inaccurate. The challenge is to achieve useful results.

We illustrate this with an example taken from Lean *et al*. (2008). It shows a case from a field experiment in which the special observations available allowed a good understanding of the processes involved. Radar data with a 5 km resolution are shown for verification. The area of very heavy rain shown near point B in figure 5 was generated by a squall line associated with a cold pool at the surface; this cold pool had been generated by previous convection. This chain of processes cannot be predicted without explicit modelling of at least the large-scale aspects of the convection. In the 12 km model results illustrated in figure 5, the area of rain near the squall line is no heavier than the other areas of rain predicted by the model. Since the subgrid model only uses the local stratification and moisture content, the interaction between convection and dynamics that occurred in reality cannot be represented. In the 4 km model, the subgrid modelling of deep convective transports is removed and replaced by explicit simulation. The results show a cluster of heavy convective cells near the squall line. In the 1 km model, an area of organized heavy rain is predicted at the squall line. This illustrates that a 4 km resolution is still inadequate for explicit treatment of deep convection, but that some of the effects can be modelled explicitly with a 1 km grid.

Behind (northwest of) the squall line, all the simulations show scattered showers. However, the size and intensity of the showers are essentially an artefact of the model resolution rather than converging to the true solution. Those produced by the 1 km model are too widespread and too small. The detailed discussion given by Lean *et al*. suggests that the unrealistic aspects of the 1 km simulation are linked to properties of the subgrid model used to represent the turbulence in the boundary layer.

## 4. Convergence as resolution increases

Still finer resolution models explicitly resolve boundary-layer turbulence. It is these models that are most usually referred to as large eddy simulation (LES) models (although, as noted above, all of the simulations discussed in this paper are a form of LES).

Figure 6 shows results from LESs of the convective boundary layer (reported in detail in Mason & Brown (1999)). All have a boundary layer depth, *z*_{i}, of approximately 1000 m, and the letters in the simulation names indicate the horizontal resolution: 125, 63, 31 and 18 m for A, C, E and F, respectively. The numbers in the simulation names indicate the basic mixing length used in the Smagorinsky subgrid model, and in all of A29, C14, E07 and F04 it is equal to 0.23 times the grid length. The vertical velocity spectra from these simulations are encouragingly similar to each other at low wavenumbers, but the point at which the spectral power starts to decrease more rapidly than the expected −2/3 slope in an inertial subrange (thin solid line) moves progressively to higher wavenumbers as the resolution improves. The best resolved simulation (F04) appears to follow the −2/3 slope for at least half a decade of wavenumbers.
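The naming convention described above can be verified directly from the quoted resolutions and the 0.23 factor.

```python
# The simulation names encode the Smagorinsky basic mixing length as
# 0.23 times the horizontal grid length; this reproduces the names
# quoted in the text (A29, C14, E07, F04).
CS_RATIO = 0.23
grids = {"A": 125, "C": 63, "E": 31, "F": 18}   # horizontal resolution (m)

for letter, dx in grids.items():
    l0 = round(CS_RATIO * dx)
    print(f"{letter}{l0:02d}: grid {dx} m -> mixing length ~{l0} m")
```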

The total vertical and horizontal variance profiles from these simulations show an encouraging degree of collapse, although, inevitably, the size of the subgrid contribution to the totals is a function of model resolution. At the highest resolution, the totals are dominated by the resolved contribution, except for very close to the surface, where the typical eddy size will become smaller and resolution will always be problematic.

As an aside, comparison of F04 with F14 shows the effects of changing the mixing length (from 4 to 14 m) at fixed resolution. The steep spectral fall-off moves to larger scales with the larger value of mixing length. Indeed, the spectra from F14 are similar to those from C14 (which shares the same mixing length, but has a coarser resolution). This appears to confirm that, in these simulations (which, unlike some large eddy techniques, use no explicit filter), it is the subgrid model that effectively acts as the filter controlling which scales are resolved and which are parametrized. If the mixing length is reduced to very small values relative to the grid, this may cease to be the case, with numerical dissipation close to the grid scale taking over. In the well-resolved boundary-layer interior, this may make little difference (as either the subgrid model or numerical dissipation will simply dissipate energy at the appropriate rate in the inertial range), but it can be more difficult to obtain satisfactory results close to boundaries, where the simulations will inevitably be less well resolved (Brown *et al*. 2000). Similar difficulties, which make the LES more sensitive to its subgrid model, may occur in and near regions of strong static stability, such as the capping inversion at the top of a layer of stratocumulus cloud (Stevens *et al*. 2005).
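The filtering role of the mixing length can be seen in the Smagorinsky eddy viscosity itself, ν_t = l₀²|S|. The sketch below uses a single-component shear purely for illustration; the strain value is an assumption, not taken from the text.

```python
# Minimal sketch of the Smagorinsky eddy viscosity referred to
# throughout: nu_t = l0^2 * |S|, with l0 the basic mixing length and
# |S| the strain-rate magnitude. A 1-D shear is used for illustration;
# the strain value is assumed, not from the paper.
def smagorinsky_nu(l0, strain):
    """Eddy viscosity (m^2 s^-1) for mixing length l0 (m) and strain (s^-1)."""
    return l0**2 * abs(strain)

# At fixed strain, increasing the mixing length from 4 m (F04) to 14 m
# (F14) increases the eddy viscosity by a factor of (14/4)^2, pushing
# the effective filter to larger scales.
for l0 in (4.0, 14.0):
    print(f"l0 = {l0:4.1f} m -> nu_t = {smagorinsky_nu(l0, 0.01):.4f} m^2 s^-1")
```

The quadratic dependence on l₀ is why the spectra of F14 resemble those of C14 rather than F04: the viscosity, not the grid, sets the smallest resolved scale.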

## 5. Summary

We have illustrated that atmospheric models on all scales can be regarded as a form of large eddy modelling. The strategy for designing the subgrid models changes as the averaging scale is reduced. At large scales, specialized models are used. These are designed using both statistics based on observations and, increasingly, results processed from models with much finer averaging scales covering very small regions (Randall *et al*. 2003). These finer resolution models are much closer to traditional large eddy models.

As more powerful computers become available, averaging scales can be reduced and some of the specialized models can be given up. However, there remains the difficult issue of how to deal with poorly resolved phenomena that cannot validly be treated statistically, as illustrated in §3. That example shows that useful predictive skill is still achievable, although there are inevitably significant limitations.

## Acknowledgements

The authors wish to thank Stuart Webster of the Met Office for providing figure 4 and Humphrey Lean (Met Office Joint Centre for Mesoscale Meteorology) for providing figure 5 in advance of publication. © Crown Copyright 2009, the Met Office, UK.

## Footnotes

One contribution of 16 to a Discussion Meeting Issue ‘Applied large eddy simulation’.

- © 2009 The Royal Society