We propose a new modelling framework suitable for the description of atmospheric convective systems as a collection of distinct plumes. The literature contains many examples of models for collections of plumes in which strong simplifying assumptions are made, a diagnostic dependence of convection on the large-scale environment and the limit of many plumes often being imposed from the outset. Some recent studies have sought to remove one or the other of those assumptions. The proposed framework removes both, and is explicitly time dependent and stochastic in its basic character. The statistical dynamics of the plume collection are defined through simple probabilistic rules applied at the level of individual plumes, and van Kampen's system size expansion is then used to construct the macroscopic limit of the microscopic model. Through suitable choices of the microscopic rules, the model is shown to encompass previous studies in the appropriate limits, and to allow their natural extensions beyond those limits.
Any general-circulation model (GCM) of the Earth's atmosphere will contain a number of parametrizations of important processes that cannot be represented explicitly with the given discretization. These will include unresolved dynamics (such as boundary-layer turbulence) and also physical processes (such as clouds and radiation). Stensrud provides an overview of the methods currently in use in GCMs and weather forecast models. The problem of parametrization may ultimately be considered as that of developing a statistical description for the subgrid-scale processes, expressed in terms of their dependence on the known resolved-scale state. Conceptually at least, the procedure is analogous to that used to derive macroscopic thermodynamics from a microscopic description by means of statistical mechanics. For this reason, statistical mechanics, and statistical physics more generally, have much to offer for developing subgrid-scale parametrizations as robustly as possible.
The purpose of the present paper is to contribute to the development of such a statistical approach for the parametrization of deep, precipitating convection. Deep convection is a crucial aspect of the tropical climate, and deficiencies in its parametrization are implicated in some notoriously stubborn issues in climate modelling; some examples are equatorial waves, the Madden–Julian oscillation, the spatial distribution of tropical rainfall and the diurnal cycle. Almost all current parametrizations for GCMs are based on an idealization of atmospheric convective systems as a collection of convective ‘plumes’. Each plume is embedded within and interacts with a horizontally homogeneous medium called the environment. Thus, the problem becomes how to construct a statistical cumulus dynamics (SCD) for the collection of plumes.
Some rather brutal simplifying assumptions are made in translating even this idealized picture into practical parametrizations [1,8,9]. We do not seek to review all the issues here, but rather focus upon two of the usual simplifications: first, the convection is assumed to be in equilibrium with a slowly varying large-scale forcing; and, second, there are assumed to be very many plumes within a grid box of the parent GCM, such that an ensemble average is sufficient to represent the convective state. Some non-equilibrium models that may describe aspects of the time evolution of convection have been proposed in the literature [10–12]. There have also been explorations of stochastic effects related to uncertainties in the triggering process [13,14], intrinsic fluctuations at equilibrium arising from finite cloud number [15–17] and transitions in cloud morphology. Other studies of stochastic effects have been more heuristic, with the authors attempting to account for generic parametrization uncertainty [19–21].
Here, we propose a simple modelling framework suitable for the study of the statistical dynamics of cumulus clouds, which does not assume equilibrium and treats stochastic effects due to finite cloud number. The framework has been successfully used for chemical and biological applications [22–24] but, to the best of our knowledge, has not previously been exploited in atmospheric science. We will show that, in the appropriate limits, it agrees with previous studies of both stochastic and time-varying aspects of convection. Moreover, the model could easily be extended to permit future investigations of SCD with other assumptions removed; for example, a spatially explicit form could be used to study structures arising from interactions of clouds with their local environment.
The article is organized as follows. In §2, we discuss the idealization of convective systems used as the basis of many current parametrizations, and some recent attempts to incorporate stochastic and time-varying aspects. The proposed modelling framework will be introduced in §3, and its relationship to the methods of §2 will be analysed in some detail. Some examples of numerical results from the introduced models are presented in §4, and conclusions can be found in §5.
2. Idealized picture for convection
(a) The collection of plumes
Following the usual idealization of convection in parametrization schemes [7–9,11,17,26–28], we consider a system of distinct cumulus clouds. Each cloud is described as a ‘plume’ that is characterized by its mass flux $M_i(z)$, defined by
$$M_i(z) = \rho\,\sigma_i w_i, \qquad (2.1)$$
where the subscript labels the plume, $\rho$ is the density, $\sigma_i$ the fractional area occupied and $w_i$ the in-cloud vertical velocity. The mass flux is an important variable because it is assumed to dominate the sub-grid scale fluxes. Denoting by $\chi$ some intensive variable of interest, its sub-grid flux due to convection is approximated by
$$\overline{\rho\, w'\chi'} \simeq \sum_i M_i\,\left(\chi_i - \chi_{\mathrm{env}}\right), \qquad (2.2)$$
the sum extending over all plumes present, with the subscript ‘env’ denoting the environmental value and the overbar and prime denoting, respectively, a horizontal average and a departure from that average. The approximation does require the fractional area occupied by cumulus clouds to be small, but typically this is not much larger than a few per cent. However, it should be noted that in some recent numerical weather prediction models, in which the grid size approaches that of a convective element, such an approximation becomes problematic.
In order to compute $M_i(z)$ and $\chi_i(z)$, a description of the vertical structure of each plume is required, and various models have appeared in the literature [7,28,31]. As the plume ascends, the in-plume and environmental air may interact, with mixing of some environmental air into the plume and of some in-plume air into the environment. There are long-standing debates about these interactions, which we do not revisit. Instead, we will simply assume that some suitable plume model is available. Thus, our concern will not be with the vertical structure of the plumes, but rather with the magnitude of the convection, i.e. how many plumes are to be found within a given area for a given large-scale meteorological forcing?
Atmospheric convection is assumed to be forced by large-scale destabilization processes, such as radiative or advective cooling or low-level moistening. When convection occurs, it tends to restore stability, so that given enough time and a steady enough forcing, a state of equilibrium may be achieved in which the forcing and convective tendencies are in balance. Convective quasi-equilibrium is the notion that the atmosphere is maintained close to such an equilibrium state; for systems in quasi-equilibrium, the equilibrium level of convective activity can be imposed in order to set a magnitude for the convection and so close a parametrization.
(b) Concerning bulk models
In actual parametrizations, a common further simplification is the reduction of the system from a collection of individual plumes to a single bulk plume [26–28]. Real cumulus clouds have a wide variety of properties, and may, for instance, extend to different heights in the atmosphere. However, the equation sets typically used to describe a single plume are almost linear. A sum over plumes therefore recovers essentially the same form of equations as for the single plume, with the in-plume values $\chi_i$ of intensive variables replaced by their mass-flux weighted, or ‘bulk’ values. This simplification does have some penalties, most notably the fact that very simple treatments of cloud microphysics and radiative effects are required for consistency.
In the model proposed here, a single type of convective plume is considered, and accordingly we drop all plume subscripts in the following. However, it is important to recognize that this does not imply a bulk assumption. Rather it is done for simplicity and economy of presentation, in order that the main ideas should not be obscured. The extension of the model to multiple plume types is entirely straightforward.
(c) Concerning finite cloud number
Assuming for the moment that an equilibrium state has been reached, we consider now the possibility of fluctuations about that state. Parametrizations usually neglect any such fluctuations, thereby implicitly assuming the number of clouds in a grid box to be large. However, a simple scaling shows that fluctuations due to finite cloud number are likely to be far from negligible. Cloud-resolving model (CRM) studies give values for the mass flux at cloud base of approximately $10^7\,\mathrm{kg\,s^{-1}}$ [16,34] for a single moist convective plume. The mass flux per unit area that is required to balance typical rates of forcing in the tropics can be estimated to be approximately $10^{-2}\,\mathrm{kg\,m^{-2}\,s^{-1}}$ [12,35], so that for a GCM grid box of size (50–100 km)² only a few clouds can be expected to be present. Clearly, the number density could be somewhat larger in places where clouds cluster together. Nonetheless, the number density has been found to be low in coarse-graining calculations with CRM simulations that do include convective organization [36,37], and can also be seen to be low by inspection of satellite imagery. Convective instability is released in discrete events, the number of which is not sufficient on the scale of a GCM grid box to produce a steady response to a steady forcing.
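The scaling argument above amounts to a one-line division. The following sketch checks the arithmetic with the order-of-magnitude values quoted in the text (these are rough scales, not precise data):

```python
# Order-of-magnitude estimate of the number of clouds in a GCM grid box,
# using the scales quoted in the text.
M_CLOUD = 1e7        # kg s^-1: cloud-base mass flux of one plume (CRM estimate)
FLUX_DENSITY = 1e-2  # kg m^-2 s^-1: mass flux needed to balance tropical forcing

def expected_cloud_number(box_length_km):
    """Clouds needed to supply the equilibrium mass flux over a square grid box."""
    area_m2 = (box_length_km * 1e3) ** 2
    return FLUX_DENSITY * area_m2 / M_CLOUD

# For 50-100 km grid boxes, only a few clouds are expected:
print(expected_cloud_number(50))   # 2.5
print(expected_cloud_number(100))  # 10.0
```

A box needs to be roughly 32 km on a side before it can support even a single cloud at these rates, which is why the steady-response assumption fails at typical GCM resolutions.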
Fluctuations about a state of equilibrium can be described if it is known how the mass flux is partitioned among individual clouds and how the clouds are spatially distributed. Craig & Cohen argued that the partitioning can be determined from the relevant equilibrium statistical mechanics, by considering a variable number of non-interacting convective clouds subject to externally imposed constraints. Their predicted probability distribution function (PDF) for the total mass flux within a finite area was a convolution of a Boltzmann distribution for the mass flux per cloud and a Poisson distribution for the number of clouds present. These predictions have proved to be remarkably robust in CRM data [16,34,37,38]. The theory has been translated into a parametrization scheme, with the fluctuations about equilibrium having practical implications for the behaviour of the GCM.
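The compound character of this prediction is easy to reproduce by direct sampling: draw a Poisson cloud number, give each cloud an exponentially distributed (Boltzmann-like) mass flux, and sum. The sketch below uses illustrative mean values (not taken from the cited studies) and checks the sampled mean and variance against the compound-Poisson formulae:

```python
import math
import random

random.seed(42)

def sample_total_mass_flux(mean_n, mean_m):
    """Total mass flux in a region: Poisson number of clouds, each with an
    exponentially distributed (Boltzmann-like) mass flux."""
    # Knuth's method for a Poisson sample
    limit, n, p = math.exp(-mean_n), 0, random.random()
    while p > limit:
        n += 1
        p *= random.random()
    return sum(random.expovariate(1.0 / mean_m) for _ in range(n))

mean_n, mean_m = 5.0, 1e7           # illustrative: 5 clouds of ~1e7 kg/s each
samples = [sample_total_mass_flux(mean_n, mean_m) for _ in range(20000)]
mean = sum(samples) / len(samples)
var = sum((x - mean) ** 2 for x in samples) / len(samples)
# Compound-Poisson theory: <M> = <N><m> and var(M) = 2 <N> <m>^2
print(mean / (mean_n * mean_m))          # ratio close to 1
print(var / (2 * mean_n * mean_m ** 2))  # ratio close to 1
```

Note that the relative standard deviation of the total, $\sqrt{2/\langle N\rangle}$, is large whenever only a few clouds are present, consistent with the finite-number argument of the previous paragraph.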
Since this article considers a single plume type only, each cloud is assumed to have the same mass flux, and the Craig & Cohen theory reduces to the prediction of Poisson fluctuations at equilibrium.
(d) Concerning time dependence and equilibrium
For relatively rapidly varying forcings, the equilibrium assumption may break down, so that it becomes necessary to consider the time dependence of convective mass flux [10,34]. However, even for steady forcing, the evolution of a convective ensemble is a question of real interest. It is certainly not obvious a priori that a unique equilibrium state must be reached; the stationary state may be unstable, or multiple equilibria may be possible.
The basic equations that have been used to consider the evolution of a convective ensemble were originally introduced by Arakawa & Schubert, and describe the energy cycle of the ensemble. Denoting by $A$ the vertical integral of in-plume buoyancy and by $K$ the vertically integrated convective kinetic energy, these equations are
$$\frac{dA}{dt} = F - \gamma M_B \qquad (2.3)$$
and
$$\frac{dK}{dt} = A M_B - \frac{K}{\tau_D}. \qquad (2.4)$$
$A$ is a measure of potential energy known as the cloud work function. Equation (2.3) shows it to be generated through the action of large-scale forcing $F$ and removed through the presence of convection. The quantity $\gamma$ gives the removal rate per unit of cloud-base mass flux, $M_B \equiv M(z=z_{\mathrm{base}})$. Both $F$ and $\gamma$ are calculable for any given plume model, and from such calculations, it is known that the removal of instability is usually dominated by warming as a result of compensating environmental subsidence [7,33]. The cloud work function must be positive in order for convective kinetic energy to be generated, as shown by the first term on the right-hand side of equation (2.4). Kinetic energy is assumed to be removed through a dissipation term, for which the value of the dissipation time scale $\tau_D$ is somewhat controversial. $\tau_D$ has been variously estimated to be in the range $10^3$–$10^6\,\mathrm{s}$.
Equations (2.3) and (2.4) could be integrated if there were some functional relationship between $K$ and $M_B$. One relationship that has been postulated [10,35] is that
$$K = \alpha M_B^2, \qquad (2.5)$$
with $\alpha$ treated as a constant. This would not appear implausible at first sight, given that $K \sim \rho\sigma w^2$ and $M_B \sim \rho\sigma w$. For a single plume type with steady forcing, the postulate gives rise to a damped harmonic oscillator that approaches equilibrium after a few $\tau_D$. Recently, Yano & Plant have argued that equation (2.5) is inconsistent with CRM results and theoretical scalings for the dependencies of the equilibrium state [40,41], which better support their postulate of
$$K = \beta M_B, \qquad (2.6)$$
with $\beta$ treated as a constant. This is effectively an assumption that the response of deep convection to variations in the large-scale forcing occurs mainly through variations in fractional area rather than through typical in-cloud velocities. For a single plume type with steady forcing, the postulate gives rise to a periodic orbit, with cycles of convective recharge and discharge. Note, however, that the orbit is structurally unstable, such that small changes to the model typically produce a slow spiral in $A$–$M_B$ phase space towards the equilibrium state.
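The contrasting behaviours of the two closures are easy to see by integrating the energy cycle numerically. The sketch below substitutes each postulate into equation (2.4) to obtain an ODE for $M_B$, using illustrative SI parameter values chosen within the ranges quoted later in the text; the specific initial conditions are arbitrary:

```python
# Integrate dA/dt = F - gamma*M_B together with dK/dt = A*M_B - K/tau_D,
# under the closures K = alpha*M_B**2 (damped) and K = beta*M_B (periodic).
F, GAMMA, TAU_D = 1e-2, 1.0, 1e3
ALPHA, BETA = 5e6, 5e4

def rhs_quadratic(A, M):
    # K = alpha*M^2  =>  2*alpha*M*dM/dt = A*M - alpha*M^2/tau_D
    return F - GAMMA * M, (A - ALPHA * M / TAU_D) / (2 * ALPHA)

def rhs_linear(A, M):
    # K = beta*M  =>  beta*dM/dt = A*M - beta*M/tau_D
    return F - GAMMA * M, M * (A - BETA / TAU_D) / BETA

def integrate(rhs, A, M, dt=5.0, nsteps=10000):
    """Plain RK4 integration; returns the M_B time series."""
    out = []
    for _ in range(nsteps):
        k1 = rhs(A, M)
        k2 = rhs(A + 0.5 * dt * k1[0], M + 0.5 * dt * k1[1])
        k3 = rhs(A + 0.5 * dt * k2[0], M + 0.5 * dt * k2[1])
        k4 = rhs(A + dt * k3[0], M + dt * k3[1])
        A += dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6
        M += dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6
        out.append(M)
    return out

damped = integrate(rhs_quadratic, 50.0, 0.005)   # spirals in to M_B = F/gamma
periodic = integrate(rhs_linear, 50.0, 0.005)    # recharge-discharge cycles persist
```

With these parameters both closures share the equilibrium $M_B = F/\gamma = 10^{-2}\,\mathrm{kg\,m^{-2}\,s^{-1}}$ and $A = 50\,\mathrm{m^2\,s^{-2}}$, but only the quadratic closure actually converges to it.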
A third prognostic system based on equations (2.3) and (2.4) has also been proposed, and is analogous to the Lotka–Volterra equations of population dynamics, with the clouds competing to consume convective instability,
$$\frac{dM_B}{dt} = \frac{M_B}{A}\,\left(F - \gamma M_B\right). \qquad (2.7)$$
The form of this system is actually rather insensitive to the postulated relationship between $K$ and $M_B$, but it does require an additional assumption that equation (2.4) approaches equilibrium much more rapidly than does equation (2.3). Further discussion of these points is given by Plant & Yano.
The population dynamics system provides a good illustration of how the consideration of prognostic systems may prove instructive for GCM parametrizations, even those for which convective quasi-equilibrium is imposed. Shallow, non-precipitating cumulus clouds are typically treated somewhat differently from deep, precipitating clouds in a GCM, possibly through parameter differences within a parametrization, or possibly through the use of different parametrization schemes entirely. Whether a given GCM considers convection to be shallow or deep rests on physically motivated but undeniably somewhat ad hoc criteria. However, should convective ensembles prove to be well described by Lotka–Volterra equations, it then follows that, for two convective types, a globally stable equilibrium state with coexisting shallow and deep clouds exists if and only if the known coefficients A, F and γ satisfy certain inequalities. Otherwise, one of the convective types must be driven to extinction. Thus, in an equilibrium-based parametrization, clear criteria would dictate which scheme(s) to apply.
3. Individual-level model
The stochastic model discussed in §2c assumed convective equilibrium, while the three prognostic systems discussed in §2d all assumed an infinite number of clouds to be present (this is necessary in order for MB to be continuous and dMB/dt to be well defined). We now propose a modelling framework for SCD, which is both stochastic and prognostic. The basic system is formulated in terms of an extremely simple set of probabilistic rules at the level of individual clouds, which are born, interact with their environment through changes to its cloud work function and die. According to our choice of these rules, we can produce models that are the microscopic analogues to any of the prognostic systems above.
Stochastic birth–death processes have previously been used to describe deep convection [13,14,18], with some encouraging results when coupled to idealized models of large-scale tropical dynamics [44,45]. Here, the aim and the context are somewhat different. We will assume the large-scale tendencies to be externally prescribed, in the tradition of idealized CRM and single-column model experiments that have long been used to develop parametrizations for operational weather forecast and climate models. We then seek to make direct links between the microscopic model and the above prognostic systems in a suitable limit, establishing in particular which microscopic processes are required, are admissible or are forbidden in order to make contact with each of the prognostic systems. As discussed previously, stochastic and prognostic aspects of convective parametrization are attracting increasing attention and may be starting to produce promising results [10,11,19,20]. The objective here is to show how those aspects might be combined in a natural way that is consistent with existing studies.
Specifically, we will use van Kampen's expansion to recover the ordinary differential equations of §2d as the macroscopic, large system-size limits of individual-level models. The leading correction for a non-infinite system is a Fokker–Planck equation describing the fluctuations in N and A. A detailed analysis of those fluctuations is beyond the scope of the present article, but clearly offers promise for extending the validity of stochastic convective parametrizations and possibly also for developing theoretical interpretations of observational data [47,48].
(a) Definition of modelling framework
The microscopic model is described through the PDF P(N,A,τ) for the number of clouds N and the cloud work function A at time τ. The domain of interest contains Ω elements, each element being defined as the minimum area necessary to support a single cumulus cloud. It is not necessary to specify a numerical value for the area, but we could consider approximately 1–5 km² to be reasonable. The area elements are not labelled, and all elements are considered to have an equal chance of interacting with each other. In other words, no account is taken of whether interacting elements are nearest neighbours or well separated. The model could be generalized to include spatial dependence, with the van Kampen expansion used to derive corresponding macroscopic partial differential equations. However, because such spatial aspects are not treated in the comparison studies of §2c,d, they will similarly be regarded as out of scope here.
The model evolves according to state transition probabilities that represent births and deaths, and environmental destabilization and stabilization. This evolution is governed by the following master equation:
$$\frac{\partial P(N,A,\tau)}{\partial\tau} = \sum_{N',A'}\left[T(N,A\,|\,N',A')\,P(N',A',\tau) - T(N',A'\,|\,N,A)\,P(N,A,\tau)\right]. \qquad (3.1)$$
The transition matrix elements $T(f\,|\,i)$ denote the probability per unit time of making a transition from an initial state $i$ to a final state $f$. The master equation is therefore simply a statement of balance for the probability of state $(N,A)$; the first term on the right-hand side of equation (3.1) represents a gain in probability due to transitions to the state of interest, while the second term represents a loss of probability due to transitions from the state of interest.
We now define the processes that can lead to transitions in the state of the system. In considering a possible transition, we may choose to look at either one or two elements, with probabilities 1−μ and μ, respectively. Let us suppose that we look at one element. The total number of elements is designated by Ω, of which N elements are occupied by clouds and E=Ω−N elements are empty. Simple combinatorics dictates that the chance of the single element being currently occupied is N/Ω, while the chance that it is currently empty is E/Ω.
Let us now suppose that the single element chosen is currently empty. The element may become occupied through the formation of a cloud and we denote the probability of this happening as a. We might anticipate a dependence of a on the cloud work function A, with cloud formation being more likely for larger A, but let us reserve judgement for the moment on any such dependence. On the other hand, if no cloud is formed, then the maintenance of an empty element will contribute to destabilization of the atmosphere by large-scale forcing, which may be represented through an increment of s in the cloud work function. From these considerations, and combining the relevant probabilistic factors, we can now write down elements of the transition matrix as follows:
$$T(N+1,A\,|\,N,A) = (1-\mu)\,a\,\frac{E}{\Omega} + \cdots \qquad (3.2)$$
and
$$T(N,A+s\,|\,N,A) = (1-\mu)\,(1-a)\,\frac{E}{\Omega} + \cdots, \qquad (3.3)$$
where the dots indicate that there are additional contributions.
In table 1, we specify all processes that will be considered as physically plausible in the individual-level model. The processes of cloud formation and large-scale environmental forcing that were just discussed are listed in the first two lines. We refer to this particular process of cloud formation as spontaneous birth in order to distinguish it from other possible formation processes. All of the mathematical expressions appearing in the table are simply composed of the products of appropriate probabilistic factors, constructed analogously to those appearing in equations (3.2) and (3.3).
Suppose that a single element is selected and is found to be occupied. The cloud might die (with probability d), but otherwise its continued existence will stabilize the atmosphere through a reduction of r in the cloud work function. Supposing that two elements are sampled, then they could be both unoccupied, both occupied, or else one is occupied and the other is not. If two unoccupied elements are chosen, then we allow for the possible birth of a cloud (with probability e); if two occupied elements are chosen, then we allow for the possible death of a cloud through competitive exclusion (with probability c); while if one occupied and one unoccupied element is chosen, then we allow that the pre-existing cloud may induce the birth of a new cloud (with probability b). The last of these processes could correspond physically to triggering at the edge of a cold pool produced by pre-existing convection. Should two elements be sampled but the number of clouds not change through one of the above processes, then an appropriate change is made to the cloud work function. Analogously to the changes in cloud work function for the case of a single sampled site, the maintenance of each unoccupied element destabilizes the atmosphere through an increment s, whereas the maintenance of each occupied element stabilizes the atmosphere through a reduction r.
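The sampling rules just described translate directly into an update algorithm. The sketch below is one possible transcription: the branch structure follows the text, while the parameter values are arbitrary placeholders and the work-function increments for the two-element draws are taken from the "maintenance" rules stated above:

```python
import random

random.seed(7)

def step(N, A, Omega, p):
    """One microscopic update; p holds the probabilities and A increments."""
    E = Omega - N
    if random.random() >= p['mu']:
        # a single element is sampled
        if random.random() < N / Omega:              # element is occupied
            if random.random() < p['d']:
                N -= 1                               # cloud death
            else:
                A -= p['r']                          # stabilization by the cloud
        else:                                        # element is empty
            if random.random() < p['a']:
                N += 1                               # spontaneous birth
            else:
                A += p['s']                          # large-scale destabilization
    else:
        # a pair of elements is sampled
        u = random.random()
        if u < (E / Omega) ** 2:                     # both empty
            if random.random() < p['e']:
                N += 1                               # birth from two empty elements
            else:
                A += 2 * p['s']                      # strong destabilization
        elif u < (E / Omega) ** 2 + (N / Omega) ** 2:  # both occupied
            if random.random() < p['c']:
                N -= 1                               # competitive exclusion
            else:
                A -= 2 * p['r']                      # strong stabilization
        else:                                        # one occupied, one empty
            if random.random() < p['b']:
                N += 1                               # induced birth (cold-pool triggering)
            else:
                A += p['s'] - p['r']                 # modification
    return N, A

params = {'mu': 0.1, 'a': 0.05, 'b': 0.1, 'c': 0.2, 'd': 0.1, 'e': 0.01,
          's': 0.05, 'r': 0.1}
N, A, Omega = 10, 50.0, 1000
traj = []
for _ in range(20000):
    N, A = step(N, A, Omega, params)
    traj.append(N)
```

Note that the boundary conditions of the next paragraph hold automatically here: the probability of sampling an empty element vanishes at N = Ω, and of sampling an occupied one at N = 0, so the trajectory can never leave 0 ≤ N ≤ Ω.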
Natural boundary conditions to impose upon the transition matrix elements are the requirements that T(−1,A|0,A′)=0 and T(Ω+1,A|Ω,A′)=0, so that by starting the system in a physical configuration with 0≤N≤Ω, it will not be able to reach an unphysical configuration. Those conditions are satisfied by the expressions presented in table 1.
(b) System size expansion
In order to make contact between the individual-level, probabilistic model and the deterministic ordinary differential equations of §2d, we perform the system-size expansion of van Kampen. Detailed demonstrations of the method are available elsewhere [22–24,46], but as it may be unfamiliar to atmospheric scientists, an outline will be presented here. For illustrative purposes, we will focus our attention on the spontaneous birth and environmental destabilization processes of equations (3.2) and (3.3), but the manipulations for the other processes in table 1 follow along very similar lines.
For a large enough, horizontally homogeneous domain, we would expect the cloud work function to be almost independent of the system size $\Omega$, albeit with some small fluctuations. The central limit theorem suggests that such fluctuations would be of order $\Omega^{-1/2}$. Our simulations of the individual-based models presented here also support such a scaling. The essence of the system-size expansion is to assume this scaling and so decompose the cloud work function into a macroscopic, size-independent, deterministic part $\varphi$ and a fluctuating, stochastic part $\lambda$. Thus,
$$A = \varphi + \Omega^{-1/2}\lambda. \qquad (3.4)$$
Similar considerations apply to the number of clouds present, although we would expect this to scale with the system size in the macroscopic limit. Thus, the decomposition takes the form
$$N = \Omega\sigma + \Omega^{1/2}\eta. \qquad (3.5)$$
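The assumed scaling can be illustrated with a toy calculation: for $\Omega$ independent elements, each occupied with a fixed probability, the fluctuations of an intensive quantity such as the occupied fraction shrink as $\Omega^{-1/2}$. The sketch uses a plain binomial stand-in rather than the full model:

```python
import math
import random

random.seed(3)

def occupancy_std(Omega, p=0.1, trials=500):
    """Standard deviation of the occupied fraction N/Omega across realizations."""
    fracs = [sum(random.random() < p for _ in range(Omega)) / Omega
             for _ in range(trials)]
    mean = sum(fracs) / trials
    return math.sqrt(sum((f - mean) ** 2 for f in fracs) / trials)

# Increasing Omega by a factor of 100 should shrink fluctuations roughly 10-fold
ratio = occupancy_std(100) / occupancy_std(10000)
print(ratio)
```

The same $\Omega^{-1/2}$ behaviour is what equations (3.4) and (3.5) build into the decomposition of $A$ and $N$.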
To apply these decompositions to the master equation, we introduce in place of $P(N,A,\tau)$ a function $\Pi(\eta,\lambda,\tau)$, which will describe the probabilities for the fluctuating variables. Considering the left-hand side of equation (3.1), the chain rule immediately gives
$$\frac{\partial P}{\partial\tau} = \frac{\partial\Pi}{\partial\tau} + \frac{\partial\eta}{\partial\tau}\frac{\partial\Pi}{\partial\eta} + \frac{\partial\lambda}{\partial\tau}\frac{\partial\Pi}{\partial\lambda}, \qquad (3.6)$$
and since the time derivatives of the fluctuating variables are to be taken with $N$ and $A$ held constant, equations (3.4) and (3.5) can be used to relate them to the time derivatives of the macroscopic variables. The result is that
$$\frac{\partial P}{\partial\tau} = \frac{\partial\Pi}{\partial\tau} - \Omega^{1/2}\frac{d\sigma}{d\tau}\frac{\partial\Pi}{\partial\eta} - \Omega^{1/2}\frac{d\varphi}{d\tau}\frac{\partial\Pi}{\partial\lambda}. \qquad (3.7)$$
The state transitions in the master equation can be expressed in terms of ladder operators for changes in cloud number and cloud work function. For some arbitrary function $f(N,A)$, these are defined by
$$\mathcal{E}^{\pm1} f(N,A) = f(N\pm1,A) \qquad (3.8)$$
and
$$\mathcal{E}_A^{\pm\Delta} f(N,A) = f(N,A\pm\Delta), \qquad (3.9)$$
where $\Delta$ denotes the relevant increment of the cloud work function. These operators can be expanded in powers of $\Omega$, reflecting the fact that a single transition in a large system will induce only a small change in the fluctuating variables. Specifically,
$$\mathcal{E}^{\pm1} = 1 \pm \Omega^{-1/2}\frac{\partial}{\partial\eta} + \frac{1}{2}\Omega^{-1}\frac{\partial^2}{\partial\eta^2} + \cdots \qquad (3.10)$$
and
$$\mathcal{E}_A^{\pm\Delta} = 1 \pm \Omega^{1/2}\Delta\frac{\partial}{\partial\lambda} + \frac{1}{2}\Omega\,\Delta^2\frac{\partial^2}{\partial\lambda^2} + \cdots \qquad (3.11)$$
Table 1 describes processes associated with just one or two elements of the microscopic model. The macroscopic model is assumed to be much larger and the intensive variables $\varphi$ and $\sigma$ describing it will evolve more slowly, in response to changes that have affected the full set of elements. It is therefore convenient to introduce a macroscopic time $t$ through a rescaling of the microscopic time, setting
$$t = \frac{\tau}{\Omega}. \qquad (3.12)$$
Similarly, the quantities describing a change to the cloud work function from one or two elements of the microscopic model are also rescaled,
$$\hat{s} = \Omega s \quad\text{and}\quad \hat{r} = \Omega r, \qquad (3.13)$$
so that $\hat{s}$ and $\hat{r}$ are quantities of order $\Omega^0$.
Let us now substitute equations (3.4)–(3.7), (3.12) and (3.13) into the master equation, and also make use of the ladder operator expansions of equations (3.10) and (3.11). This leads to the following contributions for the example processes:
$$\Omega^{-1}\frac{\partial\Pi}{\partial t} - \Omega^{-1/2}\frac{d\sigma}{dt}\frac{\partial\Pi}{\partial\eta} - \Omega^{-1/2}\frac{d\varphi}{dt}\frac{\partial\Pi}{\partial\lambda} = \left(-\Omega^{-1/2}\frac{\partial}{\partial\eta} + \frac{1}{2}\Omega^{-1}\frac{\partial^2}{\partial\eta^2}\right)\left[(1-\mu)\,a\left(1-\sigma-\Omega^{-1/2}\eta\right)\Pi\right] + \left(-\Omega^{-1/2}\hat{s}\frac{\partial}{\partial\lambda} + \frac{1}{2}\Omega^{-1}\hat{s}^2\frac{\partial^2}{\partial\lambda^2}\right)\left[(1-\mu)(1-a)\left(1-\sigma-\Omega^{-1/2}\eta\right)\Pi\right] + \cdots \qquad (3.14)$$
Collecting together the terms at the leading order in $\Omega$ gives
$$\frac{d\sigma}{dt}\frac{\partial\Pi}{\partial\eta} + \frac{d\varphi}{dt}\frac{\partial\Pi}{\partial\lambda} = (1-\mu)(1-\sigma)\frac{\partial}{\partial\eta}\left[a\,\Pi\right] + (1-\mu)\,\hat{s}\,(1-\sigma)\frac{\partial}{\partial\lambda}\left[(1-a)\,\Pi\right]. \qquad (3.15)$$
Now consider the action of the derivative operators on the right-hand side of equation (3.15). Recall from §3a that we suggested it may be appropriate for the transition probability $a$ to have some dependence on the cloud work function $A$. From the chain rule, $\partial a/\partial\lambda = (\partial A/\partial\lambda)(da/dA) = \Omega^{-1/2}\,da/dA$, the decomposition of $A$ from equation (3.4) having been used in the final equality. Thus, any dependence of $a$ on $A$ is irrelevant at the leading order level of equation (3.15), and the derivatives on the right-hand side of that equation may be considered to act on $\Pi$ only.
In order to satisfy the leading order equation, it is sufficient that the macroscopic functions $\sigma$ and $\varphi$ should obey ordinary differential equations which can be obtained by equating the respective coefficients of $\partial\Pi/\partial\eta$ and $\partial\Pi/\partial\lambda$ in equation (3.15). These are
$$\frac{d\sigma}{dt} = (1-\mu)\left[a(1-\sigma) - d\sigma\right] + \mu\left[e(1-\sigma)^2 + 2b\sigma(1-\sigma) - c\sigma^2\right] \qquad (3.16)$$
and
$$\frac{d\varphi}{dt} = (1-\mu)\left[\hat{s}(1-a)(1-\sigma) - \hat{r}(1-d)\sigma\right] + \mu\left[2\hat{s}(1-e)(1-\sigma)^2 + 2(\hat{s}-\hat{r})(1-b)\sigma(1-\sigma) - 2\hat{r}(1-c)\sigma^2\right], \qquad (3.17)$$
where $a$ and $b$ could be considered as functions of $\sigma$ and $\varphi$. Notice that we have here stated explicitly the contributions from all of the processes listed in table 1.
The terms at next-to-leading order in equation (3.14) are of $\mathcal{O}(\Omega^{-1})$. Inspection of equation (3.14) shows that they will take the form of a Fokker–Planck equation for $\Pi$, which is easily derived but not stated here.
(c) Relation to time-dependent convection models
We now discuss the connections from the macroscopic equations derived in the previous subsection with models that have been proposed for the prognostic description of atmospheric convection, as presented in §2d.
In order to establish such a connection, it is important to recall from §2a that the mass-flux approximation for the description of convective plumes approximates the fractional area occupied by clouds as being small. To make an appropriate comparison to the individual-level model, the equivalent approximation should also be made there. This corresponds to setting $E \approx \Omega$ in all of the transition matrix elements and results in macroscopic equations that are then reduced to the following:
$$\frac{d\sigma}{dt} = (1-\mu)\left[a - d\sigma\right] + \mu\left[e + 2b\sigma - c\sigma^2\right] \qquad (3.18)$$
and
$$\frac{d\varphi}{dt} = (1-\mu)\left[\hat{s}(1-a) - \hat{r}(1-d)\sigma\right] + \mu\left[2\hat{s}(1-e) + 2(\hat{s}-\hat{r})(1-b)\sigma - 2\hat{r}(1-c)\sigma^2\right]. \qquad (3.19)$$
First let us consider the population dynamics system of Wagner & Graf. This system does not consider the evolution of the cloud work function so all processes in table 1 involving changes to A should be neglected. With only a single plume type being considered, the macroscopic fractional cloud number is proportional to cloud-base mass flux, and hence equations (2.7) and (3.18) may be compared directly. To reproduce the structure of equation (2.7) from the individual-based model, one simply includes the induced birth and competitive exclusion processes from table 1. Values for all parameters of the microscopic model could then be set immediately from the known coefficients of equation (2.7). This would constitute a minimal microscopic model fully consistent with the macroscopic population dynamics system. Indeed, exactly such a population model has been considered in other contexts, including spatial effects and interacting types. Notice that inclusion of the death process from table 1 would produce a more complicated microscopic model that would again be entirely consistent with macroscopic population dynamics, although the character of the fluctuations would be rather different. However, the birth processes from one or two empty elements may not be included.
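This correspondence can be illustrated in miniature. The sketch below simulates a pair-sampling model containing only induced birth and competitive exclusion, and checks that the occupied fraction settles near the fixed point of the logistic (Lotka–Volterra-type) macroscopic equation, which for these two processes alone satisfies $2b\sigma(1-\sigma) = c\sigma^2$, i.e. $\sigma^* = 2b/(2b+c)$. All parameter values are arbitrary illustrations:

```python
import random

random.seed(11)

def lv_micro(Omega=600, b=0.05, c=0.5, nsteps=100000, N0=60):
    """Pair-sampling model with only induced birth (b) and competitive
    exclusion (c); returns the time series of the occupied fraction."""
    N = N0
    sigmas = []
    for _ in range(nsteps):
        E = Omega - N
        u = random.random()
        if u < (N / Omega) ** 2:                # both sampled elements occupied
            if random.random() < c:
                N -= 1                          # competitive exclusion
        elif u < (N / Omega) ** 2 + 2 * (N / Omega) * (E / Omega):
            if random.random() < b:             # one occupied, one empty
                N += 1                          # induced birth
        sigmas.append(N / Omega)
    return sigmas

sig = lv_micro()
mean_sigma = sum(sig[len(sig) // 2:]) / (len(sig) // 2)
print(mean_sigma)   # close to the macroscopic fixed point 2*0.05/(2*0.05 + 0.5)
```

Note that N = 0 is absorbing in this minimal model, just as σ = 0 is a fixed point of the macroscopic logistic equation; with these parameters extinction is, however, vanishingly unlikely on the simulated timescale.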
We turn now to the prognostic convective system defined by equations (2.3) and (2.4) with the Pan & Randall postulate [10,35] of equation (2.5). To obtain this system, the quadratic terms in σ appearing in equations (3.18) and (3.19) must be eliminated by neglecting the microscopic model processes involving two occupied elements. All other processes listed in table 1 could be retained if desired, but for a minimal microscopic model, it is necessary to retain only those processes for which a single element is sampled. Notice that the probability for spontaneous birth must be chosen to be proportional to the cloud work function.
The final prognostic system of interest is that defined by equations (2.3) and (2.4) with the Yano & Plant postulate of equation (2.6). As for the previous system, microscopic processes involving two occupied elements are neglected. It is also necessary, however, to eliminate the constant term from equation (3.18) by neglecting the process in table 1 of spontaneous birth and also that of birth arising from two empty elements. Furthermore, to obtain the correct structure, in this case, the probability b for the induced birth process must be chosen to be proportional to the cloud work function, a choice that then necessitates the neglect of the modification process from table 1. These decisions could also be supplemented by the optional neglect of strong destabilization to arrive at the minimal microscopic model corresponding to the prognostic system.
Table 2 summarizes the various forms of the individual-level model that are required in order to produce the three macroscopic convective systems in the limit of large system size.
4. Numerical results
In this section, we present some example results obtained from the three minimal individual-level models that, in the large system-size limit, are equivalent to the three prognostic models of convective systems described in §2d.
For each case, we choose consistent macroscopic parameters, in the sense that the systems have the same equilibrium state in the mass-flux limit of a vanishing fractional cloud area. The values taken are consistent with the ranges found in the cited literature; specifically, we have set F = 10^−2 m^2 s^−3, γ = 1 m^4 kg^−1 s^−2, τ_D = 10^3 s, β = 5×10^4 m^2 s^−1 and α = 5×10^6 m^4 kg^−1. We have set the proportionality factor connecting M_B and σ to 0.1 kg m^−2 s^−1. This is a somewhat small value that will perhaps overestimate the number of clouds required to produce the equilibrium level of mass flux. However, it is convenient in that it allows us to compare results from the individual-based models with those from the macroscopic convective systems easily, without having to take a very large system size or very many realizations of the probabilistic model. For these illustrations, Ω = 1000 and 100 realizations have been simulated.
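As a quick consistency check on these numbers, suppose the quasi-equilibrium balance takes the form F = γ M_B; this particular form is an assumption made here for illustration (it is consistent with the quoted units, but the actual balance is set by equations (2.3) and (2.4), not reproduced in this section). The implied equilibrium cloud number then follows directly:

```python
F = 1e-2           # large-scale forcing, m^2 s^-3
gamma = 1.0        # stabilization per unit mass flux, m^4 kg^-1 s^-2
m_per_sigma = 0.1  # mass flux per unit fractional cloud area, kg m^-2 s^-1
omega = 1000       # system size (number of elements)

mb_eq = F / gamma               # equilibrium cloud-base mass flux, kg m^-2 s^-1
sigma_eq = mb_eq / m_per_sigma  # equilibrium fractional cloud area, ~0.1
n_eq = sigma_eq * omega         # implied equilibrium cloud number, ~100
```

Around 100 clouds at Ω = 1000 is large enough for the macroscopic limit to be recognizable while fluctuations remain visible, consistent with the remark above about not needing a very large system size.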
The above macroscopic parameters are sufficient to determine all of the relevant parameters for the equivalent minimal individual-based models of both the population dynamics system  and the Pan & Randall system [10,35]. For the minimal equivalent to the Yano & Plant system , one microscopic parameter remains undetermined. We have chosen this to be μ and have assigned it the arbitrary value of 0.1. It cannot be determined from macroscopic considerations alone, but should properly be set from investigation of the fluctuations in those convective systems that are well described by the model. Of course, the same remark holds for all three models in respect of whether and how any non-minimal processes should also be included in order to account more fully for convective fluctuations.
Figure 1 shows time series of cloud-base mass flux from the three systems, including both individual-level results and the results from the macroscopic equations. The individual-level models do not reproduce the equilibrium state predicted by the macroscopic systems of §2d because those macroscopic systems assume a vanishing cloud fractional area. However, it is straightforward to modify those systems to account for finite cloud fractional area. One can simply take the minimal individual-level model necessary to produce the appropriate form of equations (3.18) and (3.19) and then apply the choices of processes and parameter settings to the complete macroscopic ordinary differential equations, as given by equations (3.16) and (3.17). The results from the ensemble mean of the individual-level models agree very well with these modified macroscopic systems, as indeed they should for a large enough system. This is despite the fact that there are very clear fluctuations in the time series from individual realizations. The difference between the prognostic system of §2d and a realization of the individual-based model is particularly apparent for the Yano & Plant system . As noted in §2d, this system exhibits a periodic cycle of convective recharge and discharge, but we find here that it can be slowly driven towards equilibrium through the effects of finite cloud fractional area. Nonetheless, the periodic cycle remains manifest, even in longer simulations with initial transients removed: a power spectrum of the fluctuations shows a peak associated with the orbital period of , which is 4 h for the parameters used here.
Results for a smaller system size of Ω=100 are shown in figure 2. For this domain, the individual-level model for the population dynamics system does show some departures from the macroscopic limit; the limit is more closely respected by the equivalent to the Pan & Randall system. For the individual-level equivalent of the Yano & Plant system, convective activity dies off completely after a few hours of the example realization, never to resume. Convection was extinguished in all 100 realizations by 28 h of simulation. For this domain size and with these parameter settings, the fluctuations in the Yano & Plant equivalent are strong enough occasionally to remove all clouds present. This microscopic model does not permit the convective cloud field to recover from such an eventuality, since cloud formation may occur only if induced by pre-existing clouds.
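The extinction behaviour has a simple origin: when every birth is induced, n = 0 is an absorbing state of the microscopic dynamics. The sketch below demonstrates this with a continuous-time birth-death chain; the rate forms b n (Ω − n)/Ω and d n are illustrative stand-ins, not the calibrated Yano & Plant rates.

```python
import random

def gillespie_induced(omega, n0, b=1.0, d=1.0, t_max=1000.0, seed=1):
    """Birth-death chain in which every birth is induced by an existing cloud:
        rate_birth = b * n * (omega - n) / omega
        rate_death = d * n
    Both rates vanish at n = 0, so extinction is permanent."""
    random.seed(seed)
    n, t = n0, 0.0
    while t < t_max:
        rb = b * n * (omega - n) / omega
        rd = d * n
        total = rb + rd
        if total == 0.0:                 # absorbing state reached
            return n, t
        t += random.expovariate(total)   # waiting time to the next event
        n += 1 if random.random() < rb / total else -1
    return n, t
```

With these illustrative rates the birth rate never exceeds the death rate, so a small system is driven to extinction quickly; the essential point, though, is only that no process can fire once n = 0.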
Returning now to the Ω=1000 domain, PDFs for the number of clouds present are shown in figure 3. For the population dynamics  and Pan & Randall systems [10,35], the results are very well approximated by a Poisson distribution, in accordance with the theoretical expectations of Craig & Cohen  (§2c). In contrast, the distribution from the individual-level equivalent of the Yano & Plant system  is much wider. This system was not designed to produce a highly stable equilibrium state, but rather to demonstrate the cycles of recharge and discharge that are characteristic of some convective systems. The different distributions for cloud number can be understood in terms of the different mechanisms for cloud formation in the equivalent individual-level models. For the equivalent to the Pan & Randall system, clouds are formed spontaneously at empty elements, and the number of such empty elements deviates only weakly from E=Ω−N≈Ω. By contrast, in the equivalent to the population dynamics and Yano & Plant systems, clouds are formed only in association with pre-existing clouds; N is much more susceptible to fluctuations, and the formation mechanism will itself tend to amplify the fluctuations. The population dynamics system, however, has a compensating mechanism because the removal rate from competitive exclusion also depends on the number of pre-existing clouds.
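The Poisson behaviour of the spontaneous-birth mechanism can be checked directly. In the sketch below (illustrative rates, not the calibrated values), clouds appear at empty elements at rate b(Ω − n) and decay at rate d n; because n ≪ Ω the birth rate is nearly constant, so the stationary cloud number should be close to Poisson, with variance-to-mean ratio near unity.

```python
import random

def sample_cloud_numbers(omega=1000, b=0.001, d=0.1,
                         t_burn=200.0, t_run=5000.0, seed=2):
    """Gillespie simulation of the spontaneous-birth mechanism:
        rate_birth = b * (omega - n)   (a cloud forms at an empty element)
        rate_death = d * n
    Returns the time-weighted mean and variance of n after burn-in."""
    random.seed(seed)
    n, t = 0, 0.0
    s0 = s1 = s2 = 0.0           # accumulated time, sum n dt, sum n^2 dt
    while t < t_burn + t_run:
        rb, rd = b * (omega - n), d * n
        dt = random.expovariate(rb + rd)
        if t > t_burn:           # only accumulate statistics after burn-in
            s0 += dt
            s1 += n * dt
            s2 += n * n * dt
        t += dt
        n += 1 if random.random() < rb / (rb + rd) else -1
    mean = s1 / s0
    var = s2 / s0 - mean * mean
    return mean, var
```

For these rates the stationary mean is Ω b/(b + d) ≈ 9.9 and the variance-to-mean ratio is ≈ 0.99, i.e. very nearly Poisson, in line with the Craig & Cohen expectation quoted above.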
In reality, both primary and secondary mechanisms of cumulus cloud formation do occur, the extent of each being rather sensitive to the prevailing meteorological and topographical conditions . One might therefore reasonably expect that each of the macroscopic models investigated here could perform well in different limiting cases, where different microscopic processes are more or less important. To capture the full range of convective behaviours, a hybrid of the existing macroscopic models would presumably be needed.
The description of atmospheric convection as a collection of distinct plumes has a long history. It is an instructive basis from which to seek to understand many features of convective systems, and still underlies most current convective parametrizations. Some brutal simplifying assumptions have usually been imposed in such studies, but there has been an increasing recognition in recent years that some of those simplifications may be neither necessary nor desirable. Satisfactory models for SCD remain to be developed, and tools from statistical physics are likely to be required to do so.
In this article, we have proposed a new modelling framework that is well suited to the description of collections of convective plumes. In doing so, we have been mindful of two common simplifications in particular: the assumptions of convective quasi-equilibrium and of large cloud numbers. However, we believe the framework to be easily extendable [22,51] to examine other important issues in atmospheric convection, for example, the role of various (self-)organizational mechanisms in developing spatial structure. Previous authors have developed models for the time evolution of convection and for stochastic effects [10–15,18,25,35]. The present work attempts to marry some of those earlier models in a unified description that is both stochastic and prognostic from the outset.
The modelling framework is developed from the individual level of single clouds. Each cloud is treated identically and extremely simply here, and is formed, modifies its environment and meets its demise according to straightforward probabilistic rules. Doubtless, there is much scope for elaboration on each of these points. Thus, we prefer to speak of a framework rather than a complete model for SCD. However, the simplest treatment is quite sufficient for the present aims. Great stress has been placed throughout on the notion that the individual-level model should reduce to the systems of some previous studies in the appropriate limits. By means of van Kampen's system-size expansion, we can show that by making appropriate choices of the processes included in the individual-level model, we can recover previous prognostic models in the limits of a large system size and a vanishing cloud fractional area. Moreover, by making appropriate choices of the processes that form clouds, we can also recover previous predictions for fluctuations in cloud number at equilibrium.
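The key technical step can be stated compactly. In van Kampen's expansion, the cloud number is split into a deterministic part of order Ω and fluctuations of order Ω^{1/2}; the notation below is the generic textbook form, not that of equations (3.16)-(3.19):

```latex
N(t) \;=\; \Omega\,\phi(t) \;+\; \Omega^{1/2}\,\xi(t) .
```

Substituting this ansatz into the master equation and collecting powers of Ω^{-1/2}, the leading order reproduces the deterministic macroscopic equations for φ, while the next order yields a linear Fokker-Planck equation for ξ, i.e. Gaussian fluctuations about the macroscopic trajectory.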
As a result, we assert that the proposed framework has been well established as a methodology that both encompasses and extends current attempts to develop a theory of SCD. For instance, we have already been able to gain some insights into previously proposed prognostic systems by establishing and simulating their equivalent individual-based models. It is certainly not obvious from the original articles that the primary difference between the Pan & Randall [10,35] and Yano & Plant  systems is an implicit assumption about the dominant microscopic process of convective cloud initiation. Intermediate models that admit both processes would seem more physically reasonable; they could very easily be built in the present framework and would be well worthy of further investigation.
Useful discussions with Jun-Ichi Yano are acknowledged, and were enabled by a joint project award by the Royal Society and CNRS. Some of the work was conducted during a visiting fellowship at the Isaac Newton Institute for Mathematical Sciences as part of their programme on Mathematical and Statistical Approaches to Climate Modelling and Prediction. I am grateful for many discussions there, for various discussions supported through COST Action ES0905 and also for constructive suggestions on the manuscript from the referees.
One contribution of 13 to a Theme Issue ‘Climate predictions: the influence of nonlinearity and randomness’.
1. The cited studies used cloud definitions that will have encompassed shallow, non-precipitating clouds, and so are likely to give an underestimate for the deep, precipitating clouds, which are the focus here. That point reinforces the argument given in the main text.
This journal is © 2012 The Royal Society.