The need to represent uncertainty resulting from model error in ensemble weather prediction systems has spawned a variety of ad hoc stochastic algorithms based on plausible assumptions about sub-grid-scale variability. Currently, few studies have been carried out to test the veracity of such schemes, and it seems likely that some implementations of stochastic parametrization misrepresent the true source of model uncertainty. This paper describes an attempt to quantify the uncertainty in physical parametrization tendencies in the European Centre for Medium-Range Weather Forecasts (ECMWF) Integrated Forecasting System with respect to horizontal resolution deficiency. High-resolution truth forecasts are compared with matching target forecasts at much lower resolution after coarse-graining to a common spatial and temporal resolution. In this way, model error is defined and its probability distribution function is examined as a function of tendency magnitude. It is found that the temperature tendency error associated with convection parametrization and explicit water phase changes behaves like a Poisson process, for which the variance grows in proportion to the mean, suggesting that the assumptions underpinning the Craig and Cohen statistical model of convection might also apply to parametrized convection. By contrast, radiation temperature tendency errors have a very different relationship to their mean value. These findings suggest that the ECMWF stochastic perturbed parametrization tendency scheme could be improved, since it assumes that the standard deviation of the tendency error is proportional to the mean. Using our finding that the variance of the error is proportional to the mean, a prototype stochastic parametrization scheme is devised for convective and large-scale condensation temperature tendencies and tested within the ECMWF Ensemble Prediction System. A significant impact on forecast skill is shown, implying potential for further development.
One of the principal sources of error in numerical weather prediction (NWP) and climate models comes from sub-grid-scale physical parametrization terms which aim to represent the effect of unresolved physical processes. Parametrization is required for a wide range of physical processes operating from the grid scale (e.g. mesoscale convective systems) to the microscale (e.g. water droplet growth and condensation). The algorithmic form of parametrization terms depends substantially on the resolution of the forecast model into which they are added. For instance, deep convection parametrization (as used in current climate models) is not required in the Met Office's UKV or the HARMONIE (Hirlam Aladin Regional/Meso-scale Operational NWP In Europe) convection-permitting limited area models, which have horizontal grid lengths of 1.5 and 2.5 km, respectively. Traditionally, parametrization has aimed to provide a best estimate of the instantaneous vertical profile of temperature, humidity and wind tendencies based on the variation with height of vertical fluxes of heat, moisture and momentum. These parametrization schemes are sometimes justified as providing ensemble-mean estimates from a sub-grid model that assumes statistical equilibrium (e.g. for convection parametrization). In the case of orographic gravity wave drag parametrization, idealized steady-state models are assumed to provide a reasonable estimate of the instantaneous vertical momentum fluxes. Uncertainty in these parametrization schemes tends to originate from the oversimplified specification of the sub-grid-scale orography and the assumption of linear flow dynamics.
As horizontal resolution increases in forecast models, the statistical equilibrium assumption for deep convection parametrization becomes increasingly untenable and the instantaneous tendency errors grow. However, with increasing resolution comes a better representation of small-scale flow structure, orography, land-sea and lake boundaries, and land surface type, all of which play an important part in the initiation of convection. In spite of the tendency errors related to the departure from statistical equilibrium, it is likely that this additional fine-scale information greatly improves the performance of convection parametrization. Even though deep convection parametrization was not really designed to operate in models with grid lengths smaller than about 30 km, it appears to perform sufficiently well without major reformulation at least until grid lengths of about 10 km are reached. In this paper, we assume that the performance of parametrization schemes in such high-resolution forecast models is substantially better than the corresponding performance in the same models run with an order of magnitude less resolution (when coarse-grained to the same scale). The fine-scale version of the model is then regarded as truth; the lower-resolution version is the target, and the difference between their coarse-grained parametrization tendencies is defined to be the parametrization error due to horizontal resolution.
Shutts & Palmer  analysed numerical simulations of deep convection using a coarse-graining technique to deduce probability distribution functions (PDFs) of diabatic tendencies for different ranges of convective forcing (diagnosed using a convection parametrization scheme). Of particular interest was the dependence of the standard deviation of the diabatic tendencies on the mean tendency. In consequence, some credence was given to the use of a linear standard deviation-to-mean relationship, as assumed in the stochastic perturbed parametrization tendency (SPPT) scheme in the European Centre for Medium-Range Weather Forecasts (ECMWF) Ensemble Prediction System (EPS) [2,3]. However, later work reported by ourselves at an ECMWF workshop  suggested a more convincing variance-proportional-to-the-mean relationship. That study was based on comparing parametrization tendencies from T1279 and T159 forecasts over matching 12 h periods, coarse-grained to a common spatial scale using a quasi-Gaussian spectral filter. The following section describes these calculations more fully and fits the PDF of the convective temperature tendency errors both to Poisson distributions and to those deduced from the statistical convection theory of Craig & Cohen . The third section uses these results to define another form of perturbed-tendency stochastic parametrization scheme. A summary and conclusions are then presented in the final section.
2. Coarse-graining results from ECMWF forecasts
The coarse-graining procedure adopted here is to run two forecasts from the same start time but at very different horizontal resolutions. As in our previous study, the ECMWF Integrated Forecasting System (IFS) is run at T1279 (the ‘truth’) and T159 (the ‘target’) horizontal resolutions. The aim is to characterize the PDF of T159 model error with respect to the T1279 forecasts. Obviously, there are many factors contributing to parametrization error within the design of the schemes themselves and so our truth forecasts will still be deficient at some level. Ideally, one would wish to have a truth forecast in which deep convection is explicitly represented but that is presently only achievable using the experimental ‘small planet’ configuration of the IFS . In some respects, the present coarse-graining results could be interpreted as a lower bound on model error mainly associated with horizontal resolution truncation in NWP models.
Spatial coarse-graining of both forecasts to the same horizontal scale is achieved using the simple quasi-Gaussian filter described by Weaver & Courtier , in which each of the spherical harmonic coefficients in the spectral representation of the parametrization tendencies is multiplied by a damping factor F(n) (equation (2.1)), where L_R is the horizontal filter scale, a is the Earth's radius and n is the total wavenumber (more precisely, the degree of the associated Legendre function in the spectral expansion). Coarse-graining in time is achieved using a triangular weighting function G(t) (equation (2.2)), where k is the number of hours into the forecast, with k = 1, 2, 3, …, 11. The motivation for this 12 h time window centred on T+6 h was to coarse-grain over a time period that roughly matches the 6 h decorrelation time of the pattern field in the SPPT scheme. It is assumed that any initial model imbalance will not affect the calculation unduly, and that the synoptic-scale flow in each of the two forecasts is essentially the same up to T+12 h.
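The two coarse-graining filters can be sketched as follows. Since equations (2.1) and (2.2) are not reproduced above, the Gaussian-shaped spectral response and the normalized triangular time weights used here are plausible illustrative forms, not necessarily the paper's exact ones:

```python
import numpy as np

A_EARTH = 6.371e6  # Earth's radius a (m)

def spectral_filter(n, l_r):
    """Gaussian-shaped spectral response for total wavenumber n and
    filter scale l_r (m).  Illustrative stand-in for F(n) in eqn (2.1)."""
    return np.exp(-n * (n + 1) * l_r**2 / (4.0 * A_EARTH**2))

def triangular_weights(hours=np.arange(1, 12), centre=6.0):
    """Triangular time weights over a 12 h window centred on T+6 h,
    normalized to sum to one (assumed normalization)."""
    w = np.maximum(0.0, 1.0 - np.abs(hours - centre) / centre)
    return w / w.sum()

n = np.arange(160)                  # total wavenumbers resolved at T159
f250 = spectral_filter(n, 250e3)    # 250 km coarse-graining scale
f500 = spectral_filter(n, 500e3)    # 500 km scale damps small scales harder
assert f500[100] < f250[100] < 1.0

w = triangular_weights()
assert abs(w.sum() - 1.0) < 1e-12 and np.argmax(w) == 5  # peak at T+6 h
```

In practice the damping would be applied directly to the spherical harmonic coefficients of the archived tendency fields before transforming back to grid space.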
These coarse-graining filters are applied to the physical parametrization tendencies in both the truth and the target 12 h forecasts, and the filtered fields are then archived every forecast hour. In the post-processing stage, parametrization fields from both forecasts are extracted on the same latitude/longitude grid. Grid points in the target forecast with tendencies lying within specified narrow ranges are identified, and the PDF of the corresponding tendencies at the same grid points in the truth forecast is determined. The variation of this PDF with increasing target forecast tendency is of particular interest.
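The conditional sampling procedure just described amounts to a binned statistic. The sketch below applies it to synthetic data constructed so that the truth-minus-target variance grows linearly with the mean (the behaviour reported in this section), with the bin width taken from the roughly 0.16 K/day ranges quoted later; the data, threshold and 0.35 dispersion value are illustrative:

```python
import numpy as np

def conditional_stats(target, truth, bin_width=0.16, min_count=1000):
    """Group grid points into narrow bins of target tendency (K/day) and
    return the mean and variance of the matching truth tendencies."""
    edges = np.arange(target.min(), target.max() + bin_width, bin_width)
    idx = np.digitize(target, edges)
    stats = {}
    for b in np.unique(idx):
        sample = truth[idx == b]
        if sample.size >= min_count:        # skip poorly sampled bins
            stats[edges[b - 1]] = (sample.mean(), sample.var())
    return stats

# Synthetic illustration: truth = target + error whose variance is
# 0.35 times the target tendency (an index of dispersion of 0.35).
rng = np.random.default_rng(0)
target = rng.uniform(0.1, 4.0, 200_000)
truth = target + rng.normal(0.0, np.sqrt(0.35 * target))
stats = conditional_stats(target, truth)
dispersion = np.array([v / m for m, v in stats.values()])
assert np.allclose(dispersion, 0.35, atol=0.05)   # variance ∝ mean recovered
```

The same binning applied to real coarse-grained tendency fields yields the curves discussed in figure 1 and table 1.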
The coarse-grained data are provided by two sets of 24 forecasts made at T1279 (16 km grid) and T159 (126 km grid)—both with the same 91 vertical levels. The forecast start-times are 12Z on the 20th day of each month in 2009 and 2010. The coarse-graining scale LR is chosen to be 250 km in one case and 500 km in another, the latter roughly matching the quasi-Gaussian scale of the dominant pattern used in the ECMWF SPPT scheme. The hope is that the PDF of parametrization error can be expressed as a function of mean tendency and that this information can be used to improve the SPPT scheme or develop a new stochastic formulation.
As there exists a statistical theory for convection , it is appropriate to focus on model convection parametrization. First, though, it is instructive to look at the variation of the standard deviation of the T1279 (truth) parametrized temperature tendencies about their mean when sampled according to narrow ranges of the T159 (target) temperature tendency. Figure 1 shows this dependence for the total parametrized temperature tendency and its individual components at 400 hPa (where latent heat release in condensation tends to maximize in the tropics). For reference, the contribution from the advective dynamical temperature tendency is shown too; compared with the total physical tendency, it gives credibility to the common assumption that the latter is more uncertain than the former. The first thing to note is that the biggest range of values comes from deep convection parametrization and the large-scale latent heat of condensation/evaporation. The parametrized convection tendencies lie predominantly on the positive tendency side since they represent subsidence warming. The dashed and dash-dotted lines show that the standard deviation of the temperature tendencies is approximately proportional to the square root of the mean tendency, i.e. the variance is proportional to the mean. A closer look at this mean–variance relationship can be found in the last column of table 1, which reveals more explicitly that it holds quite well for the higher temperature tendency values, where the ratio of the variance to the mean is nearly constant at about 0.35, although it is less clear for the weaker tendencies. On the other hand, the second column of table 1 shows that a mean–standard deviation relationship is far from being satisfied. In contrast to the convection temperature tendency, the radiation temperature tendency shows a very different relation, with a minimum at about −1 K per day and some asymmetry about this point.
The fact that the minimum is displaced to negative values probably corresponds to regions of small cloud cover at 400 hPa that are therefore dominated by clear-sky infrared cooling, which one might expect to be similar in the T1279 and T159 forecasts. It should be noted as well that this minimum does not fall to zero like the other parametrization uncertainties, a feature due solely to the thermal long-wave radiation (not shown). One possible reason is thin cloud, e.g. cirrus differences between the T1279 and T159 forecasts in almost clear skies. Additional very small contributions to the parametrized temperature tendency from parametrized turbulent dissipation and non-orographic gravity wave drag can be ignored (not shown).
The total parametrized temperature tendency tends to follow the radiation tendency on the negative side and lies midway between the large-scale condensation and convection curves on the positive side (figure 1). This is because infrared cooling in the radiation scheme affects all grid points (the bigger the bullet in figure 1, the higher the number of grid points), whereas the cooling due to large-scale evaporation is mainly associated with the evaporation of ice crystals and super-cooled droplet clouds, which are less widespread. The dotted blue lines are included for reference and represent the linear relationship between the standard deviation and the mean assumed in the SPPT scheme. This relationship is not supported by these coarse-graining results, although it could be argued that most of the data points are close to the origin, where a linear fit may suffice.
The fact that the variance seems to be proportional to the mean is reminiscent of Poisson statistics and, particularly in respect of convection, suggests that the assumptions underpinning the Craig–Cohen statistical model of convection might also apply to parametrized convection.
The Craig–Cohen  statistical model of convection represents a region of deep convective clouds in terms of their vertical mass fluxes alone. The ensemble-mean mass flux (〈M〉) is assumed to be determined by some large-scale forcing but fluctuations in the areal-mean mass fluxes are permitted. Key assumptions supporting the theory are that all clouds are independent of one another and that the mean mass flux per individual cloud (〈m〉) is independent of the large-scale forcing. Cloud mass flux contributions to the total mass flux (M) follow a homogeneous Poisson process with rate parameter equal to 1/〈m〉. Consistent with this, the individual cloud mass fluxes m have an exponential probability distribution. With the number of clouds N in the given region also following a Poisson distribution (with ensemble-mean cloud number 〈N〉), the PDF for M (equation (2.3)) can be determined .
In order to examine this connection with the Craig–Cohen model, frequency distributions at 500 hPa of coarse-grained T1279 (truth) temperature tendencies are plotted for grid points whose T159 (target) forecasts have coarse-grained tendencies lying in narrow ranges of about 0.16 K/day. As stated earlier, the difference between the 500 km coarse-grained T1279 and T159 tendencies can be thought of as a lower bound on model error, essentially due to insufficient horizontal resolution in the latter. The frequency distribution data are approximated by three separate distribution functions P(parameter1, parameter2, k):
— Craig–Cohen distribution:

P(α, β, k) = e^(−α − k/β) √(α/(βk)) I₁(2√(αk/β)), (2.3)

with α = 〈N〉, β = 〈m〉 and k = M in the notation of Craig and Cohen, and I₁( ) the modified Bessel function of first order. Note that 〈M〉 = 〈N〉〈m〉, i.e. αβ equals the mean.
— Poisson distribution:

P(λ, k) = λ^k e^(−λ)/k!, (2.4)

with λ equal to both the mean and the variance (k! is evaluated as Γ(k + 1) for non-integer k).
— normal distribution:

P(μ, σ, k) = (1/(σ√(2π))) exp(−(k − μ)²/(2σ²)), (2.5)

where μ is the mean and σ is the standard deviation.
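The three fitting functions can be written down explicitly in code. The Craig–Cohen form below is the standard compound-Poisson result (Poisson cloud number, exponential individual mass fluxes), assumed here to be the form of equation (2.3), with the dry delta function at k = 0 (probability e^(−α)) omitted:

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.special import gammaln, i1

def craig_cohen_pdf(k, alpha, beta):
    """Continuous part of the Craig-Cohen PDF (equation (2.3)) for total
    mass flux k > 0, with alpha = <N> and beta = <m>; the delta function
    at k = 0 carrying probability exp(-alpha) is omitted."""
    k = np.asarray(k, dtype=float)
    return np.exp(-alpha - k / beta) * np.sqrt(alpha / (beta * k)) * \
        i1(2.0 * np.sqrt(alpha * k / beta))

def poisson_pdf(k, lam):
    """Poisson distribution (equation (2.4)), with k! written as
    Gamma(k + 1) so the fit can be evaluated at non-integer tendencies."""
    return np.exp(k * np.log(lam) - lam - gammaln(k + 1.0))

def normal_pdf(k, mu, sigma):
    """Normal distribution (equation (2.5))."""
    return np.exp(-0.5 * ((k - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

# Consistency checks: the continuous part integrates to 1 - exp(-alpha)
# and the full distribution has mean alpha*beta = <N><m> = <M>.
alpha, beta = 4.0, 0.5
k = np.linspace(1e-9, 40.0, 400_001)
p = craig_cohen_pdf(k, alpha, beta)
assert abs(trapezoid(p, k) - (1.0 - np.exp(-alpha))) < 1e-3
assert abs(trapezoid(k * p, k) - alpha * beta) < 1e-2
```

Fitting these functions to the binned frequency distributions (e.g. by least squares over each narrow T159 tendency range) reproduces the comparison shown in figure 2.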
Figure 2 shows frequency distributions for four narrow ranges of T159 tendency and associated best fits for the above distribution. Figure 2a shows the case of very weak coarse-grained tendencies (0.08–0.24 K/day) in the T159 forecasts. Both Poisson and Craig–Cohen distributions can provide a satisfactory fit but the normal distribution is unable to represent the almost exponential decay of the frequency distribution. Current stochastic parametrizations like the SPPT scheme assume normal distribution functions and so this finding could be of some importance, especially for weak tendencies where there are a lot of cases. Figure 2b for the T159 forecast tendency range (1.01–1.17 K/day) still shows a frequency distribution that decreases monotonically and, again, Poisson and Craig–Cohen distributions offer the best fits. In figure 2c for the tendency range 1.94–2.10 K/day, the shape of the frequency distribution is now dominated by a maximum located close to (but slightly less than) the T159 tendency range. Now the normal distribution is a much better fit and the Craig–Cohen distribution is again the best. Figure 2d shows the frequency distribution for the highest T159 tendencies, i.e. in the range 4.28–4.44 K/day where there are fewer cases. Again the Craig–Cohen distribution gives the closest fit and now the normal distribution curve has a better fit than the Poisson distribution (noting however that Poisson curve fitting is disadvantaged by only having a single parameter).
Eqn (18) of Craig & Cohen  for the variance of the convective mass flux M can be written as

〈(δM)²〉 = 2〈m〉〈M〉, (2.6)

where δM = M − 〈M〉. Because the mean individual cloud mass flux 〈m〉 has a constant value, owing to the aforementioned assumption of independence from the large-scale forcing, equation (2.6) implies that the variance of the cloud mass flux is proportional to the mean mass flux. As convective temperature tendencies scale with the convective mass flux to a first approximation, this relation is consistent with the form of the convective tendency curve in figure 1.
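This variance relation follows directly from the compound-Poisson construction: the variance of the total flux is 〈N〉 times the second moment of an exponential distribution, i.e. 〈N〉·2〈m〉² = 2〈m〉〈M〉. A quick Monte Carlo check, with illustrative parameter values:

```python
import numpy as np

# Draw N ~ Poisson(<N>) clouds per region, each with an exponentially
# distributed mass flux of mean <m>; the variance of the total flux M
# should then be 2<m><M>, with <M> = <N><m>.
rng = np.random.default_rng(1)
mean_n, mean_m, n_regions = 20.0, 0.5, 100_000
n_clouds = rng.poisson(mean_n, size=n_regions)
fluxes = rng.exponential(mean_m, size=n_clouds.sum())
region = np.repeat(np.arange(n_regions), n_clouds)
total = np.bincount(region, weights=fluxes, minlength=n_regions)

mean_flux = mean_n * mean_m                   # <M> = 10
assert abs(total.mean() - mean_flux) < 0.05
assert abs(total.var() - 2.0 * mean_m * mean_flux) < 0.2
```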
These results suggest that the assumptions made by Craig and Cohen to formulate their statistical theory of convection usefully apply also to parametrized convection. However, a strict interpretation of their theory would require the parameter 〈m〉 (β here) to be constant and independent of the large-scale forcing (as represented here by the narrow ranges of T159 temperature tendency). The β values that provide the best fits in figure 2 are 0.62, 0.59, 0.37 and 0.23 (in order of increasing T159 tendency range) and so decrease with large-scale forcing. Our feeling is that one should not take such an idealized model too literally in respect of quantitative application, given the severity of its underlying assumptions and the way convective mass flux has been interpreted as temperature tendency here. Nevertheless, it is interesting to see the effect of forcing β to take a constant value of 0.45 in the Craig–Cohen distribution function (figure 2). The constant-β Craig–Cohen distribution achieves a fit lying between those of the original two-parameter Craig–Cohen distribution and the Poisson distribution, and better than the normal distribution (except in the last plot, for the tendency range 4.28–4.44 K/day). It therefore seems necessary for the Craig–Cohen distribution to use two parameters in order to achieve a satisfactory fit.
Nevertheless, all of these distributions provide some statistical information that can be used in otherwise ad hoc stochastic parametrization. It is also notable that the explicit large-scale condensation/evaporation temperature tendencies follow a variance-proportional-to-mean relationship with similar frequency distributions (not shown); this may result from the close linkage between resolved diabatic processes and parametrized convection in the tropics. Variables other than temperature, such as vorticity and specific humidity, also follow this relationship (plots not shown).
3. Stochastic parametrized tendency perturbation scheme
(a) Preliminary tests: modified stochastic perturbed parametrization tendency
A preliminary test in which the parametrized convection tendencies are stochastically perturbed, in such a way that the variance is proportional to the mean, was carried out within the framework of the SPPT formulation. This was achieved by substituting the model parametrized tendency X_n for each prognostic variable at time step n with a pseudo-tendency equal to the geometric mean of X_n and some constant reference tendency X_R (together with a factor to preserve the correct sign), i.e.

δX_n = Ψ_n sgn(X_n) √(|X_n| X_R), (3.1)

where δX_n is the required tendency perturbation and Ψ_n is the spectral pattern generator field. The reference tendencies for temperature, water vapour and momentum were determined using the coarse-graining procedure described in §2.
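The pseudo-tendency construction described above (a pattern field multiplying the sign-preserving geometric mean of the model tendency and a constant reference tendency) can be sketched directly; the symbols and the 0.34 K/day reference value mentioned in the text are used for illustration only:

```python
import numpy as np

def pseudo_tendency_perturbation(x, x_ref, psi):
    """Perturbation built from the geometric mean of the model tendency x
    and a constant reference tendency x_ref, sign-preserving, scaled by
    the pattern field psi (a sketch of the modified-SPPT test; the exact
    operational implementation is not reproduced here)."""
    return psi * np.sign(x) * np.sqrt(np.abs(x) * x_ref)

# With x_ref = 0.34 K/day, weak tendencies are perturbed more strongly
# than under a linear (SPPT-like) psi*x rule, and strong ones less so.
x = np.array([0.1, 1.0, 4.0])                      # K/day
pert = pseudo_tendency_perturbation(x, 0.34, 1.0)
assert pert[0] > x[0] and pert[2] < x[2]
assert np.allclose(pert**2, 0.34 * x)              # variance ∝ mean by design
```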
This first-attempt scheme was tested in 10-day EPS forecasts with the ECMWF IFS and only convection parametrization tendencies were perturbed.
The probabilistic skill scores (not shown) were in general slightly worse for the first 5–6 days of lead time, but better at days 8–10, relative to operational SPPT (similarly applied only to convection tendencies). The poorer skill in the early part of the forecast was possibly due to excessively large perturbations at low tendencies. This can be seen by considering the 0–1 K/day temperature tendency interval: the index of dispersion of 0.34 K/day applied in the modified SPPT perturbations (i.e. the constant reference tendency from the legend in figure 1) is larger than the 0.21 K/day value obtained from the coarse-graining results (table 1), and also larger than the roughly calculated 0.16 K/day value implied by the operational SPPT (compare the black dashed, red solid and blue dotted lines in figure 1 over the same interval).
(b) Formulation of the new scheme
In this paper, the following algorithm for perturbing numerical forecast model parametrization tendencies has been designed to be consistent with the coarse-graining results of the previous section and differs from the operational tendency perturbation schemes used by Environment Canada and ECMWF (the aforementioned SPPT scheme) in their EPSs. Instead of assuming that the standard deviation of the parametrized tendency uncertainty is proportional to the mean tendency as those schemes do, it is assumed here that the variance of the uncertainty is proportional to the mean tendency. This can be achieved by scaling the random noise term in an autoregressive model of order 1 (AR1) process by the geometric mean of two tendencies: the actual model tendency at a grid point and a constant reference tendency determined from the coarse-graining results. However, the Craig–Cohen distribution is not used at this stage, primarily because we wish to retain the simplicity of an SPPT-like approach whilst improving its statistical properties. The scheme has been implemented in the ECMWF IFS for parametrized temperature tendencies only and the equations defining the scheme will be couched in terms of temperature tendencies.
An AR1 process is defined for every grid point by

δṪ_{n+1} = φ δṪ_n + (r_n/S) √((2Δt/τ) Ṫ_R Ṫ_n), (3.2)

where δṪ_n is the perturbation temperature tendency at time step n; r_n is a random number with zero mean and unit variance; φ = 1 − Δt/τ; Δt is the time step; τ is the temporal decorrelation time; Ṫ_R is a reference temperature tendency (consistent with the previous section's coarse-graining results); Ṫ_n is the parametrization temperature tendency in the forecast model at time step n; and the smoothing factor S is given by

S² = Σ_{n=0}^{N_T} (2n+1) F²(n) / Σ_{n=0}^{N_T} (2n+1), (3.3)

where N_T is the truncation wavenumber and where F(n), restated in equation (3.4), is the quasi-Gaussian filter response of equation (2.1).
S accounts for the reduction in standard deviation of a two-dimensional field of independent random numbers following Gaussian smoothing on the sphere. Here, rn is drawn from a uniform probability distribution but other choices are possible.
The ensemble-mean of the variance of δṪ_n is, asymptotically, given by

〈δṪ²〉 ≈ Ṫ_R Ṫ/S², (3.5)

and, to avoid spin-up, the initial perturbation is contrived to have approximately the same global variance by setting

δṪ_0 = (r_0/S) √(Ṫ_R Ṫ_0). (3.6)
Equation (3.2) is used at every grid point and so the resulting field is very noisy. To impose spatial correlation scales, the global field is expanded into a spherical harmonic series and spectral smoothing is applied, i.e.

δṪ(λ, μ) = Σ_{m=−N_T}^{N_T} Σ_{n=|m|}^{N_T} δṪ_n^m P_n^m(μ) e^{imλ}, (3.7)

where the coefficients δṪ_n^m are obtained from the standard spherical harmonic transform (equation (3.8)); m is the zonal wavenumber; P_n^m(μ) is the associated Legendre function of degree n; λ and μ are longitude and sine of latitude, respectively; N_T is the aforementioned upper bound on wavenumbers that defines this triangular truncation; and the harmonic coefficients satisfy the reality condition

δṪ_n^{−m} = (δṪ_n^m)*, (3.9)

where the asterisk denotes the complex conjugate. A smoothed perturbation tendency field is then defined by

δṪ_s(λ, μ) = Σ_{m=−N_T}^{N_T} Σ_{n=|m|}^{N_T} F(n) δṪ_n^m P_n^m(μ) e^{imλ}, (3.10)

and it can be shown that, with sufficient accuracy for our purposes,

〈δṪ_s²〉^{1/2} ≈ √(Ṫ_R Ṫ), (3.11)

which matches the expression used in §2 to fit curves for the uncertainty of the coarse-grained tendency versus the mean tendency. Therefore, in principle, one can use the coarse-graining procedure to determine the magnitude of the noise term through the parameter Ṫ_R. As such, Ṫ_R would be a function of height at the very least, and possibly a function of geographical location (e.g. land or sea). For convenience, it is assumed to be constant here, in the same way that the SPPT scheme assumes a height-independent proportionality constant between the standard deviation of the tendency perturbations and the mean (outside of the boundary layer and stratosphere). Note that in an extended scheme that perturbs humidity tendencies as well, the corresponding reference tendency q̇_R (where q is the specific humidity) would be very strongly height-dependent because of the rapid decrease of q with height and also, potentially, a function of latitude.
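A minimal sketch of the grid-point AR1 process follows. The discretization constants here are chosen so that the stationary variance is exactly Ṫ_R·Ṫ/S², consistent with the prose description; the paper's precise coefficients in equations (3.2)–(3.6) may differ, and the spectral smoothing step is omitted:

```python
import numpy as np

def ar1_perturbation(t_dot, t_ref, tau, dt, n_steps, s=1.0, seed=0):
    """Red-noise tendency perturbation with decorrelation time tau, whose
    forcing amplitude is set by the geometric mean of the model tendency
    t_dot and the reference tendency t_ref.  Constants are chosen so the
    asymptotic variance is exactly t_ref*t_dot/s**2 (a sketch, not the
    paper's exact discretization)."""
    rng = np.random.default_rng(seed)
    phi = 1.0 - dt / tau
    amp = np.sqrt((1.0 - phi**2) * t_ref * t_dot) / s
    noise = rng.standard_normal(n_steps)
    pert = np.empty(n_steps)
    # Start from the asymptotic distribution to avoid spin-up.
    pert[0] = noise[0] * np.sqrt(t_ref * t_dot) / s
    for n in range(1, n_steps):
        pert[n] = phi * pert[n - 1] + amp * noise[n]
    return pert

# Stationary standard deviation matches sqrt(t_ref * t_dot): the
# variance-proportional-to-mean behaviour found in the coarse-graining.
p = ar1_perturbation(t_dot=2.0, t_ref=0.35, tau=6 * 3600.0, dt=3600.0,
                     n_steps=200_000)
assert abs(p.std() - np.sqrt(0.35 * 2.0)) < 0.03
```

In the full scheme, the equivalent of dividing by S and smoothing the field spectrally restores the target variance at the chosen correlation scale.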
Figure 3 shows the spatial distribution of the temperature increment at approximately 400 hPa generated by this scheme in a T159 forecast with a 1 h time step. As expected, at the 400 hPa level, convective heating dominates in the tropics and so the largest temperature tendency perturbations are found there.
(c) Tests in the ECMWF ensemble prediction system
The stochastic scheme defined above (labelled as stoch diab in figures 4 and 5) has been introduced into the ECMWF IFS and its influence on spread and skill in ECMWF EPS forecasts is assessed relative to three reference cases, namely ensembles with no stochastic parametrization, with SPPT only, and with the operational stochastic configuration.
As the coarse-graining studies showed that temperature tendencies associated with radiation do not follow the variance-proportional-to-the-mean relationship, only the convective and large-scale condensation contributions to the temperature tendency are used to create perturbations. Unlike the SPPT scheme, only temperature is perturbed in these forecasts and, for this and other reasons, the scheme falls short of competing with the operational set of stochastic parametrizations. Our aim is merely to show some preliminary results using a scheme that, in contrast to SPPT, is based on the coarse-graining results rather than ad hoc tuning.
Fifty-one-member ensemble forecasts are made at T399 horizontal resolution, with 91 levels, using cycle 38R2 of the ECMWF IFS. Initial perturbations are generated from singular vectors and the forecasts are run out to day 10. The scheme parameters are the constant reference tendency Ṫ_R (in K/s, determined from the coarse-graining results), L_R = 500 km and τ = 6 h, applied in the same way to both the convection and large-scale condensation parametrizations. Spread is defined as the RMS difference between the ensemble-mean and member forecasts, and skill is defined as the RMSE of the ensemble mean with respect to ECMWF analyses. Probabilistic skill is represented here by the continuous ranked probability skill score (CRPSS). In the interests of brevity, results will only be shown for temperature at 850 hPa, averaged separately over the Northern Hemisphere extratropics and the tropics.
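These verification measures can be stated compactly. The CRPS estimator below is the standard ensemble form, an assumption since the text does not give the formula; the calibration check at the end uses synthetic data:

```python
import numpy as np

def ensemble_spread(ens):
    """RMS difference between the ensemble mean and the member forecasts
    (the spread definition used in the text); ens is (members, points)."""
    return np.sqrt(((ens - ens.mean(axis=0)) ** 2).mean())

def ensemble_mean_rmse(ens, analysis):
    """RMSE of the ensemble mean with respect to the verifying analysis."""
    return np.sqrt(((ens.mean(axis=0) - analysis) ** 2).mean())

def crps(members, obs):
    """Ensemble estimator of the continuous ranked probability score,
    CRPS = E|X - y| - 0.5 E|X - X'|; the CRPSS shown in the figures is
    1 - CRPS/CRPS_ref for some reference forecast."""
    members = np.asarray(members, dtype=float)
    t1 = np.abs(members - obs).mean()
    t2 = np.abs(members[:, None] - members[None, :]).mean()
    return t1 - 0.5 * t2

# Synthetic check with a statistically calibrated 51-member ensemble:
# spread and ensemble-mean RMSE should then be nearly equal.
rng = np.random.default_rng(2)
centre = rng.standard_normal(20_000)
analysis = centre + rng.standard_normal(20_000)
ens = centre[None, :] + rng.standard_normal((51, 20_000))
assert abs(ensemble_spread(ens) - ensemble_mean_rmse(ens, analysis)) < 0.05
assert crps(rng.standard_normal(51), 0.0) > 0.0
```

An underspread ensemble, as reported below for T850, is one for which the first of these two quantities falls below the second.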
Figure 4a shows the time evolution of spread and skill of forecast temperature at 850 hPa (T850) averaged over the northern extratropics for the above-defined four cases. The curves are averages over 21 ensemble forecasts (to obtain sufficient statistical significance), starting at 12Z on alternate days between 15 October and 24 November 2012 and running out to 10 days. All ensemble forecasts for T850 are underspread in the sense that the spread is less than the error of the ensemble mean. The increase in spread created by the new stochastic forcing scheme described here is substantially less than in the operational and SPPT-only cases. This is to be expected since temperature tendency uncertainty is only addressed in parametrized convection and large-scale water phase changes, which means that a direct comparison between the new scheme and either SPPT case would be inappropriate. Figure 4b shows the effect of these stochastic parametrizations on the CRPSS. The size of the skill improvement is seen to be determined by the corresponding increase in spread, with the new scheme making a significant contribution. In the tropics, the degree to which the ensemble forecasts are underspread is much greater (figure 5), and it can be seen that the new scheme, based quantitatively on the coarse-graining results, makes a useful impact in increasing spread and skill.
4. Summary and conclusion
The ECMWF IFS has been used to quantify parametrization uncertainty at the target T159 resolution using forecasts made at T1279 as a truth reference. Clearly, a deeper level of parametrization error may well exist in both the T159 and T1279 forecasts, and so this definition of error is rather limited (e.g. multi-scale convection parametrization in the tropics and its subtle coupling to equatorial wave motion). The error estimated here can therefore be considered a lower bound on the real parametrization uncertainty and, because of the coarse-graining methodology, one that is mainly due to horizontal resolution. Furthermore, rather than follow the usual ad hoc methodology used in developing stochastic parametrizations like SPPT  and stochastic kinetic energy backscatter , a new stochastic scheme has been developed here that is at least partially based on quantitative estimates of parametrization uncertainty and its dependence on parametrization strength.
The coarse-graining methodology employed involves comparing parametrization tendencies from sets of forecasts made at very different horizontal resolutions after smoothing to a common resolution. Differences are interpreted as error at the lower (target) of the two resolutions, and the PDF of this error is quantified as a function of tendency magnitude. It has been shown that the statistics of the error (or uncertainty) owing to the convection and large-scale condensation parametrizations have some similarity to the Poisson distribution, and even more to the PDF derived by Craig & Cohen  in their statistical convection theory. An important finding is that the variance of the coarse-grained temperature tendencies in the truth high-resolution forecasts is proportional to the mean tendency in the target lower-resolution forecasts when grid points are sampled according to narrow ranges of the latter. Current perturbed parametrization tendency schemes like SPPT, however, assume that the standard deviation of the tendencies is proportional to the mean tendency.
The new stochastic parametrization scheme proposed here introduces random noise at each grid point as an AR1 process by perturbing parametrization tendencies in proportion to the square root of their magnitude (making the tendency variance proportional to the mean tendency), and then smoothing the resulting perturbation field to a judiciously chosen scale (e.g. 500 km). Preliminary tests in the ECMWF EPS show the potential of the new scheme, although it is not competitive with the operational stochastic parametrization schemes in this form. Achieving that would involve extending the perturbations to at least the momentum and water vapour tendencies, using coarse-graining results to guide the choice of reference tendencies, and accounting for uncertainty in the whole set of parametrizations rather than only convection and large-scale condensation.
A limitation of this study is that the coarse-graining in the truth model is of parametrized convection tendencies rather than of tendencies associated with explicitly resolved convection. Increasing computer power will soon make it possible to execute the coarse-graining procedure with a high-resolution truth model that does not require deep convection parametrization (this is certainly possible now for limited-area convection-permitting NWP models, and the authors expect to follow this line of research in the foreseeable future). When such data become available, they may encourage those engaged in parametrization development to redesign their schemes to be stochastic in nature.
One contribution of 14 to a Theme Issue ‘Stochastic modelling and energy-efficient computing for weather and climate prediction’.
© 2014 The Author(s). Published by the Royal Society. All rights reserved.