The development of ensemble-based ‘probabilistic’ climate forecasts is often seen as a promising avenue for climate scientists. Ensemble-based methods allow scientists to produce more informative, nuanced forecasts of climate variables by reflecting uncertainty from various sources, such as agreement with observations and model uncertainty. However, these developments present challenges as well as opportunities, particularly surrounding issues of experimental design and the interpretation of forecast results. This paper discusses different approaches and attempts to set out what climateprediction.net and other large-ensemble, complex-model experiments might contribute to this research programme.
One contribution of 13 to a Theme Issue ‘Ensembles and probabilities: a new era in the prediction of climate change’.
Oreskes et al. (1994) point out that the processes scientists usually regard as ‘verification’ are actually forms of ‘confirmation’. Stainforth et al. (2007a) adopt this terminology; we stick to the phrasing most familiar to climate scientists.
Note that this is not the problem of different theories offering the same predictions (see Papineau (1996) and essays therein). In this case, the predictions differ; but we cannot choose between them.
We may still be wrong, for Humean reasons. Passengers on various de Havilland Comets in the early 1950s were wrong in their risk assessments because the models employed by de Havilland designers did not include sufficient representation of metal fatigue.
A ‘uniform prior’ essentially amounts to an evenly sampled likelihood weighting in which each model is given equal weight before the data are applied. Its simplicity has proved of enduring appeal, but Rougier (2006), for instance, gives some reasons why it might not be an attractive choice as a belief function.
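The mechanics of this footnote can be made concrete with a minimal sketch. The ensemble members, their misfits to an observation, and the Gaussian likelihood are all invented for illustration; nothing here reflects the actual climateprediction.net analysis.

```python
import math

# Hypothetical illustration: three ensemble members, each with a
# misfit ("error") between its output and an observation.
errors = [0.5, 1.0, 2.0]
n = len(errors)

# Uniform prior: every member gets equal weight before data are applied.
prior = [1.0 / n] * n

# An assumed Gaussian likelihood of the observation given each member
# (unit error variance, purely for illustration).
likelihood = [math.exp(-0.5 * e ** 2) for e in errors]

# Posterior weights: prior times likelihood, normalised.
unnormalised = [p * l for p, l in zip(prior, likelihood)]
total = sum(unnormalised)
posterior = [w / total for w in unnormalised]

print(posterior)  # members closer to the observation receive more weight
```

With a uniform prior the posterior ordering is driven entirely by the likelihood; a different prior (a different belief function, in Rougier's terms) would redistribute these weights before the data are applied.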
Something like ‘consensus intersubjective estimate’ would seem an accurate, if unwieldy, description.
In fact, perhaps the surest way for the man to lose money on his bets would be to treat the model as structurally perfect, subject only to parametric error, and fail even to try to account for the fact that the real-world climatology is not replicated in his ensemble. Say the model has a systematic bias that makes its Asian monsoon too weak, and that this bias persists across the entire ensemble. If he perturbs parameters in his model, applies his preferred prior and then places bets on the strength of the Asian monsoon, knowing that the model is in error there but not accounting for it, then he is going to lose in the long run against someone who makes some reasonable attempt to account for systematic bias.
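A toy calculation makes the point. All numbers below (the observed climatology, the hindcast and forecast values, and the mean-shift correction) are invented for the sketch; real bias correction for something like the Asian monsoon would be far more involved.

```python
# Hypothetical sketch: every ensemble member shares a systematic bias
# (its monsoon index is too weak relative to observations).
observed_climatology = 10.0                  # observed mean index (invented)
ensemble_hindcast = [7.8, 8.1, 8.4, 7.9]     # members' hindcast of the same index
ensemble_forecast = [8.0, 8.5, 8.2, 8.3]     # members' forecast for the target period

# The naive bettor treats the model as structurally perfect and bets
# on the raw ensemble-mean forecast.
naive_estimate = sum(ensemble_forecast) / len(ensemble_forecast)

# A simple attempt to account for systematic bias: remove the
# ensemble-wide hindcast bias relative to the observed climatology.
bias = sum(ensemble_hindcast) / len(ensemble_hindcast) - observed_climatology
corrected_estimate = naive_estimate - bias

print(naive_estimate, corrected_estimate)
```

Because the hindcast mean sits well below the observed climatology, the corrected estimate is shifted upward; the naive bettor, ignoring the shared bias, systematically underestimates the monsoon strength.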
It is tempting, but ambiguous, to refer to the ‘reality’ of the lacuna.
- © 2007 The Royal Society