In his biography of the great twentieth-century theoretical physicist Richard Feynman, James Gleick writes:
‘He [Feynman] believed in the primacy of doubt, not as a blemish on our ability to know, but as the essence of knowing.’
That is to say, Feynman recognized that doubt, or uncertainty, is an inherent part of the scientific method, and therefore an inherent element of scientific prediction. By contrast, predictions that have little or no basis in science can often be recognized as such by their complete failure to acknowledge the existence of uncertainty.
Despite this key role of uncertainty in science, there have been few meetings where working scientists from different disciplines have come together to discuss and compare the methods by which they handle and communicate uncertainty. The Discussion Meeting at the Royal Society on 22–23 March 2010 provided a unique opportunity for such a discussion and comparison. Indeed, this may have been one of the most interdisciplinary Royal Society Discussion Meetings for quite some time, with participation from leaders in the fields of theoretical physics, statistics, philosophy, mathematics, cosmology, health, economics, climate, as well as from the media, government and business.
At the time the meeting was being held, the world was in the grip of what was collectively called ‘climategate’, a number of events (such as an e-mail hack at the University of East Anglia) which at face value appeared to undermine the science of climate change. Some might have imagined that this Discussion Meeting provided climate scientists with an opportunity to admit that the science of climate change was not as certain as had been claimed, for example, in the Intergovernmental Panel on Climate Change (IPCC) assessment reports. In fact, quite the reverse: following the work of one of the pioneers of chaos theory, the meteorologist Ed Lorenz, meteorologists over the last decade or so have been pioneering techniques to predict uncertainty in the evolution of complex nonlinear systems, and this work is well described in the IPCC assessment reports. The Discussion Meeting, conceived and organized many months before ‘climategate’ broke onto the world stage, thus provided an opportunity to showcase the techniques used to predict uncertainty in weather and climate prediction.
Within classical physics, with its deterministic and precisely known laws, the evolution of many systems is nevertheless uncertain because these laws are chaotic—weather prediction is but one example. For other systems, such as the economy, we have no underlying ‘laws’, and, additionally, the predictions can themselves influence the system about which these predictions are made.1 However, despite these differences, economists and meteorologists can learn from one another in the tools they use for estimating and communicating uncertainty in their predictions. Fan charts showing an ensemble weather prediction of temperature over London in the coming days, or of the El Niño event in the coming months (figure 1), and the Bank of England's probabilistic predictions of the inflation rate or gross domestic product (figure 2) are quite similar in concept and are not completely different in analytical background. The sorts of problems inherent in predicting the probability of a systemic breakdown in the economy, or of the outbreak of life-threatening pandemic flu, have parallels with those in predicting the likelihood of exceptionally damaging, but otherwise inherently rare, weather events.
Uncertainty is not only inherent in predicting chaotic classical systems, it also appears to be basic to our descriptions of the fundamental laws of physics—as described in the measurement axiom of standard quantum mechanics. Because of this, the conventional view is that Einstein was wrong to believe in the existence of an objectively certain ‘reality’ at the deepest level. However, as was discussed at the meeting, the matter is far from settled, despite quantum theory's unquestioned success in describing the world around us. One cannot help wondering whether our inability to understand fully the origin of uncertainty at this most basic level has ramifications elsewhere, for example, in unifying quantum theory with Einstein's theory of general relativity.
Communicating uncertainty outside the scientific arena is an important issue. Some view the notion of probabilistic prediction as too complex for the public to understand. However, the public knows not only that a horse rated at 100-1 against is less likely to win than a horse rated as 2-1 on, but also that the latter is itself not a racing certainty to win. Hence they understand that if one loses a thousand pounds betting on a horse rated as 2-1 on, there is little point complaining to the bookies. Similarly, a forecast of 66 per cent chance of a ‘BBQ summer’ does not guarantee that it will happen. So why did the Met Office come in for such ridicule in the press, when making a probabilistic prediction for the summer of 2009? Perhaps it is the media rather than the public that does not understand (or maybe chooses not to understand) these matters.
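The betting arithmetic behind this intuition can be made explicit. The hypothetical helper below (not from any real bookmaking library) converts fractional odds to an implied win probability, ignoring the bookmaker's margin:

```python
def implied_probability(a, b, on=False):
    """Implied win probability from fractional odds 'a-b', ignoring the
    bookmaker's margin.

    'a-b against' (e.g. 100-1): stake b units to win a, so the implied
    probability of winning is b / (a + b).
    'a-b on' (e.g. 2-1 on): the odds are reversed, stake a units to win b,
    so the implied probability is a / (a + b).
    """
    return a / (a + b) if on else b / (a + b)
```

So a 100-1 outsider carries an implied probability of about 1 per cent, while a horse at 2-1 on carries about 67 per cent: likely, but far from a racing certainty.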
In fact, if the public were more exposed to weather prediction as inherently probabilistic, perhaps there would be more acceptance of the simple fact that one's view about the risk of dangerous climate change should not be framed in the black and white terms of ‘belief’ and ‘scepticism’. For one thing, belief should have no real role in science, and, in truth, all good scientists are inherently sceptical people.
However, we can go further than this. Let us accept that we cannot be certain whether or not anthropogenic emissions of greenhouse gases will lead to unquestionably dangerous changes to climate in the next century. However, consider the question: how probable would such dangerous climate change have to be for it to warrant some mitigating action now, to limit anthropogenic emissions? By ‘unquestionably dangerous’, we could mean the complete loss of the Amazonian rainforest owing to shifting rain patterns, or large parts of Bangladesh becoming uninhabitable owing to persistently intense monsoons, storm surges and substantial sea-level rise, or permanent Sahelian drought of the type seen in the 1980s. How probable must such an outcome be before mitigating action can be justified: 50, 10, 1 or 0.1 per cent?
This brings us to the second part of the problem: making decisions in the light of uncertain scientific input. Better decisions can be made using predictions that have a properly quantified estimate of uncertainty than using over-confident predictions with no estimate of uncertainty. But decisions can only be made easily if one can value the different probabilistic alternatives. In many situations, this may be a relatively simple economic matter. If the wind speed exceeds 20 m s−1, a wind turbine typically cannot operate safely and therefore must be shut down. A probabilistic forecast of wind, including the probability that the wind speed will exceed 20 m s−1, can be converted into a rational and objective decision about how much electricity a wind farm should contract to produce (bearing in mind that the costs of buying electricity on the spot market, should wind speeds exceed 20 m s−1, can be substantial compared with profits when wind speeds are within the operating range). But how do we value the loss of the Amazonian rainforest, or of large parts of Bangladesh, or of prolonged Sahelian drought? Clearly, it is not a purely economic calculation, but involves issues more directly related to human suffering and the destruction of things that we hold intrinsically dear. Estimating value in this generalized sense, including the thorny (and ultimately ethical) issue of whether the suffering of future generations should somehow be discounted, is clearly an extremely challenging issue for all of us. Nevertheless, these challenges should not deflect scientists and governments alike from ensuring that we are doing all that is humanly possible: first, to estimate uncertainties in future climate change as accurately as possible, and second, to reduce these uncertainties—a large element of which lies in improving the computational representations of the equations of climate—wherever we can.
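The wind-farm decision described above reduces to an expected-value calculation. The following is a minimal sketch under simplifying assumptions of my own (hypothetical prices, and an all-or-nothing contracting rule; a real trading desk would optimize over contract sizes and price curves):

```python
def expected_profit_per_mwh(p_shutdown, sale_price, spot_price):
    """Expected profit per contracted MWh of wind power.

    With probability 1 - p_shutdown the wind stays in the operating range
    and the farm delivers, earning sale_price. With probability p_shutdown
    the wind exceeds the cut-out speed (e.g. 20 m/s), the turbines shut
    down, and the farm must buy the contracted energy on the spot market
    at spot_price while still receiving sale_price from the contract.
    """
    return ((1 - p_shutdown) * sale_price
            + p_shutdown * (sale_price - spot_price))

def contract_decision(p_shutdown, capacity_mwh, sale_price, spot_price):
    """Contract full capacity when the expected profit per MWh is positive,
    otherwise contract nothing (a deliberately crude all-or-nothing rule)."""
    if expected_profit_per_mwh(p_shutdown, sale_price, spot_price) > 0:
        return capacity_mwh
    return 0.0
```

The expected profit per MWh simplifies to sale_price minus p_shutdown times spot_price, so contracting pays only while the forecast shutdown probability stays below the ratio of sale price to spot price. This is exactly where a probabilistic wind forecast earns its keep: a deterministic forecast gives no p_shutdown to plug in.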
Whether it is in guiding efforts to progress our knowledge and understanding of science, or in developing procedures for rational decision making, or in communicating our science to others, there is much that scientists can learn from one another in developing methodologies to handle uncertainty. It is therefore important that, from time to time, scientists come together to communicate across their disciplinary boundaries, and share experience and expertise in handling this key aspect of the scientific method. The 2010 Discussion Meeting provided an unrivalled opportunity to do just this. It is hoped that more will follow.
One contribution of 15 to a Discussion Meeting Issue ‘Handling uncertainty in science’.
↵1 It is sometimes said that the difference between a weather forecast and an economic forecast is that a weather forecast cannot change the weather, whereas an economic forecast can change the economy. Matters are less clear cut these days, now that a climate-change forecast can in principle change societal carbon footprints, and hence climate.
This journal is © 2011 The Royal Society