On modelling as collective intelligence
By Keith Beven
- Book ID
- 102266498
- Publisher
- John Wiley and Sons
- Year
- 2001
- Language
- English
- File size
- 74 KB
- Volume
- 15
- Category
- Article
- ISSN
- 0885-6087
- DOI
- 10.1002/hyp.459
Synopsis
Attentive (and persevering) readers of these commentaries in HP Today may have noticed a common theme running through some of my earlier contributions. This is perhaps best summarized as the idea that, given the limitations of our measuring techniques, the aim of defining a model of a catchment system that properly describes the processes actually governing the catchment response is essentially unattainable. We are then left with an equifinality problem: that many different models or parameter sets might be consistent with the available data or, at least, provide simulations that are in some sense acceptable for the purposes of a particular application. Any model that is chosen as acceptable will therefore be wrong in detail, but is being used as a tool to extrapolate knowledge in time and/or space beyond the available measurements. Even the most complex physically based models are extrapolation tools in this sense. The hope is that, if we use the right theory, the extrapolation will be in some sense accurate.
Accuracy in extrapolation, however, depends not only upon our assumptions (whether or not we do indeed have the 'right' theory) but also on the boundary conditions and other information required in setting up the model runs. This is especially true when extrapolating outside the range of data available for model calibration or conditioning. Even a simple statistical regression analogy tells us that, even when we assume we can find a true model, as we move further away from the data points the standard errors of estimation get larger and larger. If we also need to change the nature of the boundary conditions, the uncertainties will be larger still.
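The regression analogy above can be made concrete with a minimal sketch (synthetic data, not taken from the paper): for an ordinary least-squares fit, the textbook standard error of a new prediction at a point x0 grows with the squared distance of x0 from the mean of the calibration data, so extrapolation far outside the data range is inherently more uncertain even when the model form is correct.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "calibration" data: a simple linear response with noise.
x = np.linspace(0.0, 10.0, 20)
y = 2.0 + 0.5 * x + rng.normal(0.0, 0.3, size=x.size)

# Ordinary least-squares fit of y = a + b*x.
b, a = np.polyfit(x, y, 1)          # polyfit returns [slope, intercept]
resid = y - (a + b * x)
n = x.size
s = np.sqrt(np.sum(resid**2) / (n - 2))   # residual standard error
sxx = np.sum((x - x.mean()) ** 2)

def pred_se(x0):
    """Standard error of a new prediction at x0 (textbook OLS formula)."""
    return s * np.sqrt(1.0 + 1.0 / n + (x0 - x.mean()) ** 2 / sxx)

# The standard error grows as we move away from the calibration data.
se_inside = pred_se(5.0)     # near the centre of the observations
se_outside = pred_se(30.0)   # well outside the observed range
```

Here `pred_se` implements the standard prediction-interval term s*sqrt(1 + 1/n + (x0 - xbar)^2/Sxx); the widening of that term with distance from the data is the point being made in the text.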
Thus it would be useful if models could have certain assumptions that are collectively agreed upon as acceptable for use in this way within the common aim of hydrological science of finding the true description of a hydrological reality. So why then do we have so many models? Some, it is true, are different implementations of the same equations, but many are based on quite differing assumptions (see Beven (2001)). It would seem that we have so many different models because they all pass some evaluation tests. In the statistical regression analogy this has been nicely illustrated by David Draper (1995), who fits a number of different regression models to a single data set and shows that they lead to very different predictions when extrapolated outside the range of the data. The models were all significant statistically, so that they were all contenders for the 'true' model. The extrapolations were rather important: they were predicting the failure of the O-ring seals on the Challenger space shuttle (although Draper's