Forecasting quality and information
By Klaus Brockhoff
- Publisher: John Wiley and Sons
- Year: 1984
- Language: English
- Size: 775 KB
- Volume: 3
- Category: Article
- ISSN: 0277-6693
## Synopsis
It is assumed that demand for information that subjectively appears to be relevant for forecasting improves forecasting quality. To study this hypothesis a number of forecasting experiments were conducted. Fifty managers from the housing business, from banking, and from a research institution were asked to forecast interest rates, using a Delphi process. They communicated via a computer system, and, to support their judgements, they had access to a data bank that was stored in the same system. Their communication with the system was automatically recorded. Part of the data collected in these experiments is used to study the existence of a relationship between information activities and forecasting results. A weak positive relationship is found if non-linear functions are tested, where information demand is corrected by those data retrievals that seem to have resulted from an inability to handle the information system. For further research a more general, albeit less informative, Boolean model is suggested.
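As a rough illustration of the kind of analysis the synopsis describes, the sketch below fits a simple non-linear (logarithmic) relation between a "corrected" information-demand measure and forecast error on synthetic data. The variable names, functional form, and data are assumptions chosen purely for illustration; they are not the paper's actual specification or results.

```python
# Hypothetical sketch, not the paper's method: relate corrected information
# demand to forecast error with a non-linear (log) model on synthetic data.
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(0)

# Synthetic stand-ins for 50 participants: raw data-bank retrievals, the share
# of retrievals attributed to trouble handling the system, and forecast error.
raw_retrievals = rng.poisson(lam=20, size=50).astype(float)
trouble_share = rng.uniform(0.0, 0.4, size=50)
corrected_demand = raw_retrievals * (1.0 - trouble_share)  # corrected information demand
forecast_error = 2.0 - 0.3 * np.log1p(corrected_demand) + rng.normal(0.0, 0.3, 50)

# Assumed non-linear form: error = a + b * log(1 + corrected demand)
def model(x, a, b):
    return a + b * np.log1p(x)

params, _ = optimize.curve_fit(model, corrected_demand, forecast_error)
# Positive rho here would mean higher corrected demand goes with lower error.
rho, pval = stats.spearmanr(corrected_demand, -forecast_error)
print(f"fitted a={params[0]:.2f}, b={params[1]:.2f}, Spearman rho={rho:.2f} (p={pval:.3f})")
```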
KEY WORDS: Forecasting quality; Information demand; Delphi groups; Computer dialogue; Interest rate forecasting

Practitioners often assume that decisions and forecasts may be improved by increasing the amount of information incorporated into these processes. On the contrary, some forecasts may suffer from information overload, as has been shown in a number of empirical studies. This should induce selective screening for 'relevant' information, which appears reasonable: complete, piecewise processing of all incoming messages may consume too much time, effort and resources to put the relevant bits of information to use. Theoretical studies on the optimal level of information have been based primarily on Bayesian analysis to study the consequences of new information, and on an explicit model of information usage to evaluate those consequences. This approach is not discussed here, as it has been shown that people have difficulties in correctly estimating subjective probabilities (Hogarth and Makridakis (1981) present a review of judgemental biases).
In the following section, three approaches that have been taken in the past to explain the use of information empirically are outlined. Then, the laboratory experiment that served to generate the data used to test one of these approaches is described. After that, a number of hypotheses are presented and tested. The final section suggests further research.
## Similar volumes
## Abstract Using the method of ARIMA forecasting with benchmarks developed in this paper, it is possible to obtain forecasts which take into account the historical information of a series, captured by an ARIMA model (Box and Jenkins, 1970), as well as partial prior information about the forecasts.
Often a forecaster has supplementary information (e.g. field reports or forecasts from another source) that cannot be included directly in a time series model. Especially interesting are cases where this information is given at time intervals that are different from those of the time series model fo
## Abstract The development of meso-γ scale numerical weather prediction (NWP) models requires a substantial investment in research, development and computational resources. Traditional objective verification of deterministic model output fails to demonstrate the added value of high-resolution fore