A comparative study of autoregressive neural network hybrids
By Tugba Taskaya-Temizel; Matthew C. Casey
- Publisher
- Elsevier Science
- Year
- 2005
- Language
- English
- File size
- 276 KB
- Volume
- 18
- Category
- Article
- ISSN
- 0893-6080
Synopsis
Many researchers have argued that combining multiple models for forecasting yields better estimates than single time series models. For example, a hybrid architecture comprising an autoregressive integrated moving average (ARIMA) model and a neural network is a well-known technique that has recently been shown to give better forecasts by exploiting each model's strengths. However, this assumption carries the danger of underestimating the relationship between the model's linear and non-linear components, particularly by assuming that the individual forecasting techniques are appropriate, say, for modeling the residuals. In this paper, we show that such combinations do not necessarily outperform the individual forecasts. On the contrary, we show that the combined forecast can underperform significantly compared with its constituents' performances. We demonstrate this using nine data sets, autoregressive linear models, and time-delay neural networks.
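The hybrid pattern the synopsis refers to decomposes a series into a linear part modeled by an (AR)IMA-type fit and a non-linear part modeled by a neural network trained on the linear model's residuals, with the combined forecast being the sum of the two. A minimal illustrative sketch of that pattern, not the paper's actual experimental setup: it substitutes a least-squares AR(1) fit for a full ARIMA model and a tiny hand-trained tanh network for the time-delay network, on synthetic data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic series: a linear AR(1) core plus a mild non-linearity and noise
n = 300
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.6 * y[t - 1] + 0.3 * np.sin(y[t - 1]) + 0.1 * rng.standard_normal()

# --- Stage 1: linear component, AR(1) coefficient fitted by least squares ---
X_lin = y[:-1].reshape(-1, 1)
target = y[1:]
phi, *_ = np.linalg.lstsq(X_lin, target, rcond=None)
linear_pred = X_lin @ phi          # one-step linear forecasts
resid = target - linear_pred       # residuals passed to the non-linear stage

# --- Stage 2: non-linear component, tiny tanh net on lagged residuals ---
lag, H, lr = 2, 4, 0.05
R = np.column_stack([resid[i:len(resid) - lag + i] for i in range(lag)])
r_target = resid[lag:]
W1 = 0.1 * rng.standard_normal((lag, H)); b1 = np.zeros(H)
W2 = 0.1 * rng.standard_normal(H); b2 = 0.0
for _ in range(2000):                      # plain batch gradient descent
    h = np.tanh(R @ W1 + b1)
    err = (h @ W2 + b2) - r_target
    gW2 = h.T @ err / len(err); gb2 = err.mean()
    dh = np.outer(err, W2) * (1 - h ** 2)  # backprop through tanh
    gW1 = R.T @ dh / len(err); gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

# --- Hybrid forecast: linear part plus the predicted residual ---
nn_pred = np.tanh(R @ W1 + b1) @ W2 + b2
hybrid_pred = linear_pred[lag:] + nn_pred

mse_linear = np.mean((target[lag:] - linear_pred[lag:]) ** 2)
mse_hybrid = np.mean((target[lag:] - hybrid_pred) ** 2)
print(f"linear-only in-sample MSE: {mse_linear:.5f}")
print(f"hybrid in-sample MSE:      {mse_hybrid:.5f}")
```

In-sample, the residual stage can only tighten the fit; the paper's point is that this additive decomposition need not carry over to out-of-sample performance, since the residual model may mostly fit noise.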
SIMILAR VOLUMES
The Hopfield neural network is a mathematical model in which each neuron performs a threshold logic function. An important property of the model is that a neural network always converges to a stable state when operating in a serial mode. This property is the basis of potential applications of neural