𝔖 Bobbio Scriptorium
✦   LIBER   ✦

MEASURING AND IMPROVING NEURAL NETWORK GENERALIZATION FOR MODEL UPDATING

โœ Scribed by R.I. LEVIN; N.A.J. LIEVEN; M.H. LOWENBERG


Publisher: Elsevier Science
Year: 2000
Tongue: English
Weight: 316 KB
Volume: 238
Category: Article
ISSN: 0022-460X

No coin nor oath required. For personal study only.

✦ Synopsis


This paper compares various techniques for measuring the generalization ability of a neural network used for model-updating purposes. An appropriate metric for measuring generalization ability is suggested, and it is used to investigate and compare various neural network architectures and training algorithms. The effect of noise on generalization ability is considered, and it is shown that the form of the noise does not appear important to the networks. This implies that the optimum training location may be obtained by considering a simple noise model such as Gaussian noise. Various radial basis function neurons and training algorithms are considered. Significant improvements to generalization ability are noted when the holdout and training data sets are merged before training the second layer of the network, once the network architecture has been decided. The Gaussian radial basis function is rejected as the radial basis function of choice, owing to uncertainty regarding an appropriate value for the spread constant. Several alternative radial basis functions without spread constants, such as the thin-plate spline, are noted to give excellent results. Finally, the use of jitter and committees to improve the generalization ability of networks is considered. Jitter is found neither to improve nor to degrade the results, while a committee of networks performs better than any single network. A good method of generating committee members is to split the available data evenly into multiple random holdout and training data sets.

© 2000 Academic Press
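
✦ Sketch in Code

The recipe in the synopsis is compact enough to sketch: spread-free thin-plate-spline neurons, a second layer trained by linear least squares on the merged holdout and training data, and a committee whose members come from different random holdout/training splits. The NumPy sketch below is an illustration under our own assumptions (random-subset centre selection, the number of centres and members, the 50/50 split are all illustrative choices), not the authors' implementation.

```python
import numpy as np

def thin_plate_spline(r):
    """phi(r) = r^2 ln r, taken as 0 at r = 0; no spread constant needed."""
    out = np.zeros_like(r)
    nz = r > 0
    out[nz] = r[nz] ** 2 * np.log(r[nz])
    return out

def design_matrix(X, centres):
    """Hidden-layer outputs: TPS response at each input-to-centre distance."""
    d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
    return thin_plate_spline(d)

def fit_output_layer(X, y, centres):
    """Second-layer weights by linear least squares."""
    w, *_ = np.linalg.lstsq(design_matrix(X, centres), y, rcond=None)
    return w

def train_committee(X, y, n_centres=20, n_members=5, seed=0):
    """Each member sees a different random training/holdout split.

    The split fixes the member's architecture (here: which inputs become
    centres, a stand-in for whatever selection the holdout set drives);
    the output layer is then refitted on the merged training + holdout
    data, as the paper recommends.
    """
    rng = np.random.default_rng(seed)
    members = []
    for _ in range(n_members):
        train_idx = rng.permutation(len(X))[: len(X) // 2]  # 50/50 split
        centres = X[rng.choice(train_idx, n_centres, replace=False)]
        members.append((centres, fit_output_layer(X, y, centres)))
    return members

def committee_predict(members, X):
    """Committee output: the mean of the member networks' predictions."""
    return np.mean([design_matrix(X, c) @ w for c, w in members], axis=0)
```

Averaging the members is the simplest committee rule: networks grown from different random splits make partly uncorrelated errors, which is consistent with the paper's finding that a committee outperforms any single network.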


📜 SIMILAR VOLUMES


SELECTION OF TRAINING SAMPLES FOR MODEL UPDATING
✎ C.C. CHANG; T.Y.P. CHANG; Y.G. XU; W.M. TO 📂 Article 📅 2002 🏛 Elsevier Science 🌐 English ⚖ 362 KB

One unique feature of neural networks is that they have to be trained to function. In developing an iterative neural network technique for model updating of structures, it has been shown that the number of training samples required increases exponentially as the number of parameters to be updated in…

Structure selective updating for nonlinear models and radial basis function neural networks
✎ W. Luo; S. A. Billings 📂 Article 📅 1998 🏛 John Wiley and Sons 🌐 English ⚖ 242 KB 👁 2 views

Selective model structure and parameter updating algorithms are introduced for both the on-line estimation of NARMAX models and the training of radial basis function neural networks. Techniques for on-line model modification, which depend on the vector-shift properties of regression variables in linear m…

Conceptual fuzzy neural network model for water quality simulation
✎ Paulo Chaves; Toshiharu Kojiri 📂 Article 📅 2007 🏛 John Wiley and Sons 🌐 English ⚖ 474 KB

Artificial neural networks (ANNs) have been applied successfully in various fields. However, ANN models depend on large sets of historical data, and are of limited use when only vague and uncertain information is available, which leads to difficulties in defining the model architecture…