
Weight decay backpropagation for noisy data

โœ Scribed by Amit Gupta; Siuwa M. Lam


Publisher: Elsevier Science
Year: 1998
Language: English
File size: 283 KB
Volume: 11
Category: Article
ISSN: 0893-6080


✦ Synopsis


We present a formal evaluation of the effect of weight decay training for backpropagation on noisy data sets. Weight decay training is proposed as a means of obtaining a robust neural network that is insensitive to noise. We investigate three noisy situations: noisy training set with clean test set, clean training set with noisy test set, and noisy training set with noisy test set. Statistically, there is strong evidence that the noisy-training/clean-test situation yields more accurate prediction than the other two noisy situations. This finding suggests the relative importance of keeping the to-be-predicted cases noise-free for neural network classification. Furthermore, experimental results show that weight decay training is at least as good as standard backpropagation in noisy situations, and on some data sets it outperforms standard backpropagation by a significant margin. For clean data sets, however, there is no significant difference between weight decay training and standard backpropagation. Another interesting finding in this study is the effect of the number of training epochs on weight decay training and standard backpropagation in noisy situations. Weight decay training can achieve convergence after a short training period, and for the same short training it usually outperforms standard backpropagation. When additional training has a significant effect on performance, it tends to improve standard backpropagation but to degrade weight decay training.
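The weight decay scheme the abstract discusses is commonly formulated by adding a penalty term to the gradient: standard backpropagation updates a weight as w ← w − η·∂E/∂w, while weight decay uses w ← w − η·(∂E/∂w + λw), shrinking weights toward zero and damping the fit to noise. The following sketch illustrates that formulation on a single sigmoid unit; the network, data, and all constants are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, lam=0.0, eta=0.5, epochs=200, seed=0):
    """Train one sigmoid unit with squared error.
    lam=0 corresponds to standard backpropagation;
    lam>0 corresponds to weight decay training."""
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=0.1, size=X.shape[1])
    n = len(y)
    for _ in range(epochs):
        out = sigmoid(X @ w)
        # Mean gradient of 0.5*(out - y)^2 w.r.t. w
        grad = X.T @ ((out - y) * out * (1.0 - out)) / n
        w -= eta * (grad + lam * w)  # lam*w is the decay term
    return w

# Toy noisy data: the target depends only on the first input,
# so the other two weights can only fit noise.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
y = (X[:, 0] > 0).astype(float)

w_plain = train(X, y, lam=0.0)
w_decay = train(X, y, lam=0.1)
# Weight decay should yield a smaller-norm solution.
print(np.linalg.norm(w_plain), np.linalg.norm(w_decay))
```

With λ > 0 the update only reaches equilibrium where the error gradient balances the decay pull toward zero, which is why the decayed solution has the smaller weight norm.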

