Training neural net classifier to improve generalization capability
By Masahiro Kayama; Shigeo Abe
- Book ID
- 104591621
- Publisher
- John Wiley and Sons
- Year
- 1994
- Language
- English
- File size
- 808 KB
- Volume
- 25
- Category
- Article
- ISSN
- 0882-1666
Abstract
A training method for neural net classifiers is discussed from the viewpoint of improving their generalization capability. First, the conventional training method, which minimizes the sum of squared differences between the network outputs and the training outputs, is shown to be inappropriate: the category boundaries it produces may fall close to some specific clusters, which decreases the generalization capability of the network. To obtain boundaries that are impartial to all clusters, a new method is proposed that adds appropriate random numbers to the training inputs and decreases their amplitude to zero as training proceeds. The effectiveness of this method is demonstrated by simulations of alphabet and practical number recognition systems. The proposed method is useful especially when the quality and quantity of training data are insufficient, in particular when a classification system must be constructed quickly from a few representative training data, or when obtaining voluminous training data would require considerable time or money.
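The core idea in the abstract — add random numbers to the training inputs and shrink their amplitude to zero as training proceeds — can be illustrated with a minimal sketch. The example below uses a simple logistic-regression classifier trained by gradient descent on toy two-cluster data; the function name `train_with_decaying_noise`, the Gaussian noise, and the linear decay schedule are assumptions for illustration, not the exact procedure from the paper.

```python
import numpy as np

def train_with_decaying_noise(X, y, epochs=200, lr=0.5, sigma0=0.3, seed=0):
    """Train a logistic-regression classifier, perturbing the inputs each
    epoch with Gaussian noise whose amplitude decays linearly to zero
    (an assumed schedule illustrating the paper's idea)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    b = 0.0
    for epoch in range(epochs):
        # Noise amplitude shrinks from sigma0 toward zero as training proceeds.
        sigma = sigma0 * (1.0 - epoch / epochs)
        Xn = X + rng.normal(0.0, sigma, size=X.shape)
        # Standard logistic-regression gradient step on the noisy inputs.
        p = 1.0 / (1.0 + np.exp(-(Xn @ w + b)))
        grad = p - y
        w -= lr * Xn.T @ grad / n
        b -= lr * grad.mean()
    return w, b

# Toy two-cluster, two-class data.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-1, 0.2, (20, 2)), rng.normal(1, 0.2, (20, 2))])
y = np.concatenate([np.zeros(20), np.ones(20)])

w, b = train_with_decaying_noise(X, y)
acc = np.mean(((X @ w + b) > 0) == y)
```

Early in training, the injected noise effectively inflates each cluster, pushing the decision boundary away from all of them; as the noise decays to zero, training finishes on the clean data, so the final boundary still fits the original samples.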
SIMILAR VOLUMES
An algorithm is derived for supervised training in multilayer feedforward neural networks. Relative to the gradient descent backpropagation algorithm it appears to give both faster convergence and improved generalization, whilst preserving the system of backpropagating errors through the network.
This paper describes a fast training algorithm for feedforward neural nets, as applied to a two-layer neural network to classify segments of speech as voiced, unvoiced, or silence. The speech classification method is based on five features computed for each speech segment and used as input to the ne