Artificial neural networks--especially those using the error back-propagation algorithm--are capable of learning to control an unknown plant by autonomously extracting the necessary information from the plant. Following the approach of Psaltis, Sideris, and Yamamura, and Saerens and Soquet, a co
An adaptive training algorithm for back propagation networks
By L.-W. Chan; F. Fallside
- Publisher
- Elsevier Science
- Year
- 1987
- Language
- English
- File size
- 628 KB
- Volume
- 2
- Category
- Article
- ISSN
- 0885-2308
Synopsis
The effect of the coefficients used in the conventional back propagation algorithm on training connectionist models is discussed, using a vowel recognition task in speech processing as an example. Some weaknesses of the use of fixed coefficients are described and an adaptive algorithm using variable coefficients is presented. This is found to be efficient and robust in comparison with the fixed parameter case, to give fast near optimal training and to avoid trial and error choice of fixed coefficients. It has also been successfully used in a vision processing application.
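The synopsis describes replacing the fixed learning-rate and momentum coefficients of conventional back propagation with coefficients that adapt during training. As a minimal illustration of the general idea, the sketch below uses the well-known "bold driver" heuristic: grow the learning rate while the error keeps falling, and shrink it (rejecting the step) when the error rises. This is an assumed, illustrative rule for exposition, not necessarily the exact adaptive algorithm of the paper, and the loss function, constants, and function names are all hypothetical.

```python
import numpy as np

def adaptive_gradient_descent(loss_fn, grad_fn, w, lr=0.1, steps=200):
    """Gradient descent with a bold-driver adaptive learning rate.

    Illustrative only: the growth/decay factors (1.1 and 0.5) are
    conventional choices, not values taken from the paper.
    """
    loss = loss_fn(w)
    for _ in range(steps):
        w_new = w - lr * grad_fn(w)
        new_loss = loss_fn(w_new)
        if new_loss < loss:
            # Step reduced the error: accept it and speed up.
            w, loss = w_new, new_loss
            lr *= 1.1
        else:
            # Step increased the error: reject it and slow down.
            lr *= 0.5
    return w

# Toy quadratic problem standing in for a network's error surface.
target = np.array([3.0, -2.0])
loss_fn = lambda w: float(np.sum((w - target) ** 2))
grad_fn = lambda w: 2.0 * (w - target)
w_final = adaptive_gradient_descent(loss_fn, grad_fn, np.zeros(2))
```

Because the rate adapts, no trial-and-error tuning of a fixed coefficient is needed: a poor initial choice is corrected automatically as training proceeds, which is the practical benefit the synopsis claims for variable coefficients.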
SIMILAR VOLUMES
The multiple training concept first applied to Bidirectional Associative Memory training is applied to the back-propagation (BP) algorithm for use in associative memories. This new algorithm, which assigns different weights to the various pairs in the energy function, is called multiple training back-propa
In this paper a method for adapting the step size in on-line network training is presented. The proposed technique derives from the stochastic gradient descent proposed by Almeida et al. [On-line Learning in Neural Networks, 111-134, Cambridge University Press, 1998]. The new aspect of our approach c
Two main problems for the neural network (NN) paradigm are discussed: the output value interpretation and the symbolic content of the connection matrix. In this article, we construct a solution for a very common architecture of pattern associators: the backpropagation networks. First, we show how Za