𝔖 Bobbio Scriptorium
✦   LIBER   ✦

An adaptive training algorithm for back propagation networks

โœ Scribed by L.-W. Chan; F. Fallside


Publisher
Elsevier Science
Year
1987
Tongue
English
Weight
628 KB
Volume
2
Category
Article
ISSN
0885-2308

No coin nor oath required. For personal study only.

✦ Synopsis


The effect of the coefficients used in the conventional back propagation algorithm on training connectionist models is discussed, using a vowel recognition task in speech processing as an example. Some weaknesses of the use of fixed coefficients are described, and an adaptive algorithm using variable coefficients is presented. This is found to be efficient and robust in comparison with the fixed-coefficient case, to give fast, near-optimal training, and to avoid trial-and-error choice of fixed coefficients. It has also been used successfully in a vision processing application.
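
As an illustration of the general idea, the following is a minimal Python sketch of gradient descent with a variable coefficient rather than a fixed one: the learning rate grows while successive updates keep pointing the same way and shrinks when they start to oppose each other. The toy quadratic loss, the agreement test, and the constants 1.05 and 0.7 are assumptions chosen for illustration; they are not the specific adaptation rule given in the paper.

import numpy as np

def loss_and_grad(w):
    # Toy quadratic "network error" E(w) = 0.5 * w^T A w with an
    # ill-conditioned A, standing in for a hard error surface.
    A = np.diag([1.0, 10.0])
    return 0.5 * w @ A @ w, A @ w

w = np.array([1.0, 1.0])
lr = 0.01                       # initial coefficient (assumed value)
prev_step = np.zeros_like(w)

for epoch in range(100):
    E, g = loss_and_grad(w)
    if epoch:
        # Grow the coefficient while the new descent direction agrees
        # with the last step (steady progress); shrink it when they
        # disagree (oscillation across a valley).
        if prev_step @ (-g) > 0:
            lr *= 1.05
        else:
            lr *= 0.7
    step = -lr * g
    w += step
    prev_step = step

print(f"final error {loss_and_grad(w)[0]:.2e}, final coefficient {lr:.4f}")

On a surface like this, the coefficient ramps itself up along the shallow direction and backs off when the stiff direction starts to oscillate, which is what lets such schemes avoid the trial-and-error tuning a fixed coefficient would require.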


📜 SIMILAR VOLUMES


Adaptive control of dynamic systems by b
โœ Wolfram H. Schiffmann; H. Willi Geffers ๐Ÿ“‚ Article ๐Ÿ“… 1993 ๐Ÿ› Elsevier Science ๐ŸŒ English โš– 624 KB

Artificial neural networks--especially those using the error back-propagation algorithm--are capable of learning to control an unknown plant by autonomously extracting the necessary information from the plant. Following the approach of Psaltis, Sideris, and Yamamura, and Saerens and Soquet, a co

Multiple training concept for back-propa
โœ Yeou-Fang Wang; Jose B. Cruz Jr.; J.H. Mulligan Jr. ๐Ÿ“‚ Article ๐Ÿ“… 1993 ๐Ÿ› Elsevier Science ๐ŸŒ English โš– 517 KB

The multiple training concept first applied to Bidirectional Associative Memory training is applied to the back-propagation (BP) algorithm for use in associative memories. This new algorithm, which assigns different weights to the various pairs in the energy function, is called multiple training back-propa

Adaptive stepsize algorithms for on-line
โœ G.D. Magoulas; V.P. Plagianakos; M.N. Vrahatis ๐Ÿ“‚ Article ๐Ÿ“… 2001 ๐Ÿ› Elsevier Science ๐ŸŒ English โš– 312 KB

In this paper a method for adapting the stepsize in on-line network training is presented. The proposed technique derives from the stochastic gradient descent proposed by Almeida et al. [On-line Learning in Neural Networks, 111-134, Cambridge University Press, 1998]. The new aspect of our approach c

A symbolic interpretation for back-propa
โœ P. Magrez; A. Rousseau ๐Ÿ“‚ Article ๐Ÿ“… 1992 ๐Ÿ› John Wiley and Sons ๐ŸŒ English โš– 974 KB

Two main problems for the neural network (NN) paradigm are discussed: the output value interpretation and the symbolic content of the connection matrix. In this article, we construct a solution for a very common architecture of pattern associators: back-propagation networks. First, we show how Za