A supervised learning algorithm (Scaled Conjugate Gradient, SCG) is introduced. The performance of SCG is benchmarked against that of the standard back propagation algorithm (BP) (Rumelhart, Hinton, & Williams, 1986), the conjugate gradient algorithm with line search (CGL) (Johansson, Dowla, &
A Local Supervised Learning Algorithm For Multi-Layer Perceptrons
By D. S. Vlachos
- Publisher: John Wiley and Sons
- Year: 2004
- Weight: 114 KB
- Volume: 1
- Category: Article
- ISSN: 1611-8170
Abstract
The back propagation of error in multi-layer perceptrons, when used for supervised training, is a non-local algorithm in space; that is, it requires knowledge of the network topology. On the other hand, learning rules in biological systems with many hidden units seem to be local in both space and time. In this work, a local learning algorithm is proposed which makes no distinction between input, hidden and output layers. Simulation results are presented and compared with other well known training algorithms. (© 2004 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)
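The abstract does not specify the paper's update rule, but the contrast it draws can be illustrated with a minimal sketch: a *local* rule updates each weight using only the activities of the two units it connects, whereas back propagation needs an error signal routed through the whole topology. The Hebbian-style update below is a generic stand-in for "local learning", not the algorithm of the paper; the function name and constants are assumptions for illustration.

```python
import numpy as np

def local_hebbian_update(w, pre, post, lr=0.01, decay=0.001):
    """Hypothetical local learning step (not the paper's algorithm).

    w    : (n_pre, n_post) weight matrix
    pre  : (n_pre,)  activations of the presynaptic units
    post : (n_post,) activations of the postsynaptic units

    Each entry w[i, j] is updated from pre[i] and post[j] alone,
    so no global error signal or topology knowledge is needed --
    this is the "locality in space" the abstract refers to.
    """
    return w + lr * np.outer(pre, post) - decay * w

rng = np.random.default_rng(0)
w = rng.normal(size=(3, 2))
pre = np.array([1.0, 0.5, -0.2])
post = np.array([0.3, -0.1])
w_new = local_hebbian_update(w, pre, post)
print(w_new.shape)  # (3, 2)
```

Because every weight update depends only on locally available quantities, the rule applies identically to input, hidden and output connections, mirroring the paper's claim of making no distinction between layers.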
SIMILAR VOLUMES
A fast iterative algorithm is proposed for the construction and the learning of a neural net achieving a classification task, with an input layer, one intermediate layer, and an output layer. The network is able to learn an arbitrary training set. The algorithm does not depend on a special learning
In [2], a parallel perceptron learning algorithm on the single-channel broadcast communication model was proposed to speed up the learning of the weights of perceptrons [3]. The results in [2] showed that, given n training examples, the average speedup is 1.48n/log n with n processors. Here, we explain how