
A Local Supervised Learning Algorithm For Multi-Layer Perceptrons

✍ Scribed by D. S. Vlachos


Publisher: John Wiley and Sons
Year: 2004
Weight: 114 KB
Volume: 1
Category: Article
ISSN: 1611-8170


✦ Synopsis

The back propagation of error in multi-layer perceptrons used for supervised training is a non-local algorithm in space; that is, it requires knowledge of the network topology. Learning rules in biological systems with many hidden units, on the other hand, appear to be local in both space and time. In this work, a local learning algorithm is proposed which makes no distinction between input, hidden, and output layers. Simulation results are presented and compared with other well-known training algorithms. (© 2004 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)
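
To make the locality contrast concrete, here is a minimal sketch in which every layer is trained by the same rule using only signals available at its own synapses. The update is a generic Hebbian/delta-style rule, and the placeholder hidden target is an assumption for illustration; this is not the specific algorithm proposed in the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    def local_layer_update(W, pre, post_target, lr=0.1):
        # Delta-style update using only locally available signals:
        # presynaptic activity `pre` and a locally supplied target.
        post = np.tanh(W @ pre)
        return W + lr * np.outer(post_target - post, pre)

    # Every layer gets the same rule, mirroring the idea of making no
    # distinction between input, hidden, and output layers. How hidden
    # targets are obtained is the crux of the actual algorithm; the
    # random placeholder below is purely an assumption.
    x = rng.normal(size=4)                    # input pattern
    t = np.array([1.0, -1.0])                 # supervised output target
    W1 = rng.normal(scale=0.1, size=(3, 4))
    W2 = rng.normal(scale=0.1, size=(2, 3))

    h_target = np.tanh(rng.normal(size=3))    # placeholder hidden target
    W1 = local_layer_update(W1, x, h_target)
    W2 = local_layer_update(W2, np.tanh(W1 @ x), t)

Note that neither update needs error signals propagated from other layers, which is what makes such a rule local in space, in contrast to back propagation.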


📜 SIMILAR VOLUMES


A scaled conjugate gradient algorithm for fast supervised learning
✍ Martin Fodslette Møller 📂 Article 📅 1993 🏛 Elsevier Science 🌐 English ⚖ 771 KB

A supervised learning algorithm (Scaled Conjugate Gradient, SCG) is introduced. The performance of SCG is benchmarked against that of the standard back propagation algorithm (BP) (Rumelhart, Hinton, & Williams, 1986), the conjugate gradient algorithm with line search (CGL) (Johansson, Dowla, &
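
For orientation, the toy loop below illustrates the conjugate-direction idea that SCG builds on, applied to a two-dimensional quadratic loss. Møller's algorithm replaces the exact step size used here with a scaled second-order estimate so that no line search is needed; this sketch uses a Fletcher-Reeves conjugacy factor and an assumed toy problem, and is not the published SCG.

    import numpy as np

    # Toy quadratic loss E(w) = 0.5 * w^T A w - b^T w, gradient A w - b.
    A = np.array([[3.0, 0.5], [0.5, 1.0]])    # positive-definite "Hessian"
    b = np.array([1.0, -2.0])

    w = np.zeros(2)
    g = A @ w - b                             # gradient at w
    d = -g                                    # first direction: steepest descent
    for _ in range(2):                        # <= dim steps on a quadratic
        alpha = -(g @ d) / (d @ A @ d)        # exact minimizer along d
        w = w + alpha * d
        g_new = A @ w - b
        beta = (g_new @ g_new) / (g @ g)      # Fletcher-Reeves factor
        d = -g_new + beta * d                 # next conjugate direction
        g = g_new

    print(w, np.linalg.solve(A, b))           # the two should agree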

Fast generating algorithm for a general three-layer perceptron
✍ R. Zollner; H.J. Schmitz; F. Wünsch; U. Krey 📂 Article 📅 1992 🏛 Elsevier Science 🌐 English ⚖ 513 KB

A fast iterative algorithm is proposed for the construction and the learning of a neural net achieving a classification task, with an input layer, one intermediate layer, and an output layer. The network is able to learn an arbitrary training set. The algorithm does not depend on a special learning

An optimal parallel perceptron learning algorithm
✍ Tzung-Pei Hong; Shian-Shyong Tseng 📂 Article 📅 1994 🏛 Elsevier Science 🌐 English ⚖ 211 KB

In [2], a parallel perceptron learning algorithm on the single-channel broadcast communication model was proposed to speed up the learning of weights of perceptrons [3]. The results in [2] showed that, given n training examples, the average speedup is 1.48·n/ln n with n processors. Here, we explain how
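
As a rough picture of the scheme described above, the loop below simulates n processors each checking one training example against the current weights in parallel, after which a single misclassified example drives the standard sequential perceptron update. The data and names are assumptions for illustration; this is not the algorithm of [2] or of the listed paper.

    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.normal(size=(8, 3))                  # n = 8 examples, 3 features
    true_w = np.array([1.0, -2.0, 0.5])          # assumed separating weights
    y = np.where(X @ true_w >= 0, 1.0, -1.0)     # linearly separable labels

    w = np.zeros(3)
    for _ in range(100):
        margins = y * (X @ w)                    # all n checks "in parallel"
        bad = np.flatnonzero(margins <= 0)
        if bad.size == 0:
            break                                # every example classified correctly
        i = bad[0]                               # one misclassified example broadcast
        w += y[i] * X[i]                         # standard perceptron update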