𝔖 Bobbio Scriptorium
✦   LIBER   ✦

Principal components, minor components, and linear neural networks

✍ Scribed by Erkki Oja


Publisher: Elsevier Science
Year: 1912
Tongue: English
Weight: 731 KB
Volume: 5
Category: Article
ISSN: 0893-6080


✦ Synopsis


Many neural network realizations have recently been proposed for the statistical technique of Principal Component Analysis (PCA). Explicit connections between numerical constrained adaptive algorithms and neural networks with constrained Hebbian learning rules are reviewed. The Stochastic Gradient Ascent (SGA) neural network is proposed and shown to be closely related to the Generalized Hebbian Algorithm (GHA). The SGA behaves better for extracting the less dominant eigenvectors. The SGA algorithm is further extended to the case of learning minor components. The symmetrical Subspace Network is known to give a rotated basis of the dominant eigenvector subspace, but usually not the true eigenvectors themselves. Two extensions are proposed. In the first, each neuron has a scalar parameter which breaks the symmetry, so that the true eigenvectors are obtained with a local and fully parallel learning rule. In the second, the case of an arbitrary number of parallel neurons is considered, not necessarily less than the input vector dimension.
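To make the learning rules named in the synopsis concrete, below is a minimal Python sketch of one online update step each for the GHA, the SGA, and the symmetry-broken (weighted) Subspace Network. The update formulas follow the standard published forms of these rules; the function names, the learning rate lr, and the synthetic sanity check are illustrative choices, not taken from the paper itself.

    import numpy as np

    def gha_step(W, x, lr):
        # Generalized Hebbian Algorithm (Sanger):
        # dw_i = lr * y_i * (x - sum_{j<=i} y_j w_j)
        y = W @ x
        for i in range(W.shape[0]):
            W[i] += lr * y[i] * (x - y[: i + 1] @ W[: i + 1])
        return W

    def sga_step(W, x, lr):
        # Stochastic Gradient Ascent:
        # dw_i = lr * y_i * (x - y_i w_i - 2 * sum_{j<i} y_j w_j)
        y = W @ x
        for i in range(W.shape[0]):
            W[i] += lr * y[i] * (x - y[i] * W[i] - 2.0 * (y[:i] @ W[:i]))
        return W

    def weighted_subspace_step(W, x, theta, lr):
        # Subspace Network with per-neuron symmetry-breaking scalars theta_i:
        # dw_i = lr * y_i * (x - theta_i * sum_j y_j w_j)
        # Distinct theta_i break the rotational symmetry, so the rows tend
        # toward the true eigenvectors rather than an arbitrary rotated
        # basis of the dominant subspace.
        y = W @ x
        proj = y @ W              # sum_j y_j w_j, shared by all neurons
        W += lr * y[:, None] * (x - theta[:, None] * proj)
        return W

    # Sanity check on synthetic centered data with a known diagonal covariance:
    # the rows of W should approach +/- the leading eigenvectors e1, e2.
    rng = np.random.default_rng(0)
    X = rng.multivariate_normal(np.zeros(4), np.diag([5.0, 3.0, 1.0, 0.5]), size=20000)
    W = 0.1 * rng.standard_normal((2, 4))
    for x in X:
        W = sga_step(W, x, lr=0.005)
    print(np.round(W, 2))

The sketch covers only the PCA case; the paper's minor-component extension modifies the rule (essentially reversing the direction of adaptation, with normalization), and the exact conditions are given in the article itself.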


📜 SIMILAR VOLUMES


Feedforward neural networks for principal…
✍ Sandro Nicole 📂 Article 📅 2000 🏛 Elsevier Science 🌐 English ⚖ 233 KB

In recent times, an upsurge of interest in the study of artificial neural networks apt to computing a principal component extraction has been observed. The present work is devoted to the description and performance analysis (by means of computer simulations) of some neural networks of such a kind. Th…