𝔖 Bobbio Scriptorium
✦   LIBER   ✦

A learning algorithm for multilayered neural networks based on linear least squares problems

✍ Scribed by Friedrich Biegler-König; Frank Bärmann


Publisher
Elsevier Science
Year
1993
Tongue
English
Weight
365 KB
Volume
6
Category
Article
ISSN
0893-6080

No coin nor oath required. For personal study only.

✦ Synopsis


An algorithm for the training of multilayered neural networks solely based on linear algebraic methods is presented. Its convergence speed up to a certain limit of learning accuracy is orders of magnitude better than that of the classical back propagation. Furthermore, its learning aptitude increases with the number of internal nodes in the network (contrary to backprop). Especially, if the network includes a hidden layer with more nodes than the number of examples to be learned, and if the number of nodes in succeeding layers decreases monotonically, the presented algorithm in general finds an exact solution.
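The exact-solution claim in the synopsis can be illustrated with a minimal sketch (not the authors' algorithm, which trains all layers; here only the output weights are solved, with random fixed hidden weights assumed for illustration): once a hidden layer has more nodes than there are examples, fitting the output layer reduces to a linear least-squares problem that is solvable exactly.

```python
import numpy as np

rng = np.random.default_rng(0)

X = rng.standard_normal((10, 3))         # 10 examples, 3 inputs each
T = rng.standard_normal((10, 2))         # 2 target outputs per example

# Hidden layer wider than the number of examples (16 > 10), weights fixed.
W_hidden = rng.standard_normal((3, 16))
H = np.tanh(X @ W_hidden)                # hidden activations, shape (10, 16)

# Output weights from the linear least-squares problem  H @ W_out ≈ T.
# With rank(H) = 10, the system is underdetermined and admits an exact fit.
W_out, *_ = np.linalg.lstsq(H, T, rcond=None)

print(np.allclose(H @ W_out, T, atol=1e-8))   # exact up to rounding
```

The key point mirrors the synopsis: more hidden nodes than examples makes the linear system underdetermined, so a least-squares solver recovers the targets exactly rather than merely approximately.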


📜 SIMILAR VOLUMES


Construction of confidence intervals for
✍ I. Rivals; L. Personnaz 📂 Article 📅 2000 🏛 Elsevier Science 🌐 English ⚖ 424 KB

We present the theoretical results about the construction of confidence intervals for a nonlinear regression based on least squares estimation and using the linear Taylor expansion of the nonlinear model output. We stress the assumptions on which these results are based, in order to derive an approp

A second-order learning algorithm for mu
✍ Yi-Jen Wang; Chin-Teng Lin 📂 Article 📅 1998 🏛 Elsevier Science 🌐 English ⚖ 392 KB

This article proposes a new second-order learning algorithm for training the multilayer perceptron (MLP) networks. The proposed algorithm is a revised Newton's method. A forward-backward propagation scheme is first proposed for network computation of the Hessian matrix, H, of the output error functi

A genetic algorithm with deterministic m
✍ Minoru Fukumi; Norio Akamatsu 📂 Article 📅 1998 🏛 John Wiley and Sons 🌐 English ⚖ 189 KB 👁 2 views

This paper presents a method for designing neural networks using a genetic algorithm (GA) with deterministic mutation (DM) based on learning. The GA presented in this paper has a large framework including DM, which is performed on the basis of the results from neural network learning. It can achieve

On a class of efficient learning algorit
✍ Frank Bärmann; Friedrich Biegler-König 📂 Article 📅 1992 🏛 Elsevier Science 🌐 English ⚖ 457 KB

The ability of a neural network with one hidden layer to accurately learn a specified learning set increases with the number of nodes in the hidden layer; if a network has exactly the same number of internal nodes as the number of examples to be learnt, it is theoretically able to learn these exampl