A learning algorithm for multilayered neural networks based on linear least squares problems
✍ Authors: Friedrich Biegler-König; Frank Bärmann
- Publisher
- Elsevier Science
- Year
- 1993
- Language
- English
- File size
- 365 KB
- Volume
- 6
- Category
- Article
- ISSN
- 0893-6080
✦ Synopsis
An algorithm for the training of multilayered neural networks based solely on linear algebraic methods is presented. Its convergence speed, up to a certain limit of learning accuracy, is orders of magnitude better than that of classical back propagation. Furthermore, its learning aptitude increases with the number of internal nodes in the network (contrary to backprop). Especially, if the network includes a hidden layer with more nodes than the number of examples to be learned, and if the number of nodes in succeeding layers decreases monotonically, the presented algorithm in general finds an exact solution.
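The core idea can be sketched in a few lines: with hidden activations fixed, the output weights of a layer are the solution of an ordinary linear least squares problem. The snippet below is a minimal illustration of that one step (not the authors' full layer-by-layer algorithm), using invented toy data; it also shows the synopsis's exact-solution condition, since the hidden layer has more nodes than there are examples.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy learning set (all values invented): 20 examples, 3 inputs, 2 targets.
X = rng.normal(size=(20, 3))
T = rng.normal(size=(20, 2))

# Hidden layer with more nodes (25) than examples (20); its weights are
# simply fixed at random here for illustration.
W_hidden = rng.normal(size=(3, 25))
H = np.tanh(X @ W_hidden)            # hidden activations, shape (20, 25)

# The output weights follow from one linear least squares solve.
W_out, *_ = np.linalg.lstsq(H, T, rcond=None)

# With more hidden nodes than examples the system is underdetermined,
# so the residual is numerically zero, i.e., the examples are learned exactly.
residual = float(np.max(np.abs(H @ W_out - T)))
print(residual < 1e-8)
```

Because no gradient iteration is involved in this step, its cost is a single matrix factorization, which is where the claimed speed advantage over back propagation originates.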
📜 SIMILAR VOLUMES
We present the theoretical results about the construction of confidence intervals for a nonlinear regression based on least squares estimation and using the linear Taylor expansion of the nonlinear model output. We stress the assumptions on which these results are based, in order to derive an approp…
This article proposes a new second-order learning algorithm for training the multilayer perceptron (MLP) networks. The proposed algorithm is a revised Newton's method. A forward-backward propagation scheme is first proposed for network computation of the Hessian matrix, H, of the output error functi…
This paper presents a method for designing neural networks using a genetic algorithm (GA) with deterministic mutation (DM) based on learning. The GA presented in this paper has a large framework including DM, which is performed on the basis of the results from neural network learning. It can achieve…
The ability of a neural network with one hidden layer to accurately learn a specified learning set increases with the number of nodes in the hidden layer; if a network has exactly the same number of internal nodes as the number of examples to be learnt, it is theoretically able to learn these exampl…