𝔖 Bobbio Scriptorium
✦   LIBER   ✦

A new scheme for training feed-forward neural networks

โœ Scribed by Osama Abdel-Wahhab; M.A. Sid-Ahmed


Publisher: Elsevier Science
Year: 1997
Tongue: English
Weight: 410 KB
Volume: 30
Category: Article
ISSN: 0031-3203


✦ Synopsis


In this paper we present a new algorithm for training feed-forward neural networks that is orders of magnitude faster than the delta rule. It provides a substantial improvement over the method of Scalero and Tepedelenlioglu (IEEE Trans. Signal Process. 40(1) (1992)) in both training time and numerical stability. The method combines the modified back-propagation algorithm described by Scalero and Tepedelenlioglu with a faster training scheme, yielding better numerical stability. The algorithm is tested against other methods, and the results are presented.
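
The synopsis measures the new algorithm against the classical delta rule, i.e. plain gradient-descent back-propagation of a quadratic error. As a concrete reference point, the sketch below implements only that baseline on a toy XOR problem; it is not the authors' accelerated scheme, and the network size, learning rate, and NumPy formulation are illustrative assumptions.

    # Delta-rule baseline (plain gradient-descent back-propagation) for a tiny
    # sigmoid network on XOR. This is the slow reference method named in the
    # synopsis, NOT the paper's accelerated algorithm; hyperparameters assumed.
    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    # Toy data set: XOR inputs and targets.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    T = np.array([[0], [1], [1], [0]], dtype=float)

    n_hidden, lr, epochs = 4, 0.5, 20000

    # Bias terms are handled by appending a constant 1 to each layer's input.
    W1 = rng.normal(scale=0.5, size=(3, n_hidden))       # input (+bias) -> hidden
    W2 = rng.normal(scale=0.5, size=(n_hidden + 1, 1))   # hidden (+bias) -> output

    for _ in range(epochs):
        # Forward pass.
        X_aug = np.hstack([X, np.ones((len(X), 1))])
        H = sigmoid(X_aug @ W1)
        H_aug = np.hstack([H, np.ones((len(H), 1))])
        Y = sigmoid(H_aug @ W2)

        # Backward pass: deltas for the quadratic error E = 0.5 * sum((Y - T)**2).
        delta_out = (Y - T) * Y * (1.0 - Y)                  # output-layer deltas
        delta_hid = (delta_out @ W2[:-1].T) * H * (1.0 - H)  # hidden-layer deltas

        # Delta-rule weight updates (steepest descent).
        W2 -= lr * H_aug.T @ delta_out
        W1 -= lr * X_aug.T @ delta_hid

    print("outputs after training:", Y.ravel().round(3))

With several thousand epochs the outputs approach the XOR targets; the slow, step-by-step convergence of exactly this kind of update is what the faster schemes discussed in the synopsis aim to avoid.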


📜 SIMILAR VOLUMES


Hybrid learning schemes for fast training…
โœ Nicolaos B. Karayiannis ๐Ÿ“‚ Article ๐Ÿ“… 1996 ๐Ÿ› Elsevier Science ๐ŸŒ English โš– 905 KB

Fast training of feed-forward neural networks becomes increasingly important as the neural network field moves toward maturity. This paper begins with a review of various criteria proposed for training feed-forward neural networks, which include the frequently used quadratic error criterion, the rela…

Invariance priors for Bayesian feed-forward…
โœ Udo v. Toussaint; Silvio Gori; Volker Dose ๐Ÿ“‚ Article ๐Ÿ“… 2006 ๐Ÿ› Elsevier Science ๐ŸŒ English โš– 1001 KB

Neural networks (NN) are famous for their advantageous flexibility for problems when there is insufficient knowledge to set up a proper model. On the other hand, this flexibility can cause overfitting and can hamper the generalization of neural networks. Many approaches to regularizing NN have been…