𝔖 Bobbio Scriptorium
✦   LIBER   ✦

Adaptive stepsize algorithms for on-line training of neural networks

✍ Scribed by G.D. Magoulas; V.P. Plagianakos; M.N. Vrahatis


Publisher: Elsevier Science
Year: 2001
Tongue: English
Weight: 312 KB
Volume: 47
Category: Article
ISSN: 0362-546X


✦ Synopsis


In this paper, a method for adapting the stepsize in on-line neural network training is presented. The proposed technique derives from the stochastic gradient descent of Almeida et al. [On-line Learning in Neural Networks, 111-134, Cambridge University Press, 1998]. The new aspect of our approach is that it takes into consideration previously computed information regarding the adaptation of the stepsize. The proposed algorithm has been implemented, tested, and compared against other on-line methods on three problems. The results show that it behaves predictably and reliably, and possesses satisfactory average performance.
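
The abstract only gestures at the mechanism, so a small sketch may help. Below is a minimal on-line gradient descent loop with one multiplicatively adapted stepsize per weight, in the spirit of the Almeida et al. scheme the paper builds on. The function names, the constants eta0, up, and down, and the squared-error example are illustrative assumptions, not the authors' exact algorithm, which additionally reuses previously computed stepsize information rather than reacting only to the last pair of gradients.

    import numpy as np

    # Illustrative sketch (not the paper's exact method): on-line gradient
    # descent with a per-parameter stepsize that grows when successive
    # gradients agree in sign and shrinks when they disagree.
    def online_train(w, stream, grad_fn, eta0=0.01, up=1.2, down=0.5):
        eta = np.full_like(w, eta0)       # one stepsize per weight
        g_prev = np.zeros_like(w)
        for x, t in stream:               # one (input, target) pair at a time
            g = grad_fn(w, x, t)          # gradient on the current example
            agree = g * g_prev            # >0: same sign, <0: sign flip
            eta = np.where(agree > 0, eta * up,
                  np.where(agree < 0, eta * down, eta))
            w = w - eta * g
            g_prev = g
        return w

    # Usage: a single linear unit trained on-line under squared error.
    def linear_sq_grad(w, x, t):
        return (np.dot(w, x) - t) * x     # d/dw of 0.5 * (w.x - t)^2

    rng = np.random.default_rng(0)
    w_true = np.array([2.0, -1.0])
    stream = [(x, np.dot(w_true, x)) for x in rng.normal(size=(200, 2))]
    w = online_train(np.zeros(2), stream, linear_sq_grad)

With up > 1 and down < 1, the stepsize drifts larger along directions where the on-line gradients are consistent and backs off where they oscillate, which is the basic behaviour such adaptive-stepsize schemes aim for.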


📜 SIMILAR VOLUMES


A parallel algorithm for gradient traini
✍ Zdeněk Hanzálek 📂 Article 📅 1998 🏛 Elsevier Science 🌐 English ⚖ 742 KB

This paper presents a message-passing architecture simulating multilayer neural networks, adjusting its weights for each pair, consisting of an input vector and a desired output vector. First, the multilayer neural network is defined, and the difficulties arising from parallel implementation are cla

Real-coded chaotic quantum-inspired gene
✍ Shuanfeng Zhao; Guanghua Xu; Tangfei Tao; Lin Liang 📂 Article 📅 2009 🏛 Elsevier Science 🌐 English ⚖ 690 KB

In this paper, a novel approach to adjusting the weightings of fuzzy neural networks using a Real-coded Chaotic Quantum-inspired genetic Algorithm (RCQGA) is proposed. Fuzzy neural networks are traditionally trained by using gradient-based methods, which may fall into a local minimum during the learni

On a class of efficient learning algorit
✍ Frank Bärmann; Friedrich Biegler-König 📂 Article 📅 1992 🏛 Elsevier Science 🌐 English ⚖ 457 KB

The ability of a neural network with one hidden layer to accurately learn a specified learning set increases with the number of nodes in the hidden layer; if a network has exactly the same number of internal nodes as the number of examples to be learnt, it is theoretically able to learn these exampl