Adaptive stepsize algorithms for on-line training of neural networks
By G.D. Magoulas, V.P. Plagianakos, M.N. Vrahatis
- Publisher
- Elsevier Science
- Year
- 2001
- Language
- English
- Size
- 312 KB
- Volume
- 47
- Category
- Article
- ISSN
- 0362-546X
Synopsis
In this paper a method for adapting the stepsize in on-line neural network training is presented. The proposed technique derives from the stochastic gradient descent of Almeida et al. [On-line Learning in Neural Networks, 111-134, Cambridge University Press, 1998]. The new aspect of our approach is that it takes into consideration previously computed information regarding the adaptation of the stepsize. The proposed algorithm has been implemented, tested, and compared against other on-line methods on three problems. The results show that it behaves predictably and reliably, and exhibits satisfactory average performance.
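The synopsis does not spell out the update rule, but the family of methods it builds on is easy to sketch. Below is a minimal Python illustration of an Almeida-style multiplicative stepsize adaptation for on-line (stochastic) gradient descent: the stepsize grows when successive gradients are positively correlated and shrinks when they point in opposing directions. The gain `gamma`, the lower bound `0.5`, the function names, and the toy quadratic objective are all illustrative assumptions, not values from the paper, and the paper's specific contribution of reusing previously computed stepsize information is not reproduced here.

```python
import numpy as np

def online_sgd_adaptive(grad_fn, w, eta=0.1, gamma=0.01, steps=200, rng=None):
    """Online gradient descent with an Almeida-style multiplicative
    stepsize adaptation (illustrative sketch, not the paper's algorithm).

    After each step, eta is scaled by the normalized inner product of
    successive stochastic gradients: positive correlation (steps keep
    pointing the same way) grows eta; negative correlation (a sign of
    overshooting) shrinks it. The factor is bounded below so that eta
    stays positive.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    g_prev = None
    for _ in range(steps):
        g = grad_fn(w, rng)  # stochastic gradient at the current weights
        if g_prev is not None:
            corr = g @ g_prev / (g_prev @ g_prev + 1e-12)
            eta *= max(0.5, 1.0 + gamma * corr)
        w = w - eta * g
        g_prev = g
    return w, eta

# Toy usage: a noisy quadratic with minimum at `target` (hypothetical data).
target = np.array([1.0, -2.0])

def noisy_quadratic_grad(w, rng):
    return (w - target) + 0.05 * rng.standard_normal(w.shape)

w_final, eta_final = online_sgd_adaptive(noisy_quadratic_grad, np.zeros(2))
print("weights:", w_final, "final stepsize:", eta_final)
```

The multiplicative form is the key design choice in this family: it lets the stepsize span orders of magnitude during training without hand-tuned schedules, which is what makes such schemes attractive for on-line settings where the data distribution may drift.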
SIMILAR VOLUMES
This paper presents a message-passing architecture simulating multilayer neural networks, adjusting its weights for each pair consisting of an input vector and a desired output vector. First, the multilayer neural network is defined, and the difficulties arising from parallel implementation are clarified…
In this paper, a novel approach to adjusting the weights of fuzzy neural networks using a Real-coded Chaotic Quantum-inspired Genetic Algorithm (RCQGA) is proposed. Fuzzy neural networks are traditionally trained by gradient-based methods, which may fall into a local minimum during the learning…
The ability of a neural network with one hidden layer to accurately learn a specified learning set increases with the number of nodes in the hidden layer; if a network has exactly the same number of internal nodes as the number of examples to be learnt, it is theoretically able to learn these examples…