In this paper we present a new algorithm, which is orders of magnitude faster than the delta rule, for training feed-forward neural networks. It provides a substantial improvement over the method of Scalero and Tepedelenlioglu (IEEE Trans. Signal Process. 40(1) (1992)) in both training time and nume…
Hybrid learning schemes for fast training of feed-forward neural networks
By Nicolaos B. Karayiannis
- Publisher: Elsevier Science
- Year: 1996
- Language: English
- Size: 905 KB
- Volume: 41
- Category: Article
- ISSN: 0378-4754
Synopsis
Fast training of feed-forward neural networks becomes increasingly important as the neural-network field moves toward maturity. This paper begins with a review of various criteria proposed for training feed-forward neural networks, including the frequently used quadratic error criterion, the relative entropy criterion, and a generalized training criterion. Minimizing these criteria by gradient descent yields a variety of supervised learning algorithms. The performance of these algorithms in complex training tasks is strongly affected by the initial set of internal representations, which are usually formed by a randomly generated set of synaptic weights. The convergence of gradient-descent-based learning algorithms in complex training tasks can be significantly improved by initializing the internal representations using an unsupervised learning process based on linear or nonlinear generalized Hebbian learning rules. The efficiency of the hybrid learning scheme presented in this paper is illustrated through experimental results.
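The two-phase scheme the synopsis describes can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's actual algorithm: it assumes "generalized Hebbian learning" means Sanger's rule (GHA) for the unsupervised phase, and that the supervised phase minimizes the quadratic error criterion by plain gradient descent on a one-hidden-layer sigmoid network. All function names, hyperparameters, and the XOR task are illustrative choices, not from the paper.

```python
import numpy as np

def gha_init(X, n_hidden, lr=0.01, epochs=20, seed=0):
    """Unsupervised phase: form hidden-layer weights with Sanger's
    generalized Hebbian rule instead of leaving them purely random."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.1, size=(n_hidden, X.shape[1]))
    for _ in range(epochs):
        for x in X:
            y = W @ x  # linear hidden activations
            # Sanger's rule: dW = lr * (y x^T - LT[y y^T] W),
            # LT = lower-triangular part (enforces sequential decorrelation)
            W += lr * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
    return W

def train(X, T, n_hidden=4, lr=0.5, epochs=2000, seed=0):
    """Supervised phase: gradient descent on the quadratic error,
    starting from the Hebbian-initialized hidden layer."""
    rng = np.random.default_rng(seed)
    W1 = gha_init(X, n_hidden)          # Hebbian initialization
    b1 = np.zeros(n_hidden)
    W2 = rng.normal(scale=0.1, size=(T.shape[1], n_hidden))
    b2 = np.zeros(T.shape[1])
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for _ in range(epochs):
        H = sig(X @ W1.T + b1)          # hidden-layer outputs
        Y = sig(H @ W2.T + b2)          # network outputs
        dY = (Y - T) * Y * (1 - Y)      # quadratic-error output delta
        dH = (dY @ W2) * H * (1 - H)    # back-propagated hidden delta
        W2 -= lr * dY.T @ H; b2 -= lr * dY.sum(0)
        W1 -= lr * dH.T @ X; b1 -= lr * dH.sum(0)
    return W1, b1, W2, b2

# XOR as a small stand-in for a "complex training task".
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)
W1, b1, W2, b2 = train(X, T)
```

In this sketch the Hebbian phase only replaces the random draw of the first-layer weights; the supervised phase is unchanged, which is the sense in which the scheme is "hybrid".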
SIMILAR VOLUMES
Device-independent color imaging demands a reliable color-appearance model. We present a method for faithfully approximating color-appearance models by means of feed-forward neural networks trained with the error back-propagation algorithm. In particular, we present experimental evidence that in sev…
The multilayer feed-forward ANN is an important modeling technique used in QSAR studies. The training of an ANN is usually carried out only to optimize the weights of the neural network, without paying attention to the network topology. Some other strategies used to train ANNs are, firs…