Accelerating neural network training using weight extrapolations
Written by S.V. Kamarthi; S. Pittner
- Book ID
- 104349005
- Publisher
- Elsevier Science
- Year
- 1999
- Language
- English
- File size
- 204 KB
- Volume
- 12
- Category
- Article
- ISSN
- 0893-6080
Free; no registration required. For personal study only.
Synopsis
The backpropagation (BP) algorithm for training feedforward neural networks has proven robust even on difficult problems. However, its high-quality results come at the cost of a long training time for adjusting the network parameters, which can be discouraging in many real-world applications. Even on relatively simple problems, standard BP often requires a lengthy training process in which the complete set of training examples is processed hundreds or thousands of times. In this paper, a universal acceleration technique for the BP algorithm based on extrapolation of each individual interconnection weight is presented. This extrapolation procedure is easy to implement and is activated only a few times between iterations of the conventional BP algorithm. Unlike earlier acceleration procedures, it minimally alters the computational structure of the BP algorithm. The viability of this new approach is demonstrated on three examples. The results suggest that it yields significant savings in the computation time of the standard BP algorithm. Moreover, the solution computed by the proposed approach always lies in close proximity to the one obtained by the conventional BP procedure. Hence, the proposed method provides a real acceleration of the BP algorithm without degrading the usefulness of its solutions. The performance of the new method is also compared with that of the conjugate gradient algorithm, an improved and faster variant of the BP algorithm.
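The per-weight extrapolation idea can be sketched as follows. This is a minimal illustration, not the authors' exact fitting procedure: it assumes that, near convergence, each weight's successive BP updates shrink roughly geometrically, so summing the remaining geometric series gives a jump toward that weight's limit. The function name and the three-snapshot interface are hypothetical.

```python
import numpy as np

def extrapolate_weights(w, w_prev, w_prev2, max_ratio=0.9):
    """Jump each weight toward its apparent limit (hedged sketch).

    Assumes each weight's successive updates shrink geometrically with a
    per-weight ratio r = d2 / d1; the sum of all remaining updates is then
    d2 * r / (1 - r). Weights whose updates are not shrinking monotonically
    (r <= 0 or r >= max_ratio) are left unchanged.
    """
    d1 = w_prev - w_prev2                        # older update
    d2 = w - w_prev                              # most recent update
    # per-weight update ratio; 0 where the older update is (near) zero
    r = np.divide(d2, d1, out=np.zeros_like(w, dtype=float),
                  where=np.abs(d1) > 1e-12)
    converging = (r > 0.0) & (r < max_ratio)     # same sign, shrinking
    safe_denom = np.where(converging, 1.0 - r, 1.0)
    jump = np.where(converging, d2 * r / safe_denom, 0.0)
    return w + jump

# e.g. snapshots 0.75, 0.875, 0.9375 (ratio 0.5) extrapolate straight to 1.0
```

In a training loop, one would take three weight snapshots a fixed number of BP epochs apart, apply the jump, and then resume conventional BP, repeating the extrapolation only a few times as the paper describes.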
SIMILAR BOOKS
This paper presents a novel approach to the general problem of the control of processes whose dynamic characteristics are not known, or little known. It demonstrates how a system consisting of a relatively small number of neuronlike elements can be used to control a wide variety of processes with li…
Neural network weights are subject to errors caused by technological tolerances when implemented in digital or analog hardware. Since these random variations are unavoidable and unpredictable, they can seriously affect the expected performances. This work proposes a learning algorithm that takes wei…