𝔖 Bobbio Scriptorium
✦   LIBER   ✦

Accelerating the convergence of the back-propagation method

โœ Scribed by T. P. Vogl; J. K. Mangis; A. K. Rigler; W. T. Zink; D. L. Alkon


Publisher
Springer-Verlag
Year
1988
Tongue
English
Weight
621 KB
Volume
59
Category
Article
ISSN
0340-1200

No coin nor oath required. For personal study only.

✦ Synopsis


The utility of the back-propagation method in establishing suitable weights in a distributed adaptive network has been demonstrated repeatedly. Unfortunately, in many applications the number of iterations required before convergence can be large. The modifications to the back-propagation algorithm described here can greatly accelerate convergence. The modifications consist of three changes: 1) instead of updating the network weights after each pattern is presented to the network, the weights are updated only after the entire repertoire of patterns to be learned has been presented, at which time the algebraic sums of all the weight changes are applied; 2) instead of keeping ε, the "learning rate" (i.e., the multiplier on the step size), constant, it is varied dynamically so that the algorithm utilizes a near-optimum ε, as determined by the local optimization topography; and 3) the momentum factor α is set to zero when, as signified by a failure of a step to reduce the total error, the information inherent in prior steps is more likely to be misleading than beneficial. Only after the network takes a useful step, i.e., one that reduces the total error, does α again assume a non-zero value. Considering the selection of weights in neural nets as a problem in classical nonlinear optimization theory, the rationale for algorithms seeking only those weights that produce the globally minimum error is reviewed and rejected.
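The three changes above can be sketched in a few lines of Python. This is a minimal illustration, not the paper's implementation: the toy quadratic standing in for the network's total error, the growth/shrink constants `phi` and `beta`, and all function names are assumptions chosen for the example.

```python
import numpy as np

def adaptive_batch_descent(grad, error, w0, eps=0.1, alpha=0.9,
                           phi=1.05, beta=0.7, steps=200):
    """Batch descent with a dynamic learning rate and gated momentum.

    eps   -- learning rate, grown by phi after a useful step and
             shrunk by beta after a failed one (change 2)
    alpha -- momentum factor, set to zero after a failed step and
             restored once a step again reduces the error (change 3)
    """
    w = np.asarray(w0, dtype=float)
    velocity = np.zeros_like(w)
    momentum = alpha              # current (possibly zeroed) momentum
    best_err = error(w)
    for _ in range(steps):
        # Change 1: one step per full pass, using the batch gradient.
        step = -eps * grad(w) + momentum * velocity
        trial = w + step
        trial_err = error(trial)
        if trial_err < best_err:  # useful step: accept it
            w, best_err, velocity = trial, trial_err, step
            eps *= phi            # try a slightly larger step next time
            momentum = alpha      # restore momentum
        else:                     # failed step: retract it
            eps *= beta           # shrink the step size
            momentum = 0.0        # prior directions are now suspect
    return w, best_err

# Usage on a toy quadratic bowl (an illustrative stand-in for the
# total error summed over all training patterns):
A = np.array([[3.0, 0.5], [0.5, 1.0]])
error = lambda w: 0.5 * w @ A @ w
grad = lambda w: A @ w
w_final, e_final = adaptive_batch_descent(grad, error, [4.0, -3.0])
```

Because a step is accepted only when it reduces the total error, the error is monotone non-increasing, and the rate ε settles near the largest value the local topography will tolerate.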


📜 SIMILAR VOLUMES


Accelerating the convergence of the quad
โœ I.C. Chang ๐Ÿ“‚ Article ๐Ÿ“… 1975 ๐Ÿ› Elsevier Science ๐ŸŒ English โš– 252 KB

An algorithm to solve the divergence problem of the quadratically convergent SCF method is suggested and is applied to the calculation of the excited states of a number of small molecules. The results show definite improvement over the original method.

A Method of Accelerating the Convergence
โœ M.D. Mikhailov; M.N. ร–ziลŸik ๐Ÿ“‚ Article ๐Ÿ“… 1986 ๐Ÿ› Elsevier Science ๐ŸŒ English โš– 498 KB

The series solutions obtained for transport problems by the separation of variables or the integral transform technique often converge very slowly, and the differentiated series evaluated at the boundary may fail to converge. A general procedure is presented for developing alternative solutions which a

Accelerating the convergence of the coup
โœ Gustavo E. Scuseria; Timothy J. Lee; Henry F. Schaefer III ๐Ÿ“‚ Article ๐Ÿ“… 1986 ๐Ÿ› Elsevier Science ๐ŸŒ English โš– 370 KB

The direct inversion of the iterative subspace (DIIS) method is implemented into the closed-shell coupled-cluster single- and double-excitation (CCSD) model to improve the convergence of the coupled non-linear CCSD equations. As with self-consistent-field and gradient methods, the DIIS method proves to