
Analysis of the error back-propagation learning algorithms with gain

Authors: Qi Jia; Katsuyuki Hagiwara; Shiro Usui; Naohiro Toda


Publisher
John Wiley and Sons
Year
1995
Language
English
File size
768 KB
Volume
26
Category
Article
ISSN
0882-1666


Abstract

Several studies have proposed accelerating error back-propagation learning by introducing a parameter called the gain. In those studies, however, the acceleration effect was evaluated only numerically; there has been no theoretical analysis of the effect of the gain on the learning process.

This paper points out that those methods can also be realized without introducing the gain, and presents a detailed analysis of the effect of the gain from a unified viewpoint. The analysis reveals the following properties. Error back-propagation with a constant gain can be reduced to ordinary error back-propagation without a gain. When a dynamic gain is introduced, however, the method cannot be reduced to either the steepest descent method or the momentum method without the gain. Furthermore, it is shown that there exists a characteristic superellipse that determines the behavior of the gain.
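The constant-gain reduction mentioned in the abstract can be illustrated with a minimal sketch (not the paper's notation; the single sigmoid unit, squared-error loss, and all names below are illustrative assumptions): training a unit y = σ(β·wᵀx) with learning rate η follows exactly the same trajectory, up to the rescaling w ↦ βw, as training the gain-free unit y = σ(wᵀx) with learning rate ηβ².

```python
import numpy as np

# Illustrative sketch: a single sigmoid unit trained by gradient descent
# on squared error, showing that a constant gain beta can be absorbed
# into the weights and the learning rate.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def step_with_gain(w, x, t, eta, beta):
    """One gradient step on E = (y - t)^2 / 2 for y = sigmoid(beta * w.x)."""
    y = sigmoid(beta * w @ x)
    grad = (y - t) * y * (1.0 - y) * beta * x   # dE/dw
    return w - eta * grad

def step_plain(w, x, t, eta):
    """One gradient step on the same error for y = sigmoid(w.x), no gain."""
    y = sigmoid(w @ x)
    grad = (y - t) * y * (1.0 - y) * x          # dE/dw
    return w - eta * grad

rng = np.random.default_rng(0)
x = rng.normal(size=3)
t = 0.7
w0 = rng.normal(size=3)
beta, eta = 2.5, 0.1

# Train with constant gain beta ...
w = w0.copy()
for _ in range(5):
    w = step_with_gain(w, x, t, eta, beta)

# ... and without gain, starting from beta * w0 with rate eta * beta**2.
v = beta * w0
for _ in range(5):
    v = step_plain(v, x, t, eta * beta ** 2)

# The rescaled trajectories coincide: beta * w_k == v_k at every step.
assert np.allclose(beta * w, v)
```

The check works because if v = βw then both units produce the same output, and the gain-β update on w maps under the rescaling to exactly the gain-free update on v with rate ηβ². The dynamic-gain case analyzed in the paper does not admit such a reduction.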

By analyzing the characteristic superellipse, a theoretical basis is provided for the instability of the method with a dynamic gain. The paper thus gives a unified treatment of the methods with and without the gain, which had previously been considered independently. The resulting analysis of the effect of the gain on the learning process should help in developing new learning methods.

