𝔖 Bobbio Scriptorium
✦   LIBER   ✦

A forward-propagation learning rule for neural inverse models in consideration of the correlation of propagated errors

✍ Scribed by Yoshihiro Ohama; Naohiro Fukumura; Yoji Uno


Publisher: John Wiley and Sons
Year: 2006
Tongue: English
Weight: 600 KB
Volume: 37
Category: Article
ISSN: 0882-1666


✦ Synopsis

We have proposed the forward-propagation rule (FP) as an inverse-model learning scheme from the viewpoint of biological motor control. This scheme is based on a Newton-like method, by which a multilayered neural network can acquire an inverse model of the controlled object in a small number of iterative learning trials. The learning procedure is complicated, however: it requires estimating the supervisor's signal for the input–output signal of each neuron and solving a linear multiple-regression problem to update the connection weights, which makes the learning process difficult to analyze. This paper introduces the correlation of the propagated error signal, from the viewpoint of the maximum-likelihood method, in order to realize goal-directed learning, which had not previously been considered in FP, and extends the learning rule to the generalized least-squares method. As a result, it is clearly shown that the learning rule in FP is an approximate gradient method. The learning ability of the method is demonstrated by computer simulation. The proposed procedure contains a regularization term derived from the logarithmic likelihood, and its behavior after convergence of learning is more stable than that of the conventional method. It is also shown that learning can be performed by a simplified method in which the error is simply propagated in the forward direction. © 2006 Wiley Periodicals, Inc. Syst Comp Jpn, 37(13): 54–66, 2006; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/scj.20484
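The abstract's closing remark, that learning can proceed by simply propagating the error in the forward direction, can be illustrated with a toy sketch. This is not the paper's FP rule itself: the linear plant, the linear inverse model, the known Jacobian, and the learning rate below are all illustrative assumptions. The sketch trains an inverse model by scaling the goal-directed output error with the plant's Jacobian, which is one reading of the "approximate gradient method" interpretation.

```python
import numpy as np

# Toy sketch of goal-directed inverse-model learning: the output error is
# passed "forward" through the plant's Jacobian instead of backpropagating
# a target command. Hypothetical names throughout; NOT the paper's FP rule.

def plant(u):
    """Hypothetical controlled object: y = 0.5*u + 0.1."""
    return 0.5 * u + 0.1

def plant_jacobian(u):
    """dy/du of the plant (constant here; assumed known)."""
    return 0.5

# Linear inverse model u = w[0]*y_d + w[1]. Its exact solution is
# w = (2.0, -0.2), since u = 2*y_d - 0.2 inverts the plant above.
w = np.zeros(2)
lr = 0.5
rng = np.random.default_rng(0)

for _ in range(2000):
    y_d = rng.uniform(-1.0, 1.0)       # desired plant output (the goal)
    x = np.array([y_d, 1.0])           # inverse-model input features
    u = w @ x                          # command produced by the inverse model
    e = y_d - plant(u)                 # goal-directed output error
    # Gradient step on e**2/2: the error is scaled by the plant's Jacobian
    # and pushed directly onto the weights.
    w += lr * e * plant_jacobian(u) * x

# The composite map plant(inverse(y_d)) should now track y_d closely.
max_err = max(abs(plant(w @ np.array([y, 1.0])) - y) for y in (-0.5, 0.0, 0.5))
```

Because the plant is linear and noiseless, this stochastic update converges geometrically to the exact inverse; the paper's contribution concerns the far harder multilayer case, where the propagated errors are correlated.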


📜 SIMILAR VOLUMES


A concurrent learning algorithm of forwa…
✍ Satoshi Yamaguchi; Nozomu Okazaki; Hidekiyo Itakura 📂 Article 📅 1995 🏛 John Wiley and Sons 🌐 English ⚖ 768 KB

Abstract: This paper proposes a concurrent learning algorithm for forward and inverse modeling. The algorithm consists of two phases. In the first phase, a feedback controller is used. The forward model is trained using the output values of the controller as the input values to the system and …