
A perfect separation of discrete sample points by four-layered perceptron with localized representation

Authors: Takahumi Oohori, Naoyuki Nagao, Kazuhisa Watanabe


Publisher: John Wiley and Sons
Year: 1994
Language: English
File size: 847 KB
Volume: 25
Category: Article
ISSN: 0882-1666


Abstract

The conventional three-layered perceptron with distributed representation, trained by backpropagation, suffers from local minima, long learning times, and ambiguity in its internal representation. To address these problems, this paper proposes a four-layered perceptron, together with its learning algorithm, in which an additional hidden layer is introduced so that each discrete sample point is represented perfectly by the corresponding output of the upper hidden layer.

First, the perceptron learning algorithm is applied successively to the sample points, so that the input sample points are separated perfectly by piecewise sets of hyperplanes. Under this construction, the output matrix of the lower hidden layer is nonsingular. Consequently, a four-layered perceptron can be built in which the output matrix of the upper hidden layer is the identity matrix, and any set of discrete values can be produced at the output layer by adjusting the network coefficients. Computational experiments are presented for the realization of a three-valued logic function, a learning problem on the two-dimensional plane, and a pattern recognition problem using representative sample points. The results show that learning converges in less than 1/100 of the computation time required by the three-layered perceptron with backpropagation.
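The algebraic core of the construction can be sketched numerically. The sketch below is an illustration of the idea, not the paper's actual hyperplane-separation procedure: it assumes a lower hidden layer of random sigmoid units, whose output matrix on distinct sample points is generically nonsingular, in place of the constructive separation described in the abstract. Inverting that matrix gives an upper hidden layer whose outputs form the identity matrix on the samples, so setting the output weights to the target matrix reproduces any discrete values exactly. All variable names, sizes, and the random-weight choice are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative discrete sample points and three-valued targets.
X = rng.standard_normal((5, 2))                     # 5 distinct points in the plane
T = rng.integers(0, 3, size=(5, 1)).astype(float)   # discrete target values

N = X.shape[0]

# Lower hidden layer: N sigmoid units with random weights. For distinct
# inputs and generic weights, the N x N output matrix H is almost surely
# nonsingular (the property the paper's separation procedure guarantees
# constructively rather than probabilistically).
W1 = rng.standard_normal((X.shape[1], N))
b1 = rng.standard_normal(N)
H = 1.0 / (1.0 + np.exp(-(X @ W1 + b1)))

# Upper hidden layer: choosing linear weights W2 = H^{-1} makes the upper
# hidden output on the sample points the identity matrix, i.e. a localized
# representation where sample i activates unit i only.
W2 = np.linalg.inv(H)
U = H @ W2                                          # identity on the samples

# Output layer: with identity hidden outputs, setting the output weights
# to the target matrix reproduces the discrete targets exactly.
W3 = T
Y = U @ W3

print(np.max(np.abs(Y - T)))                        # recall error on the samples
```

A perceptron-style separation (as in the paper) replaces the random lower layer with trained hyperplanes, but the identity-matrix step and the exact recall of discrete targets work the same way once the lower-layer output matrix is nonsingular.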