
A second-order learning algorithm for multilayer networks based on block Hessian matrix

✍ Scribed by Yi-Jen Wang; Chin-Teng Lin


Publisher: Elsevier Science
Year: 1998
Tongue: English
Weight: 392 KB
Volume: 11
Category: Article
ISSN: 0893-6080

No coin nor oath required. For personal study only.

✦ Synopsis


This article proposes a new second-order learning algorithm for training multilayer perceptron (MLP) networks. The proposed algorithm is a revised Newton's method. A forward-backward propagation scheme is first proposed for network computation of the Hessian matrix, H, of the output error function of the MLP. A block Hessian matrix, H_b, is then defined to approximate and simplify H. Several lemmas and theorems are proved to uncover the important properties of H and H_b, and to verify that H_b is a good approximation of H; H_b preserves the major properties of H. The theoretical analysis leads to the development of an efficient way of computing the inverse of H_b recursively. In the proposed second-order learning algorithm, the least squares estimation technique is adopted to further lessen the local-minimum problem. The proposed algorithm overcomes not only the drawbacks of the standard backpropagation algorithm (i.e., slow asymptotic convergence rate, poor controllability of convergence accuracy, local-minimum problems, and high sensitivity to the learning constant), but also the shortcomings of the normal Newton's method applied to the MLP, such as the lack of a network implementation of H, poor representability of the diagonal terms of H, the heavy computational load of inverting H, and the requirement of a good initial estimate of the solution (weights). Several example problems are used to demonstrate the efficiency of the proposed learning algorithm. Extensive performance comparisons (convergence rate and accuracy) of the proposed algorithm with other learning schemes (including the standard backpropagation algorithm) are also made.
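The core idea in the abstract can be illustrated with a small sketch: instead of inverting the full Hessian H, approximate it by per-layer blocks and solve each small block system independently. This is a minimal, hypothetical illustration (the function name, damping term, and toy quadratic are assumptions for the sketch), not the authors' exact algorithm, which additionally computes H_b by forward-backward propagation, inverts it recursively, and incorporates a least squares step:

```python
# Hypothetical sketch: a Newton-style update with a block-diagonal
# Hessian approximation H_b. Each layer's weights form one block, so
# only small per-block systems are solved instead of inverting all of H.
import numpy as np

def block_newton_step(grads, hessian_blocks, damping=1e-3):
    """Return per-block updates dw_k = -(H_k + damping*I)^{-1} g_k."""
    steps = []
    for g, H in zip(grads, hessian_blocks):
        H_damped = H + damping * np.eye(H.shape[0])  # keep each block invertible
        steps.append(-np.linalg.solve(H_damped, g))
    return steps

# Toy quadratic error E(w) = 0.5 * w^T H w with a block-diagonal H,
# standing in for a two-layer network's error surface:
H1 = np.array([[2.0, 0.5], [0.5, 1.0]])
H2 = np.array([[3.0]])
w1, w2 = np.array([1.0, -2.0]), np.array([0.5])
g1, g2 = H1 @ w1, H2 @ w2                      # gradients of the quadratic
d1, d2 = block_newton_step([g1, g2], [H1, H2], damping=0.0)
w1, w2 = w1 + d1, w2 + d2                      # one Newton step reaches the minimum
print(np.allclose(w1, 0.0), np.allclose(w2, 0.0))
```

On an exactly block-diagonal quadratic, one such step lands on the minimum; on a real MLP the off-block terms of H are nonzero, which is why the paper must prove that H_b remains a good approximation.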


📜 SIMILAR VOLUMES


A learning algorithm for multilayered neural networks
✍ Friedrich Biegler-König; Frank Bärmann 📂 Article 📅 1993 🏛 Elsevier Science 🌐 English ⚖ 365 KB

An algorithm for the training of multilayered neural networks solely based on linear algebraic methods is presented. Its convergence speed up to a certain limit of learning accuracy is orders of magnitude better than that of the classical back propagation. Furthermore, its learning aptitude increa