𝔖 Bobbio Scriptorium
✦   LIBER   ✦

Extended Kalman Filter Training of Neural Networks on a SIMD Parallel Machine

โœ Scribed by Shuhui Li; Donald C. Wunsch; Edgar O'Hair; Michael G. Giesselmann


Publisher
Elsevier Science
Year
2002
Tongue
English
Weight
260 KB
Volume
62
Category
Article
ISSN
0743-7315

No coin nor oath required. For personal study only.

✦ Synopsis


The extended Kalman filter (EKF) algorithm has been shown to be advantageous for training neural networks. Unlike backpropagation (BP), however, the EKF algorithm requires many matrix operations, which greatly increase its computational complexity. This paper presents a method for performing EKF training on a SIMD parallel machine. We use a multistream decoupled extended Kalman filter (DEKF) training algorithm, which makes efficient use of the parallel resources and yields improved trained network weights. Guided by the overall design of the DEKF algorithm and by the goal of maximizing use of the parallel resources, the multistream DEKF training is implemented on a MasPar SIMD parallel machine. The performance of the parallel DEKF training algorithm is studied, and comparisons of pattern-mode and batch-mode training are performed for both the EKF and BP training algorithms.
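To make the decoupling idea concrete, the following is a minimal, single-stream NumPy sketch of a decoupled EKF weight update for a toy linear model. It is not the paper's multistream MasPar implementation: the two-group weight partition, the noise covariances, and the linear model are all illustrative assumptions. The key DEKF structure is that each weight group keeps its own error covariance, and the groups interact only through the shared scaling matrix A.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: learn y = w . x for a known w_true (illustrative assumption).
n_w, n_out = 4, 1
w_true = np.array([1.0, -2.0, 0.5, 3.0])
groups = [np.arange(0, 2), np.arange(2, 4)]   # decoupled weight groups

w = np.zeros(n_w)
P = [np.eye(len(g)) * 100.0 for g in groups]  # per-group error covariance
R = np.eye(n_out) * 1e-2                      # measurement-noise covariance
Q = 1e-6                                      # small process noise (assumed)

def dekf_step(x, d):
    """One decoupled-EKF update: x is the input pattern, d the target."""
    y = np.array([w @ x])                            # model output
    xi = np.array([d]) - y                           # innovation
    H = [x[g].reshape(-1, n_out) for g in groups]    # per-group Jacobians
    # Groups are coupled only through the global scaling matrix A.
    A = np.linalg.inv(R + sum(Hi.T @ Pi @ Hi for Hi, Pi in zip(H, P)))
    for i, g in enumerate(groups):
        K = P[i] @ H[i] @ A                          # per-group Kalman gain
        w[g] += K @ xi
        P[i] = P[i] - K @ H[i].T @ P[i] + Q * np.eye(len(g))

errs = []
for _ in range(200):
    x = rng.standard_normal(n_w)
    d = w_true @ x
    errs.append(abs(d - w @ x))
    dekf_step(x, d)

print(float(np.linalg.norm(w - w_true)))  # weight error after training
```

Keeping a small covariance matrix per group instead of one full n_w-by-n_w matrix is what reduces the matrix-operation cost, and it maps naturally onto SIMD hardware since every group performs the same update in lockstep.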


📜 SIMILAR VOLUMES


Parallel Implementation of a Recursive L
โœ J.E. Steck; B. Mcmillin; K. Krishnamurthy; G.G. Leininger ๐Ÿ“‚ Article ๐Ÿ“… 1993 ๐Ÿ› Elsevier Science ๐ŸŒ English โš– 391 KB

An algorithm based on the Marquardt-Levenberg least-squares optimization method has been shown by S. Kollias and D. Anastasiou to be a much more efficient training method than gradient descent when applied to some small feedforward neural networks. Yet, for many applications, the increase in com