A parallel algorithm for gradient training of feedforward neural networks
✍ Author: Zdeněk Hanzálek
- Publisher
- Elsevier Science
- Year
- 1998
- Language
- English
- File size
- 742 KB
- Volume
- 24
- Category
- Article
- ISSN
- 0167-8191
✦ Synopsis
This paper presents a message-passing architecture simulating multilayer neural networks, adjusting the network weights for each training pair consisting of an input vector and a desired output vector. First, the multilayer neural network is defined, and the difficulties arising in a parallel implementation are clarified using Petri nets. Then an implementation of a neuron, split into synapse and body parts, is proposed by arranging virtual processors in a cascaded torus topology. The virtual processors are mapped onto node processors so as to minimize external communication. Internal communication is then reduced, and an implementation on a physical message-passing architecture is given. A time-complexity analysis follows from the algorithm specification and some simplifying assumptions. The theoretical results are compared with experimental ones measured on a transputer-based machine. Finally, the algorithm based on the splitting operation is compared with a classical one.
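To make the training scheme in the synopsis concrete, the following is a minimal sequential sketch of per-pair (online) gradient training of a one-hidden-layer feedforward network. It is an assumption-laden illustration, not the paper's parallel message-passing implementation: the function names (`train`, `predict`), the sigmoid activation, and all hyperparameters are choices made here for the example; the comments marking "synapse" and "body" roles only loosely echo the paper's neuron-splitting terminology.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train(pairs, n_in, n_hid, n_out, lr=0.5, epochs=2000, seed=0):
    """Online gradient training: weights are updated after each
    (input vector, desired output vector) pair, as in the synopsis."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.5, size=(n_hid, n_in))   # input -> hidden weights
    W2 = rng.normal(scale=0.5, size=(n_out, n_hid))  # hidden -> output weights
    for _ in range(epochs):
        for x, d in pairs:
            h = sigmoid(W1 @ x)                      # hidden activations ("body")
            y = sigmoid(W2 @ h)                      # network output
            delta2 = (y - d) * y * (1 - y)           # output-layer error term
            delta1 = (W2.T @ delta2) * h * (1 - h)   # back-propagated error
            W2 -= lr * np.outer(delta2, h)           # per-connection ("synapse") updates
            W1 -= lr * np.outer(delta1, x)
    return W1, W2

def predict(W1, W2, x):
    return sigmoid(W2 @ sigmoid(W1 @ x))
```

For example, training on the four XOR pairs with `train(pairs, 2, 4, 1, lr=1.0)` reduces the squared error relative to the untrained network. In the paper's setting, the work inside the inner loop is what gets distributed over virtual processors, rather than executed sequentially as here.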