Hybrid interior point training of modular neural networks
By Peter T. Szymanski; Michael Lemmon; Christopher J. Bett
- Book ID
- 104348807
- Publisher
- Elsevier Science
- Year
- 1998
- Language
- English
- File size
- 571 KB
- Volume
- 11
- Category
- Article
- ISSN
- 0893-6080
Synopsis
Modular neural networks use a single gating neuron to select the outputs of a collection of agent neurons. Expectation-maximization (EM) algorithms provide one way of training modular neural networks to approximate non-linear functionals. This paper introduces a hybrid interior-point (HIP) algorithm for training modular networks. The HIP algorithm combines an interior-point linear programming (LP) algorithm with a Newton-Raphson iteration in such a way that the computational efficiency of the interior point LP methods is preserved. The algorithm is formally proven to converge asymptotically to locally optimal networks with a total computational cost that scales in a polynomial manner with problem size. Simulation experiments show that the HIP algorithm produces networks whose average approximation error is better than that of EM-trained networks. These results also demonstrate that the computational cost of the HIP algorithm scales at a slower rate than the EM-procedure and that, for small-size networks, the total computational costs of both methods are comparable.
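To make the architecture described in the synopsis concrete, here is a minimal sketch of a modular network's forward pass: a gating neuron produces softmax weights over a collection of simple agent networks and blends their outputs. All names, sizes, and the affine form of the agents are illustrative assumptions, not the paper's actual network; the HIP training procedure itself is not reproduced here.

```python
import math
import random

random.seed(0)

# Hypothetical sizes: 2 inputs, 3 agent (expert) networks, scalar output.
N_IN, N_AGENTS = 2, 3

# Each agent is assumed to be a simple affine map; the gate scores each agent.
w_agents = [[random.gauss(0, 1) for _ in range(N_IN)] for _ in range(N_AGENTS)]
b_agents = [random.gauss(0, 1) for _ in range(N_AGENTS)]
w_gate = [[random.gauss(0, 1) for _ in range(N_IN)] for _ in range(N_AGENTS)]

def dot(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def modular_forward(x):
    """Gating neuron selects among agent outputs via softmax weighting."""
    agent_out = [dot(w, x) + b for w, b in zip(w_agents, b_agents)]
    scores = [dot(w, x) for w in w_gate]
    m = max(scores)                        # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    gate = [e / z for e in exps]           # gating probabilities, sum to 1
    return sum(g * a for g, a in zip(gate, agent_out))

print(modular_forward([0.5, -1.0]))
```

Training, whether by EM or by the paper's hybrid interior-point method, amounts to fitting the agent and gate parameters so this blended output approximates the target functional.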
SIMILAR VOLUMES
Fast training of feed-forward neural networks has become increasingly important as the neural network field moves toward maturity. This paper begins with a review of various criteria proposed for training feed-forward neural networks, which include the frequently used quadratic error criterion, the rela