Learning in multilayer perceptrons using global optimization strategies
Authors: V.P. Plagianakos; G.D. Magoulas; M.N. Vrahatis
- Publisher: Elsevier Science
- Year: 2001
- Language: English
- File size: 337 KB
- Volume: 47
- Category: Article
- ISSN: 0362-546X
Synopsis
Learning algorithms for multilayer perceptrons are usually based on local minimization methods, which can often become trapped in a local minimum of the error function. In this work, the use of global optimization strategies for training multilayer perceptrons is investigated. These methods are expected to lead to "optimal" or "near-optimal" weight configurations by allowing the network to escape local minima during training. The paper reviews the fundamentals of a recently proposed deflection procedure, simulated annealing, and genetic and evolutionary algorithms, and introduces a new differential evolution strategy. Simulations and comparisons are presented.
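To illustrate the differential evolution idea the synopsis mentions, below is a minimal sketch of DE/rand/1/bin minimizing the error of a tiny 2-2-1 perceptron on XOR. The population size, mutation factor F, crossover rate CR, network shape, and task are illustrative assumptions for this sketch, not the settings used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

def error(w):
    # Unpack a flat weight vector into a 2-2-1 MLP with sigmoid units.
    W1 = w[:4].reshape(2, 2); b1 = w[4:6]
    W2 = w[6:8];              b2 = w[8]
    h = 1 / (1 + np.exp(-(X @ W1 + b1)))   # hidden layer
    o = 1 / (1 + np.exp(-(h @ W2 + b2)))   # output
    return np.mean((o - y) ** 2)           # mean squared error

NP, D, F, CR = 20, 9, 0.8, 0.9            # population size, dims, DE parameters
pop = rng.uniform(-1, 1, (NP, D))
fit = np.array([error(p) for p in pop])

for gen in range(2000):
    for i in range(NP):
        # Pick three distinct population members other than i.
        a, b, c = pop[rng.choice([j for j in range(NP) if j != i], 3, replace=False)]
        mutant = a + F * (b - c)                  # differential mutation
        cross = rng.random(D) < CR
        cross[rng.integers(D)] = True             # ensure at least one gene crosses
        trial = np.where(cross, mutant, pop[i])   # binomial crossover
        f = error(trial)
        if f <= fit[i]:                           # greedy selection
            pop[i], fit[i] = trial, f

best = pop[np.argmin(fit)]
print(f"best MSE: {fit.min():.4f}")
```

Because selection acts on a whole population rather than a single point, the search can step over local minima of the error surface that would trap a gradient-based learner.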
Similar volumes
Each year, the U.S. Army procures billions of dollars worth of weapons and equipment. The process of deciding what to buy, when to buy, and in what quantities is extremely complex, requiring extensive analysis. Two techniques used in this analysis are mathematical programming and cost estimation.
A new strategy for global geometry optimization of clusters is presented. Important features are a restriction of search space to favorable nearest-neighbor distance ranges, a suitable cluster growth representation with diminished correlations, and easy transferability of the results to larger clusters.