Effectiveness of making internal representations redundant in neural networks for solving optimization problems
By Itsuo Kumazawa (Member)
- Publisher: John Wiley and Sons
- Year: 1990
- Language: English
- Size: 993 KB
- Volume: 21
- Category: Article
- ISSN: 0882-1666
Abstract
In solving an optimization problem using a neural network, one must first define the representation schema that codes the solutions of the problem onto configurations of neuron states. This coding schema is called an internal representation of the problem, and the representation is considered redundant when the schema uses only configurations separated from each other by long Hamming distances. Under certain conditions, the redundant representation is shown to be very effective in eliminating the local minima problem, one of the most crucial problems in applying neural networks to optimization. In the redundant representation, a stochastic property that appears asymptotically as the number of neurons increases plays an important role in reading the solution reliably from the random and erroneous states of the network. In these networks, a probabilistic state transition schema is introduced so that the network can escape from local minima. By introducing redundancy, the randomly changing configuration can maintain enough order for the solution to be read while preserving sufficient randomness to escape from local minima. The result in this paper demonstrates a novel optimization capability of a network with the highly parallel and nondeterministic computation schema described above.
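The core idea of the abstract can be illustrated with a small sketch. The paper does not specify a particular code; the snippet below assumes the simplest redundant representation, a repetition code: each solution bit is spread across `r` neurons, so any two distinct solutions map to configurations at Hamming distance of at least `r`, and the solution is read back by majority vote even when the network's stochastic state transitions have flipped many neurons at random. The function names (`encode`, `decode`, `perturb`) and the redundancy factor are illustrative choices, not the paper's notation.

```python
import random

def encode(solution_bits, r):
    """Redundant internal representation: repeat each solution bit across r
    neurons, so distinct solutions differ in at least r neuron states."""
    return [b for b in solution_bits for _ in range(r)]

def decode(neuron_states, r):
    """Read the solution reliably from a noisy configuration by taking a
    majority vote over each block of r neurons."""
    return [int(sum(neuron_states[i * r:(i + 1) * r]) > r / 2)
            for i in range(len(neuron_states) // r)]

def perturb(states, flip_prob, rng):
    """Model the probabilistic state transitions: flip each neuron
    independently with probability flip_prob."""
    return [1 - s if rng.random() < flip_prob else s for s in states]

rng = random.Random(0)
solution = [1, 0, 1, 1, 0]
r = 25                              # redundancy factor (assumed value)
config = encode(solution, r)        # 125 neurons encode 5 solution bits
noisy = perturb(config, 0.1, rng)   # random flips, yet order is preserved
print(decode(noisy, r))             # majority vote recovers the solution
```

The sketch mirrors the trade-off the abstract describes: the perturbation supplies the randomness needed to escape local minima, while the large inter-codeword Hamming distance keeps the configuration decodable, with the failure probability of each majority vote vanishing as the number of neurons per bit grows.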