Implementing competitive learning using competitive activation: G. G. Sutton III, J. M. Maisog and J. A. Reggia. Department of Computer Science, University of Maryland, College Park, MD 20742 USA
- Book ID
- 103926119
- Publisher
- Elsevier Science
- Year
- 1988
- Language
- English
- File size
- 127 KB
- Volume
- 1
- Category
- Article
- ISSN
- 0893-6080
Synopsis
Neural network architectures and activation mechanisms which have circumscribed activation as an emergent property have been important in neurophysiology (e.g., visual feature detectors [1]) and cognitive modeling (e.g., letter perception in context [2]). Circumscribed activation, especially winner-take-all behavior, has usually been achieved by direct inhibitory connections between competing nodes. A new scheme for achieving circumscribed activation among competing nodes uses a competitive activation mechanism instead of inhibitory connections [3]. Competitive activation mechanisms minimize the number of connections needed in a network, so systems scale up well to the large, complex application networks of interest in cognitive science and AI (e.g., print-to-sound transformation [4] and diagnostic problem-solving [5]). However, so far no learning methods have been developed to work with competitive activation mechanisms. This abstract thus describes the first learning method adapted to work with competitive activation mechanisms.
Competitive activation mechanisms provide a natural and efficient context for implementing competitive learning. Competitive learning is an unsupervised learning rule which groups input patterns into classes based on the patterns' structure [1,6,7]. A set of weight vectors determines the class of each input pattern. The basic competitive learning network is a two-layer feedforward network. Every input node connects to every output node. The ordered set of weights on the incoming connections to an output node is its weight vector, and a network classifies an input vector by its similarity to these weight vectors. The weight vector of the output unit most similar to a given input pattern (judged by the inner product) is said to "win" the competition for that input pattern. The winner then adjusts its weight vector so that it will be even more similar to the input vector and thus more likely to win that input pattern the next time it appears. Thus competitive learning forms weight vectors which serve as prototypes for classes of input vectors.
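The winner-take-all update described above can be sketched in a few lines of NumPy. This is the standard competitive learning rule (inner-product winner selection, winner's weight vector nudged toward the input), not the authors' competitive-activation variant, whose specific activation dynamics are not given in this abstract; the function name, learning rate, and normalization step are illustrative assumptions.

```python
import numpy as np

def competitive_learning(patterns, n_classes, lr=0.1, epochs=10, seed=0):
    """Standard competitive learning: the output unit whose weight vector
    has the largest inner product with the input wins, and only the
    winner's weight vector moves toward that input."""
    rng = np.random.default_rng(seed)
    # One weight vector per output node, same dimensionality as the inputs.
    W = rng.random((n_classes, patterns.shape[1]))
    W /= np.linalg.norm(W, axis=1, keepdims=True)
    for _ in range(epochs):
        for x in patterns:
            winner = np.argmax(W @ x)               # inner-product similarity
            W[winner] += lr * (x - W[winner])       # move winner toward input
            W[winner] /= np.linalg.norm(W[winner])  # keep weight vectors normalized
    return W

# Two well-separated clusters of input vectors; after training, each
# weight vector acts as a prototype for one cluster.
X = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
W = competitive_learning(X, n_classes=2)
labels = [int(np.argmax(W @ x)) for x in X]
```

After training, the two patterns in each cluster are won by the same output unit, illustrating how the weight vectors become class prototypes.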
Competitive learning was implemented by using the following competitive activation and learning rules: