Discretizing continuous neural networks using a polarization learning rule
By Lifeng Wang; H.D. Cheng
- Publisher
- Elsevier Science
- Year
- 1997
- Language
- English
- Size
- 661 KB
- Volume
- 30
- Category
- Article
- ISSN
- 0031-3203
Synopsis
Discrete neural networks are simpler than their continuous counterparts, can obtain more stable solutions, and their hidden-layer representations are easier to interpret. This paper presents a polarization learning rule for discretizing multi-layer neural networks with continuous activation functions. This rule forces the activation value of a neuron toward the two poles of its activation function. First, we use this rule in the form of a modified error function to discretize the hidden units of a back-propagation network. Then, we apply the same principle to second-order recurrent networks to solve grammatical inference problems. The experimental results are superior to those obtained with existing approaches.
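The synopsis describes the rule only at a high level. As a minimal sketch of the general idea, assuming a sigmoid hidden layer whose poles are 0 and 1, one could augment the error function with a penalty such as h(1 - h), which is zero only at the poles. The penalty form, the trade-off weight `lam`, and all names below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def sigmoid(z):
    """Continuous activation with poles 0 and 1."""
    return 1.0 / (1.0 + np.exp(-z))

def polarized_error(y_pred, y_true, hidden, lam=0.1):
    """Mean-squared error plus an assumed polarization penalty.

    hidden : hidden-layer activations in (0, 1)
    lam    : illustrative trade-off weight (not from the paper)
    """
    mse = 0.5 * np.mean((y_pred - y_true) ** 2)
    # h * (1 - h) vanishes at h = 0 and h = 1, so minimizing it
    # drives each activation toward one of the two poles.
    polarization = np.mean(hidden * (1.0 - hidden))
    return mse + lam * polarization

# Toy usage with random values (illustration only):
rng = np.random.default_rng(0)
hidden = sigmoid(rng.normal(size=8))
y_pred = sigmoid(rng.normal(size=4))
y_true = np.array([0.0, 1.0, 1.0, 0.0])
print(polarized_error(y_pred, y_true, hidden))
```

Under gradient descent this penalty pushes each activation toward its nearer pole: d/dh [h(1 - h)] = 1 - 2h is positive for h < 0.5 (driving h down toward 0) and negative for h > 0.5 (driving h up toward 1).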
SIMILAR VOLUMES
The following learning problem is considered, for continuous-time recurrent neural networks having sigmoidal activation functions. Given a "black box" representing an unknown system, measurements of output derivatives are collected, for a set of randomly generated inputs, and a network is used to ap…
This paper proposes a GRG (Greedy Rule Generation) algorithm, a new method for generating classification rules from a data set with discrete attributes. The algorithm is "greedy" in the sense that at every iteration, it searches for the best rule to generate. The criteria for the best rule include t…
How could synapse number and position on a dendrite affect neuronal behavior with respect to the decoding of firing rate and temporal pattern? We developed a model of a neuron with a passive dendrite and found that dendritic length and the particular synapse positions directly determine the behavior…