Extraction of rules from discrete-time recurrent neural networks
Authors: Christian W. Omlin; C. Lee Giles
- Publisher
- Elsevier Science
- Year
- 1996
- Language
- English
- File size
- 986 KB
- Volume
- 9
- Category
- Article
- ISSN
- 0893-6080
Synopsis
Abstract--The extraction of symbolic knowledge from trained neural networks and the direct encoding of (partial) knowledge into networks prior to training are important issues: they allow the exchange of information between symbolic and connectionist knowledge representations. The focus of this paper is on the quality of the rules that are extracted from recurrent neural networks. Discrete-time recurrent neural networks can be trained to correctly classify strings of a regular language. Rules defining the learned grammar can be extracted from networks in the form of deterministic finite-state automata (DFAs) by applying clustering algorithms in the output space of recurrent state neurons. Our algorithm can extract, from the same network, different finite-state automata that are consistent with a training set. We compare the generalization performance of these different models and of the trained network, and we introduce a heuristic that permits us to choose, among the consistent DFAs, the model which best approximates the learned regular grammar.
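The extraction procedure described in the synopsis (clustering in the output space of the recurrent state neurons, then reading off DFA transitions) can be sketched roughly as follows. This is a minimal illustration, not the paper's actual implementation: the network weights are random placeholders standing in for a trained second-order RNN, the quantization level `q` plays the role of the clustering granularity, and acceptance labeling via an output neuron is omitted.

```python
import numpy as np
from collections import deque

# Hypothetical "trained" first-order recurrent net (placeholder weights):
#   state' = sigmoid(W @ state + U[:, symbol]),  binary alphabet {0, 1}.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))
U = rng.normal(size=(4, 2))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def rnn_step(state, symbol):
    return sigmoid(W @ state + U[:, symbol])

def extract_dfa(q=2, alphabet=(0, 1), max_states=100):
    """Partition each state neuron's output range [0, 1) into q intervals
    (a simple stand-in for clustering) and breadth-first-search the
    quantized state space to build DFA transitions."""
    def quantize(state):
        # Map each neuron's activation to one of q intervals.
        return tuple(np.minimum((state * q).astype(int), q - 1))

    initial = np.zeros(4)                 # placeholder initial network state
    start = quantize(initial)
    states = {start: initial}             # one continuous representative per cluster
    delta = {}                            # (cluster, symbol) -> cluster
    frontier = deque([start])
    while frontier and len(states) <= max_states:
        sq = frontier.popleft()
        for a in alphabet:
            nxt = rnn_step(states[sq], a)
            nq = quantize(nxt)
            delta[(sq, a)] = nq
            if nq not in states:          # new cluster: record it and explore it
                states[nq] = nxt
                frontier.append(nq)
    return start, states, delta

start, states, delta = extract_dfa()
print(len(states), "DFA states extracted")
```

Varying `q` (or, in the paper's setting, the clustering algorithm's parameters) yields the different consistent DFAs the abstract mentions, which is why a selection heuristic among them is needed.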
SIMILAR VOLUMES
This paper describes a method of extracting diagnostic rules from trained diagnostic feedforward neural nets that are constructed to recognise different mechanical faults using automated weight and structure learning algorithms. The rule-extraction method is based on an interpretation that considers …
Although the extraction of symbolic knowledge from trained feedforward neural networks has been widely studied, research on recurrent neural networks (RNNs) has been comparatively neglected, even though they perform better in areas such as control, speech recognition, time series prediction, etc. Nowadays, …