𝔖 Bobbio Scriptorium
✦   LIBER   ✦

On rule pruning using fuzzy neural networks

✍ Scribed by Nikhil R. Pal; Tandra Pal


Publisher: Elsevier Science
Year: 1999
Tongue: English
Weight: 815 KB
Volume: 106
Category: Article
ISSN: 0165-0114


✦ Synopsis


Shann and Fu (SF) proposed a fuzzy neural network (FNN) for rule pruning in a fuzzy controller. In this paper we first analyze the FNN of SF and discuss some of its limitations. SF attempted to eliminate redundant rules by interpreting some of the connection weights as certainty factors of rules. In their strategy the connection weights are unrestricted in sign, and hence their interpretation as certainty factors introduces some inconsistencies into the scheme. We propose a modification of this FNN which eliminates these inconsistencies. Moreover, we also propose a pruning scheme which, unlike the scheme of SF, always produces a compatible rule set. The superiority of the modified FNN is established using the inverted pendulum problem.
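The article itself is not reproduced here, but the core idea the synopsis describes, constraining rule weights to be nonnegative so they can be read as certainty factors and pruning rules whose factor falls below a cutoff, can be sketched as follows. This is an illustrative assumption-laden sketch (rule strings, normalization, and the threshold are all hypothetical), not the authors' implementation:

```python
# Hypothetical sketch of certainty-factor rule pruning (not the paper's code).
# Weights are clipped to be nonnegative so they admit an interpretation as
# certainty factors; rules whose factor falls below a threshold are dropped.

def prune_rules(rules, weights, threshold=0.1):
    """Keep only rules whose normalized certainty factor is >= threshold."""
    clipped = [max(0.0, w) for w in weights]   # enforce nonnegativity
    total = sum(clipped) or 1.0
    factors = [w / total for w in clipped]     # normalize into [0, 1]
    return [r for r, cf in zip(rules, factors) if cf >= threshold]

# Toy rule base for an inverted-pendulum-style controller (illustrative only).
rules = ["IF angle is NEG THEN force is NEG",
         "IF angle is ZERO THEN force is ZERO",
         "IF angle is POS THEN force is POS",
         "IF angle is NEG THEN force is POS"]   # low-certainty, incompatible
weights = [0.9, 0.8, 0.95, 0.02]

kept = prune_rules(rules, weights, threshold=0.1)
print(len(kept))  # prints 3: the low-certainty fourth rule is pruned
```

With signed weights, a large negative weight would dominate the normalization and the "certainty" reading breaks down; clipping (or otherwise restricting the sign, as the modified FNN does) is what keeps the interpretation consistent.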


📜 SIMILAR VOLUMES


Selective descriptor pruning for QSAR/QS
✍ Joseph V. Turner; David J. Cutler; Ian Spence; Desmond J. Maddalena 📂 Article 📅 2003 🏛 John Wiley and Sons 🌐 English ⚖ 105 KB

Selection of optimal descriptors in quantitative structure–activity–property relationship (QSAR/QSPR) studies has been a perennial problem. Artificial Neural Networks (ANNs) have been used widely in QSAR/QSPR studies but less widely in descriptor selection. The current study used ANNs t

Fuzzy regression using asymmetric fuzzy
✍ Hisao Ishibuchi; Manabu Nii 📂 Article 📅 2001 🏛 Elsevier Science 🌐 English ⚖ 277 KB

In this paper, first we explain several versions of fuzzy regression methods based on linear fuzzy models with symmetric triangular fuzzy coefficients. Next we point out some limitations of such fuzzy regression methods. Then we extend the symmetric triangular fuzzy coefficients to asymmetric triangular

Stability analysis of neural net control
✍ Thomas Feuring; James J. Buckley; Wolfram-M. Lippe; Andreas Tenhagen 📂 Article 📅 1999 🏛 Elsevier Science 🌐 English ⚖ 810 KB

Neural networks can only be trained with a crisp and finite data set. Therefore, the approximation quality of a trained network is hard to verify. So, a common way in proving stability of a trained neural net controller is to demonstrate the existence of a Lyapunov function. In this article we propo