A self-organizing neural-network-based fuzzy system
by Yin Wang; Gang Rong
- Publisher
- Elsevier Science
- Year
- 1999
- Language
- English
- File size
- 716 KB
- Volume
- 103
- Category
- Article
- ISSN
- 0165-0114
Synopsis
A neural-network-based fuzzy system (NNFS) is proposed in this paper. It is a self-organizing neural network that partitions the input space flexibly, based on the distribution of the training data, in order to reduce the number of rules without loss of modeling accuracy. Associated with the NNFS is a two-phase hybrid learning algorithm that uses a nearest-neighborhood clustering scheme for structure learning and initial parameter setting, and a gradient descent method for fine-tuning the parameters of the NNFS. By combining these two methods, learning converges much faster than with the original back-propagation algorithm. Simulation results suggest that the NNFS has the merits of a simple structure, fast learning, few fuzzy logic rules and relatively high modeling accuracy. Finally, the NNFS is applied to the construction of a soft sensor for a distillation column.
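The two-phase scheme described in the synopsis can be sketched roughly as follows, assuming a Gaussian-membership, zero-order Sugeno-style rule base; the function names, the clustering radius threshold, and the choice of fine-tuning only the rule consequents are illustrative assumptions for brevity, not details taken from the paper (which also tunes the premise parameters).

```python
import numpy as np

def nearest_neighbor_clustering(X, radius):
    # Phase 1 (structure learning): create a new rule centre whenever a
    # training sample lies farther than `radius` from every existing centre.
    centers = [X[0]]
    for x in X[1:]:
        if min(np.linalg.norm(x - c) for c in centers) > radius:
            centers.append(x)
    return np.array(centers)

def firing_strengths(X, centers, widths):
    # Gaussian membership of each sample to each rule.
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * widths[None, :] ** 2))

def train(X, y, radius=0.3, lr=0.1, epochs=300):
    # Phase 1: clustering fixes the structure and initial centres/widths.
    centers = nearest_neighbor_clustering(X, radius)
    widths = np.full(len(centers), radius)
    weights = np.zeros(len(centers))

    # Phase 2: gradient descent on squared error fine-tunes the consequents
    # (only the weights here, to keep the sketch short).
    for _ in range(epochs):
        mu = firing_strengths(X, centers, widths)
        phi = mu / (mu.sum(axis=1, keepdims=True) + 1e-12)  # normalized firing
        y_hat = phi @ weights
        weights -= lr * phi.T @ (y_hat - y) / len(X)
    return centers, widths, weights

if __name__ == "__main__":
    # Toy 1-D example: approximate y = sin(x) on [0, pi].
    rng = np.random.default_rng(0)
    X = rng.uniform(0, np.pi, size=(200, 1))
    y = np.sin(X[:, 0])
    centers, widths, weights = train(X, y, radius=0.4)
    print("number of rules:", len(centers))
```

With a smaller radius the clustering phase creates more rules (finer partition of the input space); a larger radius yields fewer rules at some cost in accuracy, which mirrors the accuracy/rule-count trade-off the synopsis mentions.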
SIMILAR VOLUMES
A new model for generalized fuzzy inference neural networks (GFINN) is proposed in this paper. The networks consist of three layers: an input-output layer, an if layer, and a then layer. Each layer contains operational nodes. A GFINN can perform three representative fuzzy inference methods
The description of the attributes or characteristics of the individual parts in a feature-based clustering system is frequently vague and linguistic, and fuzzy numbers or fuzzy coding are ideally suited to represent these attributes. However, due to the vagueness of the description, the resulting fuzzy me
In this paper, we focus on the convergence of a stochastic neural process. In this process, a "physiologically plausible" Hebb's learning rule gives rise to a self-organization phenomenon. Some preliminary results concern the asymptotic behaviour of the network given that the update of neurons is eit
Although the extraction of symbolic knowledge from trained feedforward neural networks has been widely studied, research in recurrent neural networks (RNN) has been more neglected, even though they perform better in areas such as control, speech recognition, time series prediction, etc. Nowadays,
In this paper we present a self-organizing neural network model of early lexical development called DevLex. The network consists of two self-organizing maps (a growing semantic map and a growing phonological map) that are connected via associative links trained by Hebbian learning. The model capture