𝔖 Bobbio Scriptorium
✦   LIBER   ✦

Effective learning in recurrent max–min neural networks

✍ Scribed by Loo-Nin Teow; Kia-Fock Loe


Publisher
Elsevier Science
Year
1998
Tongue
English
Weight
208 KB
Volume
11
Category
Article
ISSN
0893-6080

No coin nor oath required. For personal study only.

✦ Synopsis


Max and min operations have interesting properties that facilitate the exchange of information between the symbolic and real-valued domains. As such, neural networks that employ max-min activation functions have been a subject of interest in recent years. Since max-min functions are not strictly differentiable, we propose a mathematically sound learning method based on Fourier convergence analysis of side-derivatives to derive a gradient descent technique for max-min error functions. We then propose a novel recurrent max-min neural network model that is trained to perform grammatical inference as an application example. Comparisons made between this model and recurrent sigmoidal neural networks show that our model not only performs better in terms of learning speed and generalization, but that its final weight configuration allows a deterministic finite automaton (DFA) to be extracted in a straightforward manner. In essence, we are able to demonstrate that our proposed gradient descent technique does allow max-min neural networks to learn effectively.
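The core difficulty the synopsis describes is that max and min are not strictly differentiable at points where two arguments tie, so gradients must be routed through one-sided (side-) derivatives. The sketch below is not the authors' code; it is a minimal illustration, under the common convention, of a max-of-min layer in which the gradient flows only through the winning (arg-max) input per unit, and reaches a weight only when the inner min actually selected that weight rather than the input:

```python
import numpy as np

def maxmin_forward(x, W):
    """out[i] = max_j min(W[i, j], x[j]) -- a max-min 'activation'."""
    pre = np.minimum(W, x)               # inner min of weight vs. input
    j_star = pre.argmax(axis=1)          # winning input index per unit
    out = pre[np.arange(W.shape[0]), j_star]
    return out, j_star

def maxmin_backward(x, W, grad_out, j_star):
    """Side-derivative-style gradient w.r.t. W: nonzero only at the
    winning (i, j*) entry, and only where the min picked the weight
    (W[i, j*] < x[j*]); elsewhere the weight had no local effect."""
    gW = np.zeros_like(W)
    rows = np.arange(W.shape[0])
    picks_weight = W[rows, j_star] < x[j_star]
    gW[rows[picks_weight], j_star[picks_weight]] = grad_out[picks_weight]
    return gW

x = np.array([0.2, 0.9, 0.5])
W = np.array([[0.7, 0.4, 0.6],
              [0.1, 0.8, 0.3]])
out, j_star = maxmin_forward(x, W)       # out = [0.5, 0.8]
gW = maxmin_backward(x, W, np.ones(2), j_star)
```

In this toy example only `W[1, 1]` receives a gradient: unit 0's winner was capped by the input (`x[2] = 0.5 < W[0, 2] = 0.6`), so perturbing that weight does not change the output. The paper's contribution is a principled justification (via Fourier convergence analysis of these side-derivatives) for descent rules of exactly this selective kind.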


📜 SIMILAR VOLUMES


Max–min fuzzy Hopfield neural networks a
✍ Puyin Liu 📂 Article 📅 2000 🏛 Elsevier Science 🌐 English ⚖ 117 KB

We set up a dynamical fuzzy neural network system, i.e. the so-called max-min fuzzy Hopfield network in the paper, and prove the Lyapunov stability of the equilibrium points (attractor) of the system. Also, we discuss the uniform stability of the system and show some sufficient conditions, with which t

A learning result for continuous-time re
✍ Eduardo D. Sontag 📂 Article 📅 1998 🏛 Elsevier Science 🌐 English ⚖ 117 KB

The following learning problem is considered, for continuous-time recurrent neural networks having sigmoidal activation functions. Given a "black box" representing an unknown system, measurements of output derivatives are collected, for a set of randomly generated inputs, and a network is used to ap

Learning dynamical systems by recurrent
✍ M. Kimura; R. Nakano 📂 Article 📅 1998 🏛 Elsevier Science 🌐 English ⚖ 253 KB

This paper investigates the problem of approximating a dynamical system (DS) by a recurrent neural network (RNN) as one extension of the problem of approximating orbits by an RNN. We systematically investigate how an RNN can produce a DS on the visible state space to approximate a given DS and as a

Part of speech tagging with min-max modu
✍ Qing Ma; Bao-Liang Lu; Hitoshi Isahara; Michinori Ichikawa 📂 Article 📅 2002 🏛 John Wiley and Sons 🌐 English ⚖ 281 KB

A part-of-speech (POS) tagging system using neural networks has been developed by Ma and colleagues. This system can tag unlearned data with a much higher accuracy than that of the Hidden Markov Model (HMM), which is the most popular method of POS tagging. It does so by learning a smal