𝔖 Bobbio Scriptorium
✦   LIBER   ✦

A realtime learning algorithm for recurrent neural networks

✍ Scribed by Tadasu Uchiyama; Katsunori Shimohara


Publisher
John Wiley and Sons
Year
1991
Tongue
English
Weight
438 KB
Volume
22
Category
Article
ISSN
0882-1666



📜 SIMILAR VOLUMES


A learning result for continuous-time re
✍ Eduardo D. Sontag 📂 Article 📅 1998 🏛 Elsevier Science 🌐 English ⚖ 117 KB

The following learning problem is considered for continuous-time recurrent neural networks having sigmoidal activation functions. Given a "black box" representing an unknown system, measurements of output derivatives are collected for a set of randomly generated inputs, and a network is used to ap
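The excerpt above concerns continuous-time recurrent networks with sigmoidal activations fitted to measurements of a black-box system. As an illustration only, here is a minimal sketch of such a network's dynamics in the standard form x' = -x + W·σ(x) + B·u, integrated with forward Euler; the weights are random placeholders, and this is not the paper's actual construction or learning procedure.

```python
import numpy as np

# Illustrative sketch (not the paper's construction): a small
# continuous-time recurrent network  x' = -x + W*sigmoid(x) + B*u,
# output y = C*x, integrated with forward Euler.
rng = np.random.default_rng(0)
n, m = 4, 1                        # state and input dimensions
W = rng.normal(scale=0.5, size=(n, n))   # recurrent weights (placeholder)
B = rng.normal(size=(n, m))              # input weights (placeholder)
C = rng.normal(size=(1, n))              # readout weights (placeholder)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def simulate(u, x0=None, dt=0.01, steps=500):
    """Integrate the network under a constant input u; return y(t)."""
    x = np.zeros(n) if x0 is None else x0
    ys = []
    for _ in range(steps):
        x = x + dt * (-x + W @ sigmoid(x) + B @ u)
        ys.append(float(C @ x))
    return np.array(ys)

y = simulate(np.array([1.0]))      # output trajectory for a step input
```

In the setting the abstract describes, trajectories like `y` (and their derivatives) would be compared against the black box's measured outputs to fit the network; the fitting step itself is omitted here.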

A learning algorithm for oscillatory cel
✍ C.Y. Ho; H. Kurokawa 📂 Article 📅 1999 🏛 Elsevier Science 🌐 English ⚖ 566 KB

We present a cellular-type oscillatory neural network for temporal segregation of stationary input patterns. The model comprises an array of locally connected neural oscillators, with connections limited to a 4-connected neighborhood. The architecture is reminiscent of the well-known cellular neural n
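The abstract specifies only a grid of locally coupled oscillators with a 4-connected neighborhood; the oscillator model itself is not given in the excerpt. The sketch below uses generic Kuramoto-style phase oscillators as a stand-in, with periodic (torus) boundaries as a further simplifying assumption.

```python
import numpy as np

# Hedged sketch: a grid of Kuramoto-style phase oscillators coupled to
# their 4-connected neighbours.  The actual oscillator model of the
# paper may differ; np.roll gives periodic (torus) boundaries here.
rng = np.random.default_rng(1)
rows, cols = 8, 8
theta = rng.uniform(0, 2 * np.pi, size=(rows, cols))  # oscillator phases
omega = 1.0                                           # natural frequency
K, dt = 0.5, 0.05                                     # coupling gain, step

def step(theta):
    # Sum sin(phase difference) over the 4-connected neighbourhood.
    coupling = np.zeros_like(theta)
    for shift, axis in [(1, 0), (-1, 0), (1, 1), (-1, 1)]:
        coupling += np.sin(np.roll(theta, shift, axis=axis) - theta)
    return theta + dt * (omega + K * coupling)

for _ in range(200):
    theta = step(theta)
theta = np.mod(theta, 2 * np.pi)   # wrap phases back into [0, 2*pi)
```

With positive coupling K, neighbouring phases tend to pull together over time, which is the basic mechanism such locally connected oscillator arrays exploit for segregation tasks.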

FUNCOM: A constrained learning algorithm
✍ Paris Mastorocostas; John Theocharis 📂 Article 📅 2000 🏛 Elsevier Science 🌐 English ⚖ 384 KB

A novel learning algorithm, FUNCOM (Fuzzy Neural Constrained Optimization Method), is suggested in this paper for training fuzzy neural networks. The training task is formulated as a constrained optimization problem, whose objective is twofold: (i) minimization of an error measure, leading to su
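The abstract frames training as a constrained optimization problem, but its second objective is truncated in the excerpt, so the paper's actual constraints are unknown here. The following is only a generic sketch of the constrained-training idea: minimizing squared error under a simple box constraint on the weights via projected gradient descent on a linear model.

```python
import numpy as np

# Generic constrained-training sketch (NOT the FUNCOM method itself):
# minimise mean squared error subject to |w_i| <= bound, using
# projected gradient descent on a linear model.
rng = np.random.default_rng(3)
X = rng.normal(size=(50, 4))             # synthetic inputs
w_true = np.array([1.5, -2.0, 0.5, 3.0]) # synthetic generating weights
y = X @ w_true                           # targets

w = np.zeros(4)
bound = 1.0                              # feasible set: |w_i| <= 1
lr = 0.01
for _ in range(500):
    grad = 2 * X.T @ (X @ w - y) / len(y)   # MSE gradient
    w = np.clip(w - lr * grad, -bound, bound)  # gradient step + projection
```

Because the generating weights violate the box constraint, the solution settles on the best feasible weights instead of the unconstrained optimum, which is the characteristic behaviour of constrained formulations.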

Effective learning in recurrent max–min
✍ Loo-Nin Teow; Kia-Fock Loe 📂 Article 📅 1998 🏛 Elsevier Science 🌐 English ⚖ 208 KB

Max and min operations have interesting properties that facilitate the exchange of information between the symbolic and real-valued domains. As such, neural networks that employ max-min activation functions have been a subject of interest in recent years. Since max-min functions are not strictly dif
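A common building block in networks of this kind is the fuzzy max-min composition, y_j = max_i min(w_ij, x_i), with inputs and weights interpreted as membership grades in [0, 1]. The sketch below shows that generic form; it is an illustration, not necessarily the exact architecture of the paper above.

```python
import numpy as np

# Generic max-min neuron layer (fuzzy max-min composition):
#   y_j = max over i of min(w_ij, x_i)
# Inputs and weights are membership grades in [0, 1].
def maxmin_layer(x, W):
    # x: (n,) inputs; W: (n, m) weights; returns (m,) outputs.
    return np.max(np.minimum(W, x[:, None]), axis=0)

x = np.array([0.2, 0.9, 0.5])
W = np.array([[0.7, 0.1],
              [0.4, 0.8],
              [0.6, 0.3]])
y = maxmin_layer(x, W)   # -> array([0.5, 0.8])
```

As the abstract notes, max and min are not strictly differentiable: they are piecewise linear, so the gradient exists almost everywhere, and training typically relies on subgradients or smooth approximations at the tie points.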

On a class of efficient learning algorit
✍ Frank Bärmann; Friedrich Biegler-König 📂 Article 📅 1992 🏛 Elsevier Science 🌐 English ⚖ 457 KB

The ability of a neural network with one hidden layer to accurately learn a specified learning set increases with the number of nodes in the hidden layer; if a network has exactly the same number of internal nodes as the number of examples to be learnt, it is theoretically able to learn these exampl
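The counting argument in the excerpt can be illustrated directly: when the number of hidden units equals the number of examples, the square matrix of hidden activations is generically invertible, so output weights that fit every target exactly can be solved for in one linear step. The random hidden weights below are an assumption for illustration, not the paper's algorithm.

```python
import numpy as np

# Sketch of the counting argument: n hidden units, n examples.
# The (n, n) matrix of hidden activations is generically invertible,
# so solving H v = t fits all n targets exactly.
rng = np.random.default_rng(2)
n, d = 10, 3                       # examples, input dimension
X = rng.normal(size=(n, d))        # synthetic inputs
t = rng.normal(size=n)             # arbitrary targets

W = rng.normal(size=(d, n))        # random hidden weights (assumption)
b = rng.normal(size=n)             # hidden biases
H = np.tanh(X @ W + b)             # (n, n) hidden activations
v = np.linalg.solve(H, t)          # output weights: exact interpolation
err = np.max(np.abs(H @ v - t))    # residual, ~0 up to round-off
```

The exactness holds only at the training points; nothing in this argument says the interpolating network generalizes well between them.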