𝔖 Bobbio Scriptorium
✦   LIBER   ✦

Dynamical features simulated by recurrent neural networks

โœ Scribed by F. Botelho


Publisher: Elsevier Science
Year: 1999
Tongue: English
Weight: 96 KB
Volume: 12
Category: Article
ISSN: 0893-6080


✦ Synopsis


The evolution of two-dimensional neural network models with rank-one connecting matrices and saturated linear transfer functions is dynamically equivalent to that of piecewise linear maps on an interval. It is shown that their iterative behavior ranges from highly predictable, with almost every orbit accumulating to an attracting fixed point, to chaotic, with regions containing cycles of arbitrarily large period.
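The reduction described in the synopsis can be illustrated with a small sketch (all weights below are hypothetical choices for illustration, not values from the paper): when the connection matrix has rank one, W = a bᵀ, the state after one update lies on a one-parameter family, and the scalar s = b·x evolves under a piecewise linear map of an interval.

```python
import numpy as np

def sat(x):
    """Saturated linear transfer function: identity on [-1, 1], clipped outside."""
    return np.clip(x, -1.0, 1.0)

# Hypothetical rank-one connection matrix W = a b^T
# (weights chosen for illustration; not taken from the paper).
a = np.array([0.9, 0.4])
b = np.array([1.0, 0.5])
W = np.outer(a, b)

def step(x):
    """One update of the 2-D network: x_{t+1} = sat(W x_t)."""
    return sat(W @ x)

# Since W x = a (b . x), the scalar s_t = b . x_t obeys a
# piecewise linear interval map: s_{t+1} = b . sat(s_t * a).
def interval_map(s):
    return b @ sat(s * a)

x = np.array([0.3, -0.7])
s = b @ x
for _ in range(100):
    x = step(x)
    s = interval_map(s)

# The scalar map reproduces the full network's dynamics; for these
# weights almost every orbit settles on an attracting fixed point.
print(np.allclose(b @ x, s))
```

For other weight choices (larger gain b·a) the same scalar map can exhibit the chaotic regimes the synopsis mentions; this sketch only demonstrates the equivalence and the predictable regime.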


📜 SIMILAR VOLUMES


Dynamical systems produced by recurrent…
✍ Masahiro Kimura; Ryohei Nakano 📂 Article 📅 2000 🏛 John Wiley and Sons 🌐 English ⚖ 237 KB 👁 2 views

Concerning the learning problems of recurrent neural networks (RNNs), this paper deals with the problem of approximating a dynamical system (DS) by an RNN as one extension of the problem of approximating trajectories by an RNN. In particular, we systematically investigate how an RNN can produce a DS…

Dynamical control by recurrent neural ne…
✍ Toru Kumagai; Mitsuo Wada; Ryoichi Hashimoto; Akio Utsugi 📂 Article 📅 1999 🏛 John Wiley and Sons 🌐 English ⚖ 135 KB 👁 2 views

In this study we composed a recurrent neural network learning controller and applied it to the swinging up and stabilization problem of the inverted pendulum. A recurrent neural network was trained by a genetic algorithm which had an internal copy operator or inter-individual copy operator. An appro…

Learning dynamical systems by recurrent…
✍ M. Kimura; R. Nakano 📂 Article 📅 1998 🏛 Elsevier Science 🌐 English ⚖ 253 KB

This paper investigates the problem of approximating a dynamical system (DS) by a recurrent neural network (RNN) as one extension of the problem of approximating orbits by an RNN. We systematically investigate how an RNN can produce a DS on the visible state space to approximate a given DS and as a…

Approximation of dynamical systems by co…
✍ Ken-ichi Funahashi; Yuichi Nakamura 📂 Article 📅 1993 🏛 Elsevier Science 🌐 English ⚖ 431 KB

In this paper, we prove that any finite time trajectory of a given n-dimensional dynamical system can be approximately realized by the internal state of the output units of a continuous time recurrent neural network with n output units, some hidden units, and an appropriate initial condition. The es…

Input/output linearization using dynamic…
✍ A. Delgado; C. Kambhampati; K. Warwick 📂 Article 📅 1996 🏛 Elsevier Science 🌐 English ⚖ 408 KB

This paper brings together two areas of research that have received considerable attention in recent years, namely feedback linearization and neural networks. A proposition that guarantees the Input/Output (I/O) linearization of nonlinear control affine systems with Dynamic Recurrent Neural N…