Dynamical features simulated by recurrent neural networks
By F. Botelho
- Publisher
- Elsevier Science
- Year
- 1999
- Language
- English
- Size
- 96 KB
- Volume
- 12
- Category
- Article
- ISSN
- 0893-6080
No payment or registration required. For personal study only.
Synopsis
The evolution of two-dimensional neural network models with rank one connecting matrices and saturated linear transfer functions is dynamically equivalent to that of piecewise linear maps on an interval. It is shown that their iterative behavior ranges from being highly predictable, where almost every orbit accumulates to an attracting fixed point, to the existence of chaotic regions with cycles of arbitrarily large period.
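The reduction described in the synopsis can be sketched concretely. For a rank-one connection matrix W = a bᵀ and a saturated linear transfer function, the two-dimensional update x ↦ σ(W x) collapses onto the scalar s = b · x, which evolves under a piecewise linear map on an interval. The matrix entries below are illustrative assumptions, not parameters from the paper:

```python
import numpy as np

def sat(x):
    """Saturated linear transfer function: identity clipped to [0, 1]."""
    return np.clip(x, 0.0, 1.0)

# Hypothetical rank-one connecting matrix W = a b^T (example values only)
a = np.array([1.5, -0.8])
b = np.array([1.0, 0.6])
W = np.outer(a, b)

def network_step(x):
    """One iteration of the 2-D network x -> sat(W x)."""
    return sat(W @ x)

# Since W x = a * (b . x), the 2-D dynamics reduce to the scalar
# s = b . x, which iterates under a piecewise linear interval map:
def interval_map(s):
    return b @ sat(a * s)

# Verify the dynamical equivalence along an orbit
x = np.array([0.3, 0.7])
s = b @ x
for _ in range(20):
    x = network_step(x)
    s = interval_map(s)
assert abs(b @ x - s) < 1e-6
```

Depending on the choice of a and b, the induced interval map can have an attracting fixed point capturing almost every orbit, or exhibit chaotic regions with cycles of arbitrarily large period, matching the range of behavior stated above.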
SIMILAR ARTICLES
In this study we constructed a recurrent neural network learning controller and applied it to the swing-up and stabilization problem of the inverted pendulum. A recurrent neural network was trained by a genetic algorithm which had an internal copy operator or inter-individual copy operator. An appro…
This paper investigates the problem of approximating a dynamical system (DS) by a recurrent neural network (RNN) as one extension of the problem of approximating orbits by an RNN. We systematically investigate how an RNN can produce a DS on the visible state space to approximate a given DS and as a…
In this paper, we prove that any finite-time trajectory of a given n-dimensional dynamical system can be approximately realized by the internal state of the output units of a continuous-time recurrent neural network with n output units, some hidden units, and an appropriate initial condition. The es…
This paper brings together two areas of research that have received considerable attention in recent years, namely feedback linearization and neural networks. A proposition that guarantees the Input/Output (I/O) linearization of nonlinear control affine systems with Dynamic Recurrent Neural N…