In this paper, we prove that any finite-time trajectory of a given n-dimensional dynamical system can be approximately realized by the internal state of the output units of a continuous-time recurrent neural network with n output units, some hidden units, and an appropriate initial condition. …
Dynamical approximation by recurrent neural networks
Written by Max Garzon; Fernanda Botelho
- Book ID
- 114297204
- Publisher
- Elsevier Science
- Year
- 1999
- Language
- English
- File size
- 291 KB
- Volume
- 29
- Category
- Article
- ISSN
- 0925-2312
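The abstract above concerns continuous-time recurrent neural networks (CTRNNs). As a minimal illustrative sketch (not the paper's construction), the standard CTRNN dynamics tau * dx/dt = -x + W @ sigmoid(x) + I can be simulated with an Euler step; the weights, time constant, and step size below are assumed values chosen only for demonstration:

```python
import numpy as np

def ctrnn_step(x, W, I, tau, dt):
    """One Euler step of the CTRNN dynamics tau * dx/dt = -x + W @ sigmoid(x) + I."""
    sigma = 1.0 / (1.0 + np.exp(-x))
    return x + (dt / tau) * (-x + W @ sigma + I)

# Illustrative 3-unit network with fixed random weights (assumed values).
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 3))
I = np.zeros(3)

x = np.zeros(3)              # initial condition
trajectory = [x.copy()]
for _ in range(1000):
    x = ctrnn_step(x, W, I, tau=1.0, dt=0.01)
    trajectory.append(x.copy())
```

Because the sigmoid is bounded and the leak term -x is stabilizing, the state settles near a fixed point of x = W @ sigmoid(x) for these small random weights; richer trajectories require tuned weights and inputs, which is precisely what the paper's approximation result guarantees can be found.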
SIMILAR VOLUMES
The evolution of two-dimensional neural network models with rank-one connecting matrices and saturated linear transfer functions is dynamically equivalent to that of piecewise linear maps on an interval. It is shown that their iterative behavior ranges from being highly predictable, where almost every…
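The tent map is a standard example of a piecewise linear map on an interval (used here purely for illustration; it is not the specific map derived in that paper). Iterating it shows the kind of one-dimensional dynamics such network models reduce to:

```python
def tent_map(x, mu=1.9):
    """Piecewise linear map on [0, 1]: slope +mu on [0, 0.5), slope -mu on [0.5, 1]."""
    return mu * x if x < 0.5 else mu * (1.0 - x)

# Iterate from an arbitrary initial point; for mu close to 2 the orbit is chaotic,
# while for small mu every orbit converges to the fixed point at 0.
x = 0.2
orbit = []
for _ in range(50):
    orbit.append(x)
    x = tent_map(x)
```

Varying the slope parameter mu sweeps the map between the predictable and chaotic regimes, mirroring the range of iterative behavior described in the abstract.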
Concerning the learning problems of recurrent neural networks (RNNs), this paper deals with the problem of approximating a dynamical system (DS) by an RNN as one extension of the problem of approximating trajectories by an RNN. In particular, we systematically investigate how an RNN can produce a DS…
This paper investigates the problem of approximating a dynamical system (DS) by a recurrent neural network (RNN) as one extension of the problem of approximating orbits by an RNN. We systematically investigate how an RNN can produce a DS on the visible state space to approximate a given DS and, as a…
In this study we constructed a recurrent neural network learning controller and applied it to the swing-up and stabilization problem of the inverted pendulum. The recurrent neural network was trained by a genetic algorithm which had an internal copy operator or inter-individual copy operator. An appropriate…