Learning dynamical systems by recurrent neural networks from orbits
By M. Kimura; R. Nakano
- Publisher
- Elsevier Science
- Year
- 1998
- Language
- English
- File size
- 253 KB
- Volume
- 11
- Category
- Article
- ISSN
- 0893-6080
Synopsis
This paper investigates the problem of approximating a dynamical system (DS) by a recurrent neural network (RNN), as an extension of the problem of approximating orbits by an RNN. We systematically investigate how an RNN can produce a DS on the visible state space to approximate a given DS. As a first step toward the generalization problem for RNNs, we also investigate whether or not a DS produced by some RNN can be identified from several observed orbits of that DS. First, it is proved that RNNs without hidden units uniquely produce a certain class of DSs. Next, neural dynamical systems (NDSs) are proposed as the DSs produced by RNNs with hidden units. Moreover, affine neural dynamical systems (A-NDSs) are provided as nontrivial examples of NDSs, and it is proved that any DS can be finitely approximated by an A-NDS to any precision. We propose the A-NDS as a DS that an RNN can actually produce on the visible state space to approximate the target DS. For the generalization problem of RNNs, a geometric criterion is derived in the case of RNNs without hidden units. This theory is also extended to RNNs with hidden units for learning A-NDSs.
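The core idea in the synopsis, that an RNN with hidden units induces a DS whose visible coordinates approximate a target DS, can be illustrated with a minimal sketch. The state-update form, weight values, and dimensions below are illustrative assumptions, not the paper's construction: we iterate a discrete-time map x_{t+1} = tanh(W x_t + b) on an augmented state space and read off the first `n_vis` coordinates as the visible orbit.

```python
import numpy as np

# Hypothetical sketch of an RNN iterated as a discrete-time dynamical system.
# The visible state is the first n_vis coordinates; hidden units enlarge the
# state space, loosely in the spirit of the paper's neural dynamical systems.
rng = np.random.default_rng(0)
n_vis, n_hid = 2, 3
n = n_vis + n_hid

W = rng.normal(scale=0.5, size=(n, n))  # recurrent weight matrix (assumed)
b = rng.normal(scale=0.1, size=n)       # bias vector (assumed)

def step(x):
    """One step of the induced map: x_{t+1} = tanh(W x_t + b)."""
    return np.tanh(W @ x + b)

# Iterate from an initial condition and collect the visible orbit.
x = np.zeros(n)
orbit = []
for _ in range(50):
    x = step(x)
    orbit.append(x[:n_vis].copy())
```

Because tanh maps into (-1, 1), every visible state stays in the open unit cube; identifying such a map from several observed orbits is the generalization question the paper studies.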
SIMILAR VOLUMES
In this paper, we prove that any finite time trajectory of a given n-dimensional dynamical system can be approximately realized by the internal state of the output units of a continuous time recurrent neural network with n output units, some hidden units, and an appropriate initial condition. …
The evolution of two-dimensional neural network models with rank one connecting matrices and saturated linear transfer functions is dynamically equivalent to that of piecewise linear maps on an interval. It is shown that their iterative behavior ranges from being highly predictable, where almost eve…
In this study we composed a recurrent neural network learning controller and applied it to the swinging up and stabilization problem of the inverted pendulum. A recurrent neural network was trained by a genetic algorithm which had an internal copy operator or inter-individual copy operator. An appro…