Dynamical systems produced by recurrent neural networks
By Masahiro Kimura; Ryohei Nakano
- Publisher
- John Wiley and Sons
- Year
- 2000
- Language
- English
- Size
- 237 KB
- Volume
- 31
- Category
- Article
- ISSN
- 0882-1666
Synopsis
Concerning the learning problems of recurrent neural networks (RNNs), this paper addresses the problem of approximating a dynamical system (DS) by an RNN, as an extension of the problem of approximating trajectories by an RNN. In particular, we systematically investigate how an RNN can produce a DS on the visible state space that approximates a target DS. First, it is proved that RNNs without hidden units uniquely produce a certain class of DSs. Next, a neural dynamical system (NDS) is proposed as the class of DSs that an RNN with hidden units can produce on the visible state space, and affine neural dynamical systems (A-NDSs) are constructed as concrete examples of NDSs. Moreover, we prove that any DS on a Euclidean space can be finitely approximated by some A-NDS to any precision, and we propose adopting an A-NDS as the DS that an RNN with hidden units produces to approximate a target DS.
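The core construction described in the synopsis can be illustrated with a minimal sketch: a discrete-time RNN whose full state is split into visible and hidden components, with the induced DS on the visible state space read off by projection. The update rule, weights, and dimensions below are illustrative assumptions, not the paper's actual construction.

```python
import numpy as np

# Assumed notation: the RNN state s = (x, h) concatenates visible units x
# and hidden units h, and evolves by s' = tanh(W s + b). The visible
# component traces out a dynamical system on the visible state space,
# in the spirit of an affine neural dynamical system (A-NDS).

rng = np.random.default_rng(0)

n_visible, n_hidden = 2, 3
n = n_visible + n_hidden

W = rng.normal(scale=0.5, size=(n, n))  # recurrent weight matrix (illustrative)
b = rng.normal(scale=0.1, size=n)       # bias vector (illustrative)

def step(state):
    """One iteration of the RNN-induced dynamical system."""
    return np.tanh(W @ state + b)

# Iterate from a random initial state and record the visible trajectory.
state = rng.normal(size=n)
visible_trajectory = []
for _ in range(50):
    state = step(state)
    visible_trajectory.append(state[:n_visible].copy())

visible_trajectory = np.array(visible_trajectory)
print(visible_trajectory.shape)  # (50, 2)
```

Because the update passes through tanh, every visible coordinate stays in (-1, 1); the paper's approximation result concerns how well such induced systems can match a target DS on a Euclidean space.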
SIMILAR VOLUMES
In this study we composed a recurrent neural network learning controller and applied it to the swinging up and stabilization problem of the inverted pendulum. A recurrent neural network was trained by a genetic algorithm which had an internal copy operator or inter-individual copy operator. An appro
A new paradigm called self-recurrent neural network (SRNN) is proposed. Two SRNNs are utilized in a control system, one as an emulator and the other as a controller. To guarantee convergence and for faster learning, an approach using adaptive learning rate is developed by Lyapunov function. Finally,