A learning result for continuous-time recurrent neural networks
By Eduardo D. Sontag
- Publisher
- Elsevier Science
- Year
- 1998
- Language
- English
- File size
- 117 KB
- Volume
- 34
- Category
- Article
- ISSN
- 0167-6911
Synopsis
The following learning problem is considered, for continuous-time recurrent neural networks having sigmoidal activation functions. Given a "black box" representing an unknown system, measurements of output derivatives are collected, for a set of randomly generated inputs, and a network is used to approximate the observed behavior. It is shown that the number of inputs needed for reliable generalization (the sample complexity of the learning problem) is upper bounded by an expression that grows polynomially with the dimension of the network and logarithmically with the number of output derivatives being matched.
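To make the setup concrete, here is a minimal sketch of a continuous-time recurrent network of the kind described, assuming the common form x' = σ(Ax + Bu) with output y = Cx and σ = tanh as the sigmoidal activation (these specific choices, the Euler integrator, and all variable names are illustrative assumptions, not details taken from the paper). It simulates the network's response to one randomly generated input trajectory, the kind of input-output data the learning problem collects.

```python
import numpy as np

def ctrnn_step(x, u, A, B, dt=0.01):
    # One forward-Euler step of the continuous-time RNN  x' = sigma(A x + B u),
    # with sigma = tanh (an assumed sigmoidal activation).
    return x + dt * np.tanh(A @ x + B @ u)

def simulate(x0, us, A, B, C, dt=0.01):
    # Drive the network with an input sequence and record outputs y = C x.
    x = x0.copy()
    ys = []
    for u in us:
        x = ctrnn_step(x, u, A, B, dt)
        ys.append(C @ x)
    return np.array(ys)

rng = np.random.default_rng(0)
n, m, p = 4, 2, 1                 # state, input, output dimensions (illustrative)
A = rng.normal(size=(n, n))       # recurrent weights
B = rng.normal(size=(n, m))       # input weights
C = rng.normal(size=(p, n))       # readout weights
us = rng.normal(size=(50, m))     # one randomly generated input trajectory
ys = simulate(np.zeros(n), us, A, B, C)
print(ys.shape)                   # (50, 1)
```

In the learning problem, many such input trajectories are drawn at random, the black box's output derivatives are measured along each, and a network of this form is fit to the observations; the cited bound controls how many trajectories suffice for reliable generalization.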
Similar volumes
Abstract: Various types of neural networks have been proposed in previous papers for applications in hydrological events. However, most of these applied neural networks are classified as static neural networks, which are based on batch processes that update action only after the whole training da
A multilayer recurrent neural network is proposed for solving continuous-time algebraic matrix Riccati equations in real time. The proposed recurrent neural network consists of four bidirectionally connected layers. Each layer consists of an array of neurons. The proposed recurrent neural network is