Learning persistent dynamics with neural networks
Authors: H.A. Ceccatto; H.D. Navone; H. Waelbroeck
- Publisher
- Elsevier Science
- Year
- 1998
- Language
- English
- File size
- 296 KB
- Volume
- 11
- Category
- Article
- ISSN
- 0893-6080
Synopsis
Persistence is one of the most common characteristics of real-world time series. In this work we investigate the process of learning persistent dynamics by neural networks. We show that for chaotic time series the network can get stuck for long training periods in a trivial minimum of the error function related to the long-term autocorrelation in the series. Remarkably, in these cases the transition to the trained phase is quite abrupt. For noisy dynamics the training process is smooth. We also consider the effectiveness of two of the most frequently used decorrelation methods in avoiding the problems related to persistence. Copyright 1997 Elsevier Science Ltd.
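The "trivial minimum" described in the synopsis can be illustrated numerically: in a strongly autocorrelated series, simply predicting x[t+1] ≈ x[t] already yields a low error, which is why training can stall there. A minimal sketch below generates a persistent AR(1) series and shows that first-differencing (one widely used decorrelation method; the synopsis does not name the two methods the paper actually studies, so this choice is an assumption) removes most of the lag-1 autocorrelation:

```python
# Sketch: persistence and decorrelation by first-differencing.
# Assumption: an AR(1) process with coefficient 0.95 stands in for a
# generic "persistent" real-world series; the paper's own data and
# decorrelation methods are not specified in this listing.
import random

random.seed(0)

# Persistent AR(1) series: x[t+1] = 0.95 * x[t] + Gaussian noise
x = [0.0]
for _ in range(5000):
    x.append(0.95 * x[-1] + random.gauss(0, 1))

def lag1_autocorr(s):
    """Sample lag-1 autocorrelation of a sequence."""
    n = len(s)
    mean = sum(s) / n
    var = sum((v - mean) ** 2 for v in s)
    cov = sum((s[i] - mean) * (s[i + 1] - mean) for i in range(n - 1))
    return cov / var

# First differences largely decorrelate the series.
diffs = [b - a for a, b in zip(x, x[1:])]

print(f"lag-1 autocorrelation, raw series:  {lag1_autocorr(x):.2f}")
print(f"lag-1 autocorrelation, differenced: {lag1_autocorr(diffs):.2f}")
```

The raw series keeps an autocorrelation close to the AR coefficient, while the differenced series sits near zero, so a naive persistence predictor no longer achieves low error on it.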
SIMILAR VOLUMES
The template coefficients (weights) of a CNN which will give a desired performance can either be found by design or by learning. 'By design' means that the desired function to be performed can be translated into a set of local dynamic rules, while 'by learning' is based exclusively on pairs of inpu…
The paper presents the universal approach to the determination of the sensitivity functions for dynamic neural networks and its application in learning algorithms of adaptive networks. The method is based on the application of signal flow graph and specially defined graph adjoint to it. The method i…
This paper investigates the problem of approximating a dynamical system (DS) by a recurrent neural network (RNN) as one extension of the problem of approximating orbits by an RNN. We systematically investigate how an RNN can produce a DS on the visible state space to approximate a given DS and as a…