A convergent generator of neural networks
By Pierre Courrieu
- Publisher
- Elsevier Science
- Year
- 1993
- Language
- English
- Size
- 822 KB
- Volume
- 6
- Category
- Article
- ISSN
- 0893-6080
Synopsis
This article presents a new algorithm for the automatic generation of neural architectures and for supervised learning. Given a set of examples, the algorithm generates an architecture and synaptic weights that are adapted to the sampled problem. The algorithm supports any type and amount of numerical input and output. The learning/generation process is guaranteed to converge in a strictly finite number of steps. With the exception of certain a priori nonoptimal architectures, all architectures without internal loops are potentially accessible, and the algorithm tends to generate architectures of minimal complexity, giving it high generalization performance in the learned domain.
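The abstract describes a constructive procedure: grow an architecture until the training examples are fit, with guaranteed termination. As a rough, hypothetical illustration of that general idea only (this is not Courrieu's algorithm; the unit type, growth rule, and stopping tolerance below are all assumptions), the sketch adds random tanh hidden units one at a time and refits a least-squares output layer, stopping once the training error falls below a tolerance. Because the unit budget is bounded, the loop finishes in a finite number of steps.

```python
import numpy as np

def grow_network(X, Y, tol=1e-3, max_units=50, seed=0):
    """Illustrative constructive learner (NOT the article's method):
    add random tanh hidden units one at a time, refit the linear
    output layer by least squares, stop when the mean squared
    training error drops below `tol` or `max_units` is reached."""
    rng = np.random.default_rng(seed)
    W = np.empty((0, X.shape[1]))      # hidden-unit input weights
    b = np.empty(0)                    # hidden-unit biases
    V, err = None, np.inf
    for _ in range(max_units):
        # Append one new random hidden unit.
        W = np.vstack([W, rng.normal(size=(1, X.shape[1]))])
        b = np.append(b, rng.normal())
        H = np.tanh(X @ W.T + b)       # hidden activations, shape (n, k)
        # Optimal linear output weights for the current hidden layer.
        V, *_ = np.linalg.lstsq(H, Y, rcond=None)
        err = np.mean((H @ V - Y) ** 2)
        if err < tol:
            break                      # training set fit: stop growing
    return W, b, V, err
```

On a small problem such as XOR, a handful of units typically suffices to drive the training error below the tolerance, after which growth stops.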
Similar articles
We consider a class of systems of differential equations in Nn which exhibits convergent dynamics. We find a Lyapunov function and show that every bounded trajectory converges to the set of equilibria. Our result generalizes the results of Cohen and Grossberg (1983) for convergent neural networks.
In this paper, we focus on the convergence of a stochastic neural process. In this process, a "physiologically plausible" Hebb's learning rule gives rise to a self-organization phenomenon. Some preliminary results concern the asymptotic behaviour of the network given that the update of neurons is eit…
The interest in neural networks has grown concomitantly with the increased awareness of the ubiquity of non-linear systems. The main focus on improvements in this field has been on the development of different algorithms that either speed up the convergence rate and/or avoid entrapment in local min…