𝔖 Bobbio Scriptorium
✦   LIBER   ✦

A convergent generator of neural networks

✍ Scribed by Pierre Courrieu


Publisher
Elsevier Science
Year
1993
Tongue
English
Weight
822 KB
Volume
6
Category
Article
ISSN
0893-6080

No coin nor oath required. For personal study only.

✦ Synopsis


This article presents a new algorithm for the automatic generation of neural architectures and for supervised learning. Given a set of examples, the algorithm generates an architecture and synaptic weights adapted to the sampled problem. The algorithm supports any type and amount of numerical input and output. The learning/generation process is guaranteed to converge in a strictly finite number of steps. With the exception of certain a priori nonoptimal architectures, all architectures without internal loops are potentially accessible, and the algorithm tends to generate architectures of minimal complexity, giving it high generalization performance in the learned domain.
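The key property claimed in the synopsis is that network generation terminates in a strictly finite number of steps. As a hedged illustration only (this is not Courrieu's algorithm, whose details are in the article), the sketch below shows one generic way a constructive generator can guarantee this: add at most one Gaussian hidden unit per training example, centred on the currently worst-fit example, and refit the linear output weights by least squares after each addition. The function names and the choice of Gaussian units are assumptions made for the example.

```python
import numpy as np

def generate_network(X, Y, tol=1e-6):
    """Constructive generator sketch (illustrative, not Courrieu's method).
    Adds Gaussian hidden units centred on the worst-fit training example,
    refitting linear output weights each time. At most one unit is added
    per example, so the loop terminates in a strictly finite number of steps."""
    centers = []
    W = np.zeros((0, Y.shape[1]))
    residual = Y.copy()
    for _ in range(len(X)):
        errs = np.abs(residual).sum(axis=1)
        worst = int(np.argmax(errs))
        if errs[worst] <= tol:          # all examples fit: stop early
            break
        centers.append(X[worst])        # new hidden unit at the worst example
        # Hidden-layer activations: Gaussian of squared distance to each centre
        H = np.exp(-np.array([[np.sum((x - c) ** 2) for c in centers] for x in X]))
        # Output weights by linear least squares over the current hidden layer
        W, *_ = np.linalg.lstsq(H, Y, rcond=None)
        residual = Y - H @ W
    return np.array(centers), W

def predict(X, centers, W):
    """Evaluate the generated network on new inputs."""
    H = np.exp(-np.array([[np.sum((x - c) ** 2) for c in centers] for x in X]))
    return H @ W
```

On the XOR problem, for instance, the loop adds at most four units (one per example) before the least-squares fit interpolates the targets, which mirrors the finite-convergence guarantee described above.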


πŸ“œ SIMILAR VOLUMES


A class of convergent neural network dyn
✍ Bernold Fiedler; TomΓ‘Ε‘ Gedeon πŸ“‚ Article πŸ“… 1998 πŸ› Elsevier Science 🌐 English βš– 340 KB

We consider a class of systems of differential equations in ℝⁿ which exhibits convergent dynamics. We find a Lyapunov function and show that every bounded trajectory converges to the set of equilibria. Our result generalizes the results of Cohen and Grossberg (1983) for convergent neural networks.

Convergence of a self-organizing stochas
✍ Olivier Francois; Jacques Demongeot; Thierry Herve πŸ“‚ Article πŸ“… 1992 πŸ› Elsevier Science 🌐 English βš– 451 KB

In this paper, we focus on the convergence of a stochastic neural process. In this process, a "physiologically plausible" Hebb's learning rule gives rise to a self-organization phenomenon. Some preliminary results concern the asymptotic behaviour of the network given that the update of neurons is eit

Development of a generalized neural netw
✍ Greger G Andersson; Peter Kaufmann πŸ“‚ Article πŸ“… 2000 πŸ› Elsevier Science 🌐 English βš– 162 KB

The interest in neural networks has grown concomitantly with the increased awareness of the ubiquity of non-linear systems. The main focus on improvements in this field has been on the development of different algorithms that either speed up the convergence rate and/or avoid entrapment in local min