𝔖 Bobbio Scriptorium
✦   LIBER   ✦

On a class of efficient learning algorithms for neural networks

✍ Scribed by Frank Bärmann; Friedrich Biegler-König


Publisher: Elsevier Science
Year: 1992
Tongue: English
Weight: 457 KB
Volume: 5
Category: Article
ISSN: 0893-6080


✦ Synopsis


The ability of a neural network with one hidden layer to accurately learn a specified learning set increases with the number of nodes in the hidden layer; if a network has exactly as many internal nodes as there are examples to be learnt, it is theoretically able to learn these examples exactly. If, however, the generalized delta rule (or back propagation) is used as the learning algorithm in numerical experiments, a network's learning aptitude generally declines with an increasing number of internal nodes. Iterating the solvability condition for accurate learning, instead of minimizing the total error, yields learning algorithms in which learning aptitude increases with the number of internal nodes. At the same time, these methods allow further nodes to be added dynamically in a particularly simple manner. A numerical implementation showed that, whenever the solvability condition held, the algorithm learnt the learning set to the limits of computer accuracy in all cases tested and, in particular, did not get caught in local minima of the error function. Furthermore, its convergence speed is considerably higher than that of back propagation.
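
To make the solvability condition concrete, here is a minimal numerical sketch in Python. It is an illustration only, not the authors' algorithm: the random hidden weights, the sigmoid activation, and the toy XOR learning set are all assumptions of this sketch. It shows that when a one-hidden-layer network has exactly as many hidden nodes as learning examples, the hidden activation matrix is square, and if it is non-singular the output weights follow from a single linear solve that reproduces the learning set to machine accuracy:

# Sketch: exact learning via the solvability condition, assuming a
# randomly initialized hidden layer (not the paper's published method).
import numpy as np

rng = np.random.default_rng(0)

# Toy learning set: P examples, n inputs, m targets (XOR-like data).
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])   # P x n
T = np.array([[0.], [1.], [1.], [0.]])                   # P x m
P, n = X.shape

# One hidden layer with exactly P internal nodes; the weights are
# chosen at random here purely for illustration.
W_hidden = rng.standard_normal((n, P))
b_hidden = rng.standard_normal(P)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# P x P matrix of hidden activations, one row per example.
H = sigmoid(X @ W_hidden + b_hidden)

# Solvability condition: H must be non-singular. If it holds, the
# output weights solving H @ W_out = T fit the learning set exactly
# (up to floating-point accuracy), with no local minima involved.
if np.linalg.matrix_rank(H) == P:
    W_out = np.linalg.solve(H, T)
else:
    # Fall back to least squares if the condition fails.
    W_out, *_ = np.linalg.lstsq(H, T, rcond=None)

print("max absolute training error:", np.abs(H @ W_out - T).max())

Because the weights come from a direct linear solve rather than gradient descent on a total-error function, there is no error surface to get trapped in, which matches the behaviour reported in the synopsis. In this linear-algebra picture, appending a hidden node merely adds one column to the activation matrix, which suggests why the paper's methods can add further nodes dynamically in a simple manner.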


📜 SIMILAR VOLUMES


A learning algorithm for oscillatory cel
✍ C.Y. Ho; H. Kurokawa 📂 Article 📅 1999 🏛 Elsevier Science 🌐 English ⚖ 566 KB

We present a cellular type oscillatory neural network for temporal segregation of stationary input patterns. The model comprises an array of locally connected neural oscillators with connections limited to a 4-connected neighborhood. The architecture is reminiscent of the well-known cellular neural n

FUNCOM: A constrained learning algorithm
✍ Paris Mastorocostas; John Theocharis 📂 Article 📅 2000 🏛 Elsevier Science 🌐 English ⚖ 384 KB

A novel learning algorithm, the FUNCOM (Fuzzy Neural Constrained Optimization Method), is suggested in this paper for training fuzzy neural networks. The training task is formulated as a constrained optimization problem, whose objective is twofold: (i) minimization of an error measure, leading to su

A learning algorithm for multilayered ne
✍ Friedrich Biegler-König; Frank Bärmann 📂 Article 📅 1993 🏛 Elsevier Science 🌐 English ⚖ 365 KB

An algorithm for the training of multilayered neural networks solely based on linear algebraic methods is presented. Its convergence speed up to a certain limit of learning accuracy is orders of magnitude better than that of the classical back propagation. Furthermore, its learning aptitude increa

A genetic algorithm with deterministic m
✍ Minoru Fukumi; Norio Akamatsu 📂 Article 📅 1998 🏛 John Wiley and Sons 🌐 English ⚖ 189 KB 👁 2 views

This paper presents a method for designing neural networks using a genetic algorithm (GA) with deterministic mutation (DM) based on learning. The GA presented in this paper has a large framework including DM, which is performed on the basis of the results from neural network learning. It can achieve

Efficient Partition of Learning Data Set
✍ Igor V. Tetko; Alessandro E.P. Villa 📂 Article 📅 1997 🏛 Elsevier Science 🌐 English ⚖ 641 KB

This study investigates the emerging possibilities of combining unsupervised and supervised learning in neural network ensembles. Such a strategy is used to get an efficient partition of a noisy input data set in order to focus the training of neural networks on the most complex and informative domain

An efficient concurrent implementation o
✍ R. Andonie; A. T. Chronopoulos; D. Grosu; H. Galmeanu 📂 Article 📅 2006 🏛 John Wiley and Sons 🌐 English ⚖ 202 KB

The focus of this study is how we can efficiently implement the neural network backpropagation algorithm on a network of computers (NOC) for concurrent execution. We assume a distributed system with heterogeneous computers and that the neural network is replicated on each computer. We propose an arc