𝔖 Bobbio Scriptorium
✦   LIBER   ✦

Training neural networks with heterogeneous data

โœ Scribed by John A. Drakopoulos; Ahmad Abdulkader


Book ID
103853745
Publisher
Elsevier Science
Year
2005
Tongue
English
Weight
140 KB
Volume
18
Category
Article
ISSN
0893-6080

No coin nor oath required. For personal study only.

✦ Synopsis


Data pruning and ordered training are two methods that result from a small theory attempting to formalize neural network training with heterogeneous data. Data pruning is a simple process that attempts to remove noisy data. Ordered training is a more complex method that partitions the data into a number of categories and assigns training times to them, assuming that data size and training time are polynomially related. Both methods derive from a set of premises that form the 'axiomatic' basis of our theory. Both methods have been applied to a time-delay neural network, which is one of the main learners in Microsoft's Tablet PC handwriting recognition system. Their effect is presented in this paper along with a rough estimate of their effect on the overall multi-learner system. The handwriting data and the chosen language are Italian.
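The two ideas in the synopsis can be sketched in a few lines. This is an illustrative sketch only, not the paper's actual algorithm: the noise-score function, the pruning threshold, and the polynomial exponent `p` are all assumptions introduced here for clarity.

```python
def prune(samples, noise_score, threshold=0.9):
    """Data pruning: drop samples whose estimated noise score
    exceeds a threshold (scoring function is an assumption here)."""
    return [s for s in samples if noise_score(s) <= threshold]

def ordered_training_times(category_sizes, total_time, p=0.5):
    """Ordered training: split a fixed training-time budget across
    data categories, assuming training time grows polynomially
    (size ** p) with category size, as the synopsis describes."""
    weights = [n ** p for n in category_sizes]
    total = sum(weights)
    return [total_time * w / total for w in weights]

# Example: a category with 4x the data gets only 2x the time when p = 0.5.
times = ordered_training_times([100, 400], total_time=100.0, p=0.5)
```

With `p = 1` the allocation is simply proportional to category size; sublinear exponents dampen the dominance of large categories.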


📜 SIMILAR VOLUMES


Data qualification: Logic analysis appli
โœ Bryan P. Bergeron; Richard S. Shiffman; Ronald L. Rouse ๐Ÿ“‚ Article ๐Ÿ“… 1994 ๐Ÿ› Elsevier Science ๐ŸŒ English โš– 704 KB

Abstract: For neural networks to develop good internal representations for pattern mapping, noise in the training set data must be controlled. Because of the many difficulties associated with manually validating training data, we have focused on using decision table techniques as a practical, domain-

Dynamic neural networks with data assimi
โœ Henk van den Boogaard; Arthur Mynett ๐Ÿ“‚ Article ๐Ÿ“… 2004 ๐Ÿ› John Wiley and Sons ๐ŸŒ English โš– 142 KB

Neural networks (NNs) are often used as black-box techniques for the modelling of system relations. Standard NNs are static models, whereas in practice one often has to deal with dynamic systems or processes. In such cases, dynamic neural networks (DNNs) may be better suited. We will argue that the

Efficient Partition of Learning Data Set
โœ Igor V. Tetko; Alessandro E.P. Villa ๐Ÿ“‚ Article ๐Ÿ“… 1997 ๐Ÿ› Elsevier Science ๐ŸŒ English โš– 641 KB

This study investigates the emerging possibilities of combining unsupervised and supervised learning in neural network ensembles. Such a strategy is used to get an efficient partition of a noisy input data set in order to focus the training of neural networks on the most complex and informative domain