Optimising memory usage in n-tuple neural networks
- Authors
- R.J. Mitchell; J.M. Bishop; P.R. Minchinton
- Publisher
- Elsevier Science
- Year
- 1996
- Language
- English
- File size
- 1003 KB
- Volume
- 40
- Category
- Article
- ISSN
- 0378-4754
Synopsis
The use of n-tuple or weightless neural networks as pattern recognition devices is well known (Aleksander and Stonham, 1979). They have some significant advantages over the more common, biologically plausible networks such as multi-layer perceptrons: they can be implemented easily in hardware, since they are built from standard random-access memories, and they have been applied to a variety of tasks, the most popular being real-time pattern recognition.
In operation, a series of images of an object is shown to the network, each image being suitably processed and effectively stored in a memory called a discriminator. When a further image is presented to the system, it is processed in the same way and the system reports whether it recognises the image, that is, whether the image is sufficiently similar to one already taught.
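To make this train/recall cycle concrete, here is a minimal Python sketch of a single discriminator; the random n-tuple mapping, the set-based RAM nodes, and all names and parameters are illustrative assumptions rather than details taken from the paper.

```python
import random

class Discriminator:
    """A single n-tuple discriminator over binary input patterns."""

    def __init__(self, input_size, n, num_tuples, seed=0):
        rng = random.Random(seed)
        # Each tuple is a fixed random choice of n input bit positions.
        self.tuples = [rng.sample(range(input_size), n)
                       for _ in range(num_tuples)]
        # One RAM node per tuple: the set of addresses written in training.
        self.rams = [set() for _ in range(num_tuples)]

    def _addresses(self, pattern):
        # Assemble an n-bit RAM address from each tuple's sampled bits.
        for positions, ram in zip(self.tuples, self.rams):
            addr = 0
            for p in positions:
                addr = (addr << 1) | pattern[p]
            yield addr, ram

    def train(self, pattern):
        # Teaching an image sets the addressed location in every RAM.
        for addr, ram in self._addresses(pattern):
            ram.add(addr)

    def score(self, pattern):
        # Recall counts the RAMs that respond: a similarity measure
        # between the presented image and the taught ones.
        return sum(addr in ram for addr, ram in self._addresses(pattern))
```

To separate several classes, one such discriminator is trained per class and an unknown image is assigned to the class whose discriminator scores highest.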
If the system is to recognise and discriminate between m objects, it must contain m discriminators, and this can require a great deal of memory.
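As a rough illustration (the figures are assumed, not taken from the paper): a 512 x 512 binary image sampled in 8-tuples needs 262144/8 = 32768 RAM nodes of 2^8 = 256 bits each, about 1 MiB per discriminator, so distinguishing 50 objects in the one-discriminator-per-object scheme costs some 50 MiB of discriminator memory.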
This paper describes various ways in which the memory requirements can be reduced, including a novel method for multiple-discriminator n-tuple networks used for pattern recognition. By using this method, the memory normally required to handle m objects can be used to recognise and discriminate between 2^m - 2 objects.
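One plausible reading of the 2^m - 2 figure, sketched below as an assumption rather than as the paper's exact scheme: give each class a distinct non-trivial m-bit code word, teach a class's images to every discriminator whose code bit is 1, and recover the code at recall by thresholding each discriminator's response. The all-zeros and all-ones code words are presumably unusable (indistinguishable from "nothing recognised" and "everything responds"), leaving 2^m - 2 usable codes.

```python
def train_coded(discriminators, code, pattern):
    # Teach `pattern` to every discriminator whose bit in the class's
    # m-bit `code` word is 1 (reusing the Discriminator sketch above).
    for j, disc in enumerate(discriminators):
        if (code >> j) & 1:
            disc.train(pattern)

def classify_coded(discriminators, pattern, threshold):
    # Threshold each discriminator's response to recover one code bit,
    # then reassemble the class's code word. The fixed threshold is an
    # assumption; the paper may decide the bits differently.
    code = 0
    for j, disc in enumerate(discriminators):
        if disc.score(pattern) >= threshold:
            code |= 1 << j
    return code  # 0 and 2**m - 1 are reserved, leaving 2**m - 2 classes
```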
SIMILAR VOLUMES
For the purpose of dynamic systems modeling, it was proposed to include feedback connections or delay elements in the classical feed-forward neural network structure so that the present output of the neural network depends on its previous values. These delay elements can be connected to the hidden a
The results of a computer study of the continuous-time version of macrodynamical system of equations governing the recalling process of associative memory neural networks are presented. The comparative analysis of two models of associative memory network--recurrent (autoassociative) and layered (fee