Learning of visual modules from examples: A framework for understanding adaptive visual performance
- Authors
- Tomaso Poggio; Shimon Edelman; Manfred Fahle
- Publisher
- Elsevier Science
- Year
- 1992
- File size
- 999 KB
- Volume
- 56
- Category
- Article
- ISSN
- 1049-9660
Synopsis
Networks that solve specific visual tasks, such as the evaluation of spatial relations with hyperacuity precision, can be easily synthesized from a small set of examples. The present paper describes a series of simulated psychophysical experiments that replicate human performance in hyperacuity tasks. The experiments were conducted with a detailed computational model of perceptual learning, based on HyperBF interpolation. The success of the simulations provides a new angle on the purposive aspect of human vision, in which the capability for solving any given task emerges only if the need for it is dictated by the environment. We conjecture that almost any tractable psychophysical task can be performed better after suitable training, provided the necessary information is available in the stimulus.
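The synopsis notes that task-specific networks can be synthesized from a small set of examples via HyperBF interpolation. As a rough illustration only (not the authors' implementation, and with a plain Gaussian radial basis function in place of the full HyperBF scheme with learned centers and metrics), the following sketch builds an interpolating network whose centers are the training examples themselves, so the weights follow from a single linear solve:

```python
import numpy as np

def rbf_fit(X, y, sigma=1.0):
    """Solve for coefficients c so that sum_i c_i * G(||x - x_i||) interpolates y.

    Each training example x_i becomes a basis-function center; G is a
    Gaussian. Illustrative sketch only, not the paper's HyperBF model.
    """
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    G = np.exp(-d2 / (2.0 * sigma ** 2))  # Gram matrix of Gaussian basis functions
    return np.linalg.solve(G, y)          # exact interpolation weights

def rbf_predict(X_train, c, X_new, sigma=1.0):
    """Evaluate the synthesized network at new stimuli."""
    d2 = np.sum((X_new[:, None, :] - X_train[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2)) @ c

# Toy "visual task": learn a smooth function of a 2-D stimulus
# from just five labeled examples.
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
y = np.sin(X[:, 0]) + np.cos(X[:, 1])
c = rbf_fit(X, y)
pred = rbf_predict(X, c, X)  # the network reproduces the training examples
```

Because the Gram matrix of distinct Gaussian centers is invertible, the network matches every training example exactly; generalization to new stimuli then depends on the smoothness imposed by the basis functions, which is the sense in which a module for a task "emerges" from a handful of examples.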