𝔖 Bobbio Scriptorium
✦   LIBER   ✦

On learning of functions refutably

✍ Scribed by Sanjay Jain; Efim Kinber; Rolf Wiehagen; Thomas Zeugmann


Publisher
Elsevier Science
Year
2003
Tongue
English
Weight
308 KB
Volume
298
Category
Article
ISSN
0304-3975

No coin nor oath required. For personal study only.

✦ Synopsis


Learning of recursive functions refutably means, informally, that for every recursive function the learning machine must either learn this function or refute it, that is, signal that it is not able to learn it. Three modes of making the notion of refuting precise are considered. We show that the corresponding types of learning refutably are of strictly increasing power, where already the most stringent of them turns out to be of remarkable topological and algorithmic richness. Furthermore, all these types are closed under union, though in different strengths. These types also differ with respect to their intrinsic complexity; two of them do not contain function classes that are "most difficult" to learn, while the third one does. Moreover, we present several characterizations for these types of learning refutably. Some of these characterizations make clear where the refuting ability of the corresponding learning machines comes from and how it can be realized in general.
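To make the informal definition concrete, here is a minimal toy sketch (not from the paper) of a learner for the class of constant functions: fed a growing initial segment of the target function's values, it either outputs a hypothesis or emits a refutation signal once the data leave the learnable class. The function name and the `"refute"` token are illustrative assumptions, not notation from the article.

```python
def refutable_learner(segment):
    """Toy refutable learner for the class of constant functions.

    segment: list of observed values f(0), f(1), ..., f(n-1).

    Returns a hypothesis (the constant c) while the data remain
    consistent with some constant function, and the token "refute"
    as soon as the data fall outside the learnable class.
    """
    if not segment:
        return None  # no data yet, no commitment
    if all(v == segment[0] for v in segment):
        return segment[0]  # hypothesis: f is constantly segment[0]
    return "refute"  # f cannot be a constant function
```

On the segment 5, 5, 5 the learner conjectures the constant 5; after additionally seeing the value 6 it refutes, since no constant function fits. The three modes studied in the paper differ precisely in how and when such a refutation signal may or must be given.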

For learning with anomalies refutably, we show that several results from standard learning without refutation also hold in the refutable setting. From this we derive some hierarchies for refutable learning. Finally, we prove that in general one cannot trade stricter refutability constraints for more liberal learning criteria.


πŸ“œ SIMILAR VOLUMES


Architectural constraints on learning an
✍ Ann M Hermundstad; Kevin S Brown; Danielle S Bassett; Jean M Carlson πŸ“‚ Article πŸ“… 2011 πŸ› BioMed Central 🌐 English βš– 137 KB

Recent experimental and computational studies have identified relationships between architecture and functional performance in information processing systems ranging from natural neuronal ensembles [1,2] to artificial neural networks [3,4]. While these systems can vary greatly in their size and comp