𝔖 Bobbio Scriptorium
✦   LIBER   ✦

Incremental Learning from Positive Data

✍ Scribed by Steffen Lange; Thomas Zeugmann


Publisher
Elsevier Science
Year
1996
Tongue
English
Weight
835 KB
Volume
53
Category
Article
ISSN
0022-0000


✦ Synopsis


The present paper deals with a systematic study of incremental learning algorithms. The general scenario is as follows. Let c be any concept; then every infinite sequence of elements exhausting c is called a positive presentation of c. An algorithmic learner successively takes as input one element of a positive presentation at a time, together with its previously made hypothesis, and outputs a new hypothesis about the target concept. The sequence of hypotheses has to converge to a hypothesis correctly describing the concept to be learned. This basic scenario is referred to as iterative learning. Iterative inference can be refined by allowing the learner to store an a priori bounded number of carefully chosen examples, resulting in bounded example memory inference. Additionally, feed-back identification is introduced: here, the learner may ask whether or not a particular element has already appeared in the data provided so far. Our results are threefold. First, the learning capabilities of the various models of incremental learning are related to previously studied learning models. It is proved that incremental learning can always be simulated by inference devices that are both set-driven and conservative. Second, feed-back learning is shown to be more powerful than iterative inference, and its learning power is incomparable to that of bounded example memory inference, which itself also extends that of iterative learning. In particular, the learning power of bounded example memory inference always increases if the number of examples the learner is allowed to store is incremented. Third, a sufficient condition for iterative inference allowing non-enumerative learning is provided. The results obtained provide strong evidence that there is no unique way to design superior incremental learning algorithms. Instead, incremental learning is the art of knowing what to overlook.
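The iterative-learning protocol above can be sketched in a few lines. The sketch below is illustrative only, not a construction from the paper: it uses the classic toy concept class "multiples of d", where an iterative learner needs nothing beyond its previous hypothesis and the current example, since the gcd of the two yields the updated hypothesis. The function name and the concept class are assumptions chosen for the example.

```python
from math import gcd

def iterative_learner(stream):
    """Iterative learning sketch: the learner sees one example at a time
    and may consult only its previous hypothesis -- no example memory."""
    hypothesis = 0  # gcd identity, read as "no commitment yet"
    for x in stream:
        # new hypothesis computed from old hypothesis + current element only
        hypothesis = gcd(hypothesis, x)
        yield hypothesis

# a finite prefix of a positive presentation of "multiples of 6"
presentation = [12, 18, 30, 6, 42, 24]
print(list(iterative_learner(presentation)))  # -> [12, 6, 6, 6, 6, 6]
```

On any positive presentation of the multiples of d, this hypothesis sequence converges to d itself, matching the convergence requirement in the scenario; a bounded example memory or feed-back learner would extend this loop with a small stored sample or a membership query on past data.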


📜 SIMILAR VOLUMES


Intrinsic complexity of learning geometr…
✍ Sanjay Jain; Efim Kinber 📂 Article 📅 2003 🏛 Elsevier Science 🌐 English ⚖ 666 KB

Intrinsic complexity is used to measure the complexity of learning areas limited by broken-straight lines (called open semi-hulls) and intersections of such areas. Any strategy learning such geometrical concepts can be viewed as a sequence of primitive basic strategies. Thus, the length of such a se…

Learning from imperfect data
✍ Pitoyo Hartono; Shuji Hashimoto 📂 Article 📅 2007 🏛 Elsevier Science 🌐 English ⚖ 583 KB
Learning concepts from data
✍ Donald Michie 📂 Article 📅 1998 🏛 Elsevier Science 🌐 English ⚖ 444 KB

Current data-mining practice employs relatively low-level machine learning algorithms (statistical, neural-net, genetic, decision-tree, etc.) to trawl large data sets for new classifiers. The usefulness of classifiers is then assessed according to accuracy in classifying new data, e.g. for stockmarket pre…

Learning indistinguishability from data
✍ F. Höppner; F. Klawonn; P. Eklund 📂 Article 📅 2002 🏛 Springer 🌐 English ⚖ 294 KB
Learning Fuzzy Rules from Data
✍ G.D. Finn 📂 Article 📅 1999 🏛 Springer-Verlag 🌐 English ⚖ 135 KB