𝔖 Bobbio Scriptorium
✦   LIBER   ✦

Adaptive and Self-Confident On-Line Learning Algorithms

✍ Scribed by Peter Auer; Nicolò Cesa-Bianchi; Claudio Gentile


Publisher
Elsevier Science
Year
2002
Tongue
English
Weight
216 KB
Volume
64
Category
Article
ISSN
0022-0000


✦ Synopsis


We study on-line learning in the linear regression framework. Most of the performance bounds for on-line algorithms in this framework assume a constant learning rate. To achieve these bounds the learning rate must be optimized based on a posteriori information. This information depends on the whole sequence of examples and thus it is not available to any strictly on-line algorithm. We introduce new techniques for adaptively tuning the learning rate as the data sequence is progressively revealed. Our techniques allow us to prove essentially the same bounds as if we knew the optimal learning rate in advance. Moreover, such techniques apply to a wide class of on-line algorithms, including p-norm algorithms for generalized linear regression and Weighted Majority for linear regression with absolute loss. Our adaptive tunings are radically different from previous techniques, such as the so-called doubling trick. Whereas the doubling trick restarts the on-line algorithm several times using a constant learning rate for each run, our methods save information by changing the value of the learning rate very smoothly. In fact, for Weighted Majority over a finite set of experts our analysis provides a better leading constant than the doubling trick.


📜 SIMILAR VOLUMES


Adaptive stepsize algorithms for on-line
✍ G.D. Magoulas; V.P. Plagianakos; M.N. Vrahatis 📂 Article 📅 2001 🏛 Elsevier Science 🌐 English ⚖ 312 KB

In this paper a method for adapting the stepsize in on-line network training is presented. The proposed technique derives from the stochastic gradient descent proposed by Almeida et al. [On-line Learning in Neural Networks, 111-134, Cambridge University Press, 1998]. The new aspect of our approach c

Prediction algorithms and confidence mea
✍ Alex Gammerman; Volodya Vovk 📂 Article 📅 2002 🏛 Elsevier Science 🌐 English ⚖ 104 KB

This paper reviews some theoretical and experimental developments in building computable approximations of Kolmogorov's algorithmic notion of randomness. Based on these approximations a new set of machine learning algorithms have been developed that can be used not just to make predictions but also

A self-learning and tuning fuzzy logic c
✍ Hung-Yuan Chung; Chih-Kuan Chiang 📂 Article 📅 1997 🏛 John Wiley and Sons 🌐 English ⚖ 237 KB 👁 2 views

This article presents a new method for learning and tuning a fuzzy logic controller automatically. A reinforcement learning and a genetic algorithm are used in conjunction with a multilayer neural network model of a fuzzy logic controller, which can automatically generate the fuzzy control rules and

Learning chaotic dynamics under noise wi
✍ Wako Yoshida; Shin Ishii; Masa-aki Sato 📂 Article 📅 2001 🏛 John Wiley and Sons 🌐 English ⚖ 304 KB

In this article, we discuss the learning of chaotic dynamics by using a normalized Gaussian network (NGnet). The NGnet is trained by an on-line EM algorithm in order to learn the vector field of the chaotic dynamics. We also investigate the robustness of our approach to two kinds of noise processes: