𝔖 Bobbio Scriptorium
✦   LIBER   ✦

Information-Theoretic Regret Bounds for Gaussian Process Optimization in the Bandit Setting

โœ Scribed by Srinivas, N.; Krause, A.; Kakade, S.M.; Seeger, M.


Book ID: 114643062
Publisher: IEEE
Year: 2012
Tongue: English
Weight: 963 KB
Volume: 58
Category: Article
ISSN: 0018-9448

No coin nor oath required. For personal study only.

✦ Synopsis


Many applications require optimizing an unknown, noisy function that is expensive to evaluate. We formalize this task as a multiarmed bandit problem, where the payoff function is either sampled from a Gaussian process (GP) or has low norm in a reproducing kernel Hilbert space. We resolve the important open problem of deriving regret bounds for this setting, which imply novel convergence rates for GP optimization. We analyze an intuitive Gaussian process upper confidence bound (GP-UCB) algorithm, and bound its cumulative regret in terms of maximal information gain, establishing a novel connection between GP optimization and experimental design. Moreover, by bounding the latter in terms of operator spectra, we obtain explicit sublinear regret bounds for many commonly used covariance functions. In some important cases, our bounds have surprisingly weak dependence on the dimensionality. In our experiments on real sensor data, GP-UCB compares favorably with other heuristical GP optimization approaches.
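The GP-UCB rule the synopsis describes is simple at its core: at each round, fit a GP posterior to the observations so far and query the point maximizing the upper confidence bound μ(x) + √β_t · σ(x). The sketch below is illustrative only, not the authors' implementation: the squared-exponential kernel, the β_t schedule, the noise level, and the grid discretization are all assumptions chosen for a minimal runnable example.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=0.2):
    # Squared-exponential covariance between rows of A and B (unit signal variance).
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

def gp_posterior(X_obs, y_obs, X_query, noise=1e-2):
    # Standard GP regression: posterior mean and variance at each query point.
    K = rbf_kernel(X_obs, X_obs) + noise * np.eye(len(X_obs))
    K_s = rbf_kernel(X_query, X_obs)
    K_inv = np.linalg.inv(K)
    mu = K_s @ K_inv @ y_obs
    # diag(K_s K^{-1} K_s^T) subtracted from the prior variance of 1.
    var = 1.0 - np.einsum("ij,jk,ik->i", K_s, K_inv, K_s)
    return mu, np.maximum(var, 1e-12)

def gp_ucb(f, X_grid, T=25, noise=1e-2, seed=0):
    # GP-UCB loop: pick argmax of mu + sqrt(beta_t) * sigma each round.
    rng = np.random.default_rng(seed)
    X_obs = X_grid[[rng.integers(len(X_grid))]]
    y_obs = np.array([f(X_obs[0]) + noise * rng.standard_normal()])
    for t in range(1, T):
        # Illustrative confidence schedule for a finite grid (an assumption,
        # not the paper's exact choice of beta_t).
        beta_t = 2.0 * np.log(len(X_grid) * (t + 1) ** 2)
        mu, var = gp_posterior(X_obs, y_obs, X_grid, noise)
        x_next = X_grid[np.argmax(mu + np.sqrt(beta_t * var))]
        X_obs = np.vstack([X_obs, x_next])
        y_obs = np.append(y_obs, f(x_next) + noise * rng.standard_normal())
    return X_obs, y_obs
```

Running it on a toy 1-D function, e.g. `f = lambda x: 1.0 - (x[0] - 0.5) ** 2` over `X_grid = np.linspace(0, 1, 50)[:, None]`, the queried points concentrate near the maximum after the early exploratory rounds, which is the behavior the regret bounds quantify.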

