A modern Bayesian look at the multi-armed bandit
By Steven L. Scott
- Publisher: John Wiley and Sons
- Year: 2010
- Language: English
- File size: 794 KB
- Volume: 26
- Category: Article
- ISSN: 1524-1904
- DOI: 10.1002/asmb.874
Abstract
A multi-armed bandit is an experiment with the goal of accumulating rewards from a payoff distribution with unknown parameters that are to be learned sequentially. This article describes a heuristic for managing multi-armed bandits called randomized probability matching, which randomly allocates observations to arms according to the Bayesian posterior probability that each arm is optimal. Advances in Bayesian computation have made randomized probability matching easy to apply to virtually any payoff distribution. This flexibility frees the experimenter to work with payoff distributions that correspond to certain classical experimental designs that have the potential to outperform methods that are "optimal" in simpler contexts. I summarize the relationships between randomized probability matching and several related heuristics that have been used in the reinforcement learning literature. Copyright © 2010 John Wiley & Sons, Ltd.
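The heuristic the abstract describes, randomized probability matching (often called Thompson sampling), can be sketched for the simplest case of Bernoulli payoffs with conjugate Beta posteriors. The three true success rates below are hypothetical illustration values, not figures from the article; sampling one success rate from each arm's posterior and playing the arm with the largest draw is equivalent to selecting each arm with probability equal to its posterior probability of being optimal.

```python
import random

def choose_arm(successes, failures):
    """One step of randomized probability matching: draw a success
    rate from each arm's Beta posterior (uniform Beta(1, 1) prior)
    and play the arm whose draw is largest."""
    draws = [random.betavariate(s + 1, f + 1)
             for s, f in zip(successes, failures)]
    return max(range(len(draws)), key=draws.__getitem__)

# Illustrative simulation with made-up true rates; arm 2 is best.
true_rates = [0.10, 0.12, 0.25]
successes = [0, 0, 0]
failures = [0, 0, 0]
random.seed(0)
for _ in range(5000):
    arm = choose_arm(successes, failures)
    if random.random() < true_rates[arm]:
        successes[arm] += 1
    else:
        failures[arm] += 1

pulls = [s + f for s, f in zip(successes, failures)]
# Allocation concentrates on the arm with the highest posterior
# probability of being optimal as evidence accumulates.
```

Because exploration happens through posterior sampling rather than a tuned exploration rate, under-observed arms keep a nonzero chance of being played until the posterior rules them out.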
SIMILAR VOLUMES
The attempt in this paper is to point out the more recent history of the interaction of biological and psychological explanation of animal behavior, and, to examine the reasons for the relative decline in productive research in comparative psychology against the surge of ethology in this century. In
The controversy surrounding the alleged Lamarckian fraud of Paul Kammerer's midwife toad experiments has intrigued generations of biologists. A re-examination of his descriptions of hybrid crosses of treated and nontreated toads reveals parent-of-origin effects like those documented in