State generalization method with support vector machines in reinforcement learning
By Ryo Goto; Hiroshi Matsuo
- Publisher
- John Wiley and Sons
- Year
- 2006
- Language
- English
- File size
- 1019 KB
- Volume
- 37
- Category
- Article
- ISSN
- 0882-1666
Abstract
Conventional reinforcement learning typically assumes a discrete state space, so continuous states must be discretized before they can be handled by conventional learning methods. However, simple discretization across multiple state dimensions results in an exponential increase in the number of states, greatly increasing both the time needed for learning and the memory requirements. In this paper, the authors propose an algorithm for generalizing multidimensional continuous states using a support vector machine (SVM). The algorithm estimates the optimal action in an unknown state using the SVM and can therefore be expected to adapt to the environment in a smaller number of trials. To compare this method with conventional algorithms, a simulation experiment was carried out on a task in which a robot must move toward a goal. The results confirmed that the proposed algorithm adapted to the environment in a smaller number of trials. © 2006 Wiley Periodicals, Inc. Syst Comp Jpn, 37(9): 77–86, 2006; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/scj.20140
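The core idea described in the abstract can be sketched in a few lines: states already visited during learning, each labeled with its greedy (estimated-optimal) action, serve as training data for an SVM classifier, which then predicts the best action in continuous states that were never visited. The sketch below is a hypothetical illustration only, not the paper's algorithm; it uses scikit-learn's `SVC` as a stand-in SVM, a toy 2-D robot-to-goal task, and an assumed labeling policy `best_action` invented for the example.

```python
# Hypothetical sketch of SVM-based state generalization (not the paper's
# exact algorithm): train a classifier on (visited state, best action)
# pairs, then estimate the optimal action in unknown continuous states.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Toy 2-D continuous state space: the robot should move toward the goal
# at the origin, choosing among four discrete actions.
ACTIONS = ["left", "right", "down", "up"]

def best_action(state):
    """Assumed ground-truth greedy policy, used only to label visited states."""
    x, y = state
    if abs(x) >= abs(y):
        return "left" if x > 0 else "right"
    return "down" if y > 0 else "up"

# States the agent has already visited (e.g. during earlier learning),
# each labeled with the action currently estimated to be optimal there.
visited = rng.uniform(-1.0, 1.0, size=(200, 2))
labels = [best_action(s) for s in visited]

# The SVM generalizes the greedy policy over the continuous state space.
svm = SVC(kernel="rbf", gamma=2.0).fit(visited, labels)

# Estimate the optimal action in states never seen during learning.
unknown = np.array([[0.8, 0.1], [-0.05, -0.9]])
print(svm.predict(unknown))
```

Because the classifier interpolates between visited states, the agent needs far fewer trials than a scheme that must visit every cell of a fine discretization before acting sensibly there.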