One of the difficulties encountered in applying reinforcement learning to real-world problems is the construction of a discrete state space from a continuous sensory input signal. In the absence of a priori knowledge about the task, a straightforward approach to this problem is to disc…
LIBER
Adaptive co-construction of state and action spaces in reinforcement learning
Authors: Masato Nagayoshi; Hajime Murao; Hisashi Tamaki
- Publisher: Springer Japan
- Year: 2011
- Language: English
- File size: 322 KB
- Volume: 16
- Category: Article
- ISSN: 1433-5298
## Similar volumes
Adaptive internal state space constructi… — K. Samejima; T. Omori (Article, 1999, Elsevier Science, English, 783 KB)
Adaptive quality of service-based routin… — Abdelhamid Mellouk; Saïd Hoceïni; Yacine Amirat (Article, 2007, John Wiley and Sons, English, 549 KB)
## Abstract In this paper, we propose two adaptive routing algorithms based on reinforcement learning. In the first algorithm, we use a neural network to approximate the reinforcement signal, allowing the learner to take into account various parameters such as local queue size, for distance e…
Ergodic Control of a Singularly Perturbe… — T. R. Bielecki; L. Stettner (Article, 1998, Springer, English, 193 KB)
Learning from Experiences in Adaptive Ac… — Federica Ravera; Klaus Hubacek; Mark Reed; David Tarrasón (Article, 2011, Wiley (John Wiley & Sons), English, 420 KB)
Theory of symmetry in the quantum mechan… — R.N. Sen (Article, 1978, Elsevier Science, English, 689 KB)
Hypoxic pulmonary steady-state diffusing… — Z. Turek; A. Frans; F. Kreuzer (Article, 1972, Springer, English, 581 KB)