Simulation-Based Algorithms for Markov Decision Processes

✍ Scribed by Hyeong Soo Chang, Jiaqiao Hu, Michael C. Fu, Steven I. Marcus


Publisher
Springer; Springer London
Year
2013
Tongue
English
Leaves
240
Series
Communications and Control Engineering
Edition
2
Category
Library


✦ Table of Contents


Simulation-Based Algorithms for Markov Decision Processes
Preface to the 2nd Edition
Contents
Selected Notation and Abbreviations
Chapter 1: Markov Decision Processes
1.1 Optimality Equations
1.2 Policy Iteration and Value Iteration
1.3 Rolling-Horizon Control
1.4 Survey of Previous Work on Computational Methods
1.5 Simulation
1.6 Preview of Coming Attractions
1.7 Notes
Chapter 2: Multi-stage Adaptive Sampling Algorithms
2.1 Upper Confidence Bound Sampling
2.1.1 Regret Analysis in Multi-armed Bandits
2.1.2 Algorithm Description
2.1.3 Alternative Estimators
2.1.4 Convergence Analysis
2.1.5 Numerical Example
2.2 Pursuit Learning Automata Sampling
2.2.1 Algorithm Description
2.2.2 Convergence Analysis
2.2.3 Application to POMDPs
2.2.4 Numerical Example
2.3 Notes
Chapter 3: Population-Based Evolutionary Approaches
3.1 Evolutionary Policy Iteration
3.1.1 Policy Switching
3.1.2 Policy Mutation and Population Generation
3.1.3 Stopping Rule
3.1.4 Convergence Analysis
3.1.5 Parallelization
3.2 Evolutionary Random Policy Search
3.2.1 Policy Improvement with Reward Swapping
3.2.2 Exploration
3.2.3 Convergence Analysis
3.3 Numerical Examples
3.3.1 A One-Dimensional Queueing Example
3.3.1.1 Discrete Action Space
3.3.1.2 Continuous Action Space
3.3.2 A Two-Dimensional Queueing Example
3.4 Extension to Simulation-Based Setting
3.5 Notes
Chapter 4: Model Reference Adaptive Search
4.1 The Model Reference Adaptive Search Method
4.1.1 The MRAS0 Algorithm (Idealized Version)
4.1.1.1 Natural Exponential Family
4.1.2 The MRAS1 Algorithm (Adaptive Monte Carlo Version)
4.1.3 The MRAS2 Algorithm (Stochastic Optimization)
4.2 Convergence Analysis of MRAS
4.2.1 MRAS0 Convergence
4.2.2 MRAS1 Convergence
4.2.3 MRAS2 Convergence
4.3 Application of MRAS to MDPs via Direct Policy Learning
4.3.1 Finite-Horizon MDPs
4.3.2 Infinite-Horizon MDPs
4.3.3 MDPs with Large State Spaces
4.3.4 Numerical Examples
4.3.4.1 An Inventory Control Example
4.3.4.2 A Controlled Queueing Example
4.3.4.3 An Inventory Control Problem with Continuous Demand
4.4 Application of MRAS to Infinite-Horizon MDPs in Population-Based Evolutionary Approaches
4.4.1 Algorithm Description
4.4.2 Numerical Examples
4.5 Application of MRAS to Finite-Horizon MDPs Using Adaptive Sampling
4.6 A Stochastic Approximation Framework
4.6.1 Model-Based Annealing Random Search
4.6.1.1 Global Convergence of MARS1
4.6.1.2 Asymptotic Normality of MARS1
4.6.2 Application of MARS to Finite-Horizon MDPs
4.6.2.1 Convergence Analysis
4.6.2.2 A Numerical Example
4.7 Notes
Chapter 5: On-Line Control Methods via Simulation
5.1 Simulated Annealing Multiplicative Weights Algorithm
5.1.1 Basic Algorithm Description
5.1.2 Convergence Analysis
5.1.3 Convergence of the Sampling Version of the Algorithm
5.1.4 Numerical Example
5.1.5 Simulated Policy Switching
5.2 Rollout
5.2.1 Parallel Rollout
5.3 Hindsight Optimization
5.3.1 Numerical Example
5.4 Approximate Stochastic Annealing
5.4.1 Convergence Analysis
5.4.2 Numerical Example
5.5 Notes
References
Index


📜 SIMILAR VOLUMES


Simulation-Based Algorithms for Markov Decision Processes
✍ Hyeong Soo Chang, Jiaqiao Hu, Michael C. Fu, Steven I. Marcus (auth.) 📂 Library 📅 2013 🏛 Springer-Verlag London 🌐 English

Markov decision process (MDP) models are widely used for modeling sequential decision-making problems that arise in engineering, economics, computer science, and the social sciences. Many real-world problems modeled by MDPs have huge state and/or action spaces, giving rise to the curse of dimensionality…
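The blurb breaks off at the curse of dimensionality, which is the book's motivation: classical dynamic-programming methods sweep the entire state and action space on every iteration. As context, a minimal sketch of that classical baseline, value iteration (cf. Section 1.2 in the contents above), on a made-up two-state MDP; all transition probabilities and rewards below are illustrative, not taken from the book:

```python
# Illustrative only: classical value iteration on a tiny 2-state, 2-action MDP.
# The simulation-based methods in the book are designed for problems where this
# kind of exhaustive sweep over all states and actions is intractable.

GAMMA = 0.9  # discount factor

# P[s][a] = list of (next_state, probability); R[s][a] = immediate reward.
# All numbers here are made up for illustration.
P = {
    0: {0: [(0, 0.8), (1, 0.2)], 1: [(1, 1.0)]},
    1: {0: [(0, 1.0)],           1: [(1, 0.9), (0, 0.1)]},
}
R = {0: {0: 1.0, 1: 0.0}, 1: {0: 0.0, 1: 2.0}}

def value_iteration(tol=1e-8):
    """Iterate the Bellman optimality operator to a fixed point."""
    V = {s: 0.0 for s in P}
    while True:
        # One full sweep: max over actions of expected one-step return.
        V_new = {
            s: max(
                R[s][a] + GAMMA * sum(p * V[t] for t, p in P[s][a])
                for a in P[s]
            )
            for s in P
        }
        if max(abs(V_new[s] - V[s]) for s in P) < tol:
            return V_new
        V = V_new

V_star = value_iteration()
print(V_star)
```

Each sweep touches every (state, action, next-state) triple, so per-iteration cost grows with the sizes of the state and action spaces; the simulation-based algorithms surveyed in the book aim to avoid exactly this exhaustive enumeration.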

Simulation-based Algorithms for Markov Decision Processes
✍ Hyeong Soo Chang, Michael C. Fu, Jiaqiao Hu, Steven I. Marcus 📂 Library 📅 2007 🏛 Springer 🌐 English

Markov decision process (MDP) models are widely used for modeling sequential decision-making problems that arise in engineering, economics, computer science, and the social sciences. It is well-known that many real-world problems modeled by MDPs have huge state and/or action spaces, leading to the notorious curse of dimensionality…

Simulation-based Algorithms for Markov Decision Processes
✍ Hyeong Soo Chang, Michael C. Fu, Jiaqiao Hu, Steven I. Marcus 📂 Library 📅 2007 🌐 English

Markov decision process (MDP) models are widely used for modeling sequential decision-making problems that arise in engineering, economics, computer science, and the social sciences. This book brings the state-of-the-art research together for the first time. It provides practical modeling methods for…

Markov Decision Processes
✍ D. J. White 📂 Library 📅 1993 🏛 John Wiley & Sons 🌐 English

Examines several fundamentals concerning the manner in which Markov decision problems may be properly formulated and the determination of solutions or their properties. Coverage includes optimality equations, algorithms and their characteristics, probability distributions, modern developments in the Markov…