๐”– Scriptorium
โœฆ   LIBER   โœฆ


Simulation-Based Algorithms for Markov Decision Processes

โœ Scribed by Hyeong Soo Chang, Jiaqiao Hu, Michael C. Fu, Steven I. Marcus (auth.)


Publisher
Springer-Verlag London
Year
2013
Tongue
English
Leaves
240
Series
Communications and Control Engineering
Edition
2
Category
Library


✦ Synopsis


Markov decision process (MDP) models are widely used for modeling sequential decision-making problems that arise in engineering, economics, computer science, and the social sciences. Many real-world problems modeled by MDPs have huge state and/or action spaces, leading to the well-known curse of dimensionality, which makes practical solution of the resulting models intractable. In other cases, the system of interest is too complex to allow explicit specification of some of the MDP model parameters, but simulation samples are readily available (e.g., for random transitions and costs). For these settings, various sampling and population-based algorithms have been developed to overcome the difficulties of computing an optimal solution in terms of a policy and/or value function. Specific approaches include adaptive sampling, evolutionary policy iteration, evolutionary random policy search, and model reference adaptive search.
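To make the simulation-based setting concrete, here is a minimal sketch (illustrative only, not an algorithm from the book): a tiny two-state MDP whose transition probabilities are hidden behind a black-box simulator, with Q-values estimated purely from sampled transitions and costs. All names, parameters, and the toy model itself are hypothetical.

```python
import random

# Hypothetical two-state, two-action MDP hidden behind a simulator;
# the algorithm never reads the transition matrix, only samples from it.
P = {  # P[state][action] = (next-state distribution over {0, 1}, one-step cost)
    0: {0: ([0.9, 0.1], 1.0), 1: ([0.2, 0.8], 2.0)},
    1: {0: ([0.5, 0.5], 0.0), 1: ([0.7, 0.3], 0.5)},
}

def simulate(state, action):
    """Black-box simulator: returns a sampled (next_state, cost)."""
    dist, cost = P[state][action]
    next_state = random.choices([0, 1], weights=dist)[0]
    return next_state, cost

def mc_q_estimate(state, action, policy, gamma=0.9, horizon=30, n_samples=500):
    """Estimate Q(state, action) by averaging discounted costs of simulated
    trajectories that follow `policy` after the first action."""
    total = 0.0
    for _ in range(n_samples):
        s, a, discount, ret = state, action, 1.0, 0.0
        for _ in range(horizon):
            s, c = simulate(s, a)
            ret += discount * c
            discount *= gamma
            a = policy(s)
        total += ret
    return total / n_samples

# One-step greedy improvement over a fixed base policy (the rollout idea).
base_policy = lambda s: 0
q0 = mc_q_estimate(0, 0, base_policy)
q1 = mc_q_estimate(0, 1, base_policy)
improved_action = 0 if q0 <= q1 else 1
```

The point of the sketch is that everything the decision rule uses comes from `simulate`; no explicit model parameters are required, which is exactly the regime the simulation-based algorithms above target.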
This substantially enlarged new edition reflects the latest developments in novel algorithms and their underpinning theories, and presents an updated account of the topics that have emerged since the publication of the first edition. Includes:
innovative material on MDPs, both in constrained settings and with uncertain transition properties;
a game-theoretic method for solving MDPs;
theories for developing rollout-based algorithms; and
details of approximate stochastic annealing, a population-based on-line simulation-based algorithm.
The self-contained approach of this book will appeal not only to researchers in MDPs, stochastic modeling and control, and simulation, but will also be a valuable source of tuition and reference for students of control and operations research.
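The "adaptive sampling" family mentioned in the synopsis allocates a fixed simulation budget across actions using a multi-armed-bandit style index, spending more samples on promising actions. A rough single-state sketch using a UCB1-style rule (illustrative only; the function, cost model, and parameters are assumptions, not the book's algorithm):

```python
import math
import random

def ucb_sample_actions(sample, n_actions, budget):
    """Adaptively spend `budget` calls to the black-box simulator `sample(a)`
    (which returns a random cost) across actions, favoring low estimated cost
    via a UCB1-style lower-confidence index."""
    counts = [0] * n_actions
    sums = [0.0] * n_actions
    for a in range(n_actions):        # initialize: sample each action once
        sums[a] += sample(a)
        counts[a] += 1
    for t in range(n_actions, budget):
        # Index = mean cost minus exploration bonus; smaller is better.
        index = [sums[a] / counts[a] - math.sqrt(2 * math.log(t) / counts[a])
                 for a in range(n_actions)]
        a = min(range(n_actions), key=lambda i: index[i])
        sums[a] += sample(a)
        counts[a] += 1
    means = [sums[a] / counts[a] for a in range(n_actions)]
    return min(range(n_actions), key=lambda i: means[i]), means

# Usage: two actions with noisy costs around 1.0 and 0.3; the sampler
# should concentrate its budget on, and finally pick, the cheaper action.
random.seed(0)
best, means = ucb_sample_actions(
    lambda a: random.gauss(1.0 if a == 0 else 0.3, 0.1), 2, budget=200)
```

In the multi-stage version treated in Chapter 2, such a bandit rule is applied recursively at each sampled state, so the simulation budget itself becomes part of the algorithm design.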

✦ Table of Contents


Front Matter....Pages I-XVII
Markov Decision Processes....Pages 1-17
Multi-stage Adaptive Sampling Algorithms....Pages 19-60
Population-Based Evolutionary Approaches....Pages 61-87
Model Reference Adaptive Search....Pages 89-177
On-Line Control Methods via Simulation....Pages 179-218
Back Matter....Pages 219-229

✦ Subjects


Control; Systems Theory, Control; Probability Theory and Stochastic Processes; Operations Research, Management Science; Algorithm Analysis and Problem Complexity; Operations Research/Decision Theory


📜 SIMILAR VOLUMES


Simulation-based Algorithms for Markov D…
✍ Hyeong Soo Chang, Michael C. Fu, Jiaqiao Hu, Steven I. Marcus 📂 Library 📅 2007 🏛 Springer 🌐 English

Markov decision process (MDP) models are widely used for modeling sequential decision-making problems that arise in engineering, economics, computer science, and the social sciences. It is well-known that many real-world problems modeled by MDPs have huge state and/or action spaces, leading to the n…

Simulation-based Algorithms for Markov D…
✍ Hyeong Soo Chang, Michael C. Fu, Jiaqiao Hu, Steven I. Marcus 📂 Library 📅 2007 🌐 English

Markov decision process (MDP) models are widely used for modeling sequential decision-making problems that arise in engineering, economics, computer science, and the social sciences. This book brings the state-of-the-art research together for the first time. It provides practical modeling methods fo…

Markov Decision Processes
โœ D. J. White ๐Ÿ“‚ Library ๐Ÿ“… 1993 ๐Ÿ› John Wiley & Sons ๐ŸŒ English

Examines several fundamentals concerning the manner in which Markov decision problems may be properly formulated and the determination of solutions or their properties. Coverage includes optimal equations, algorithms and their characteristics, probability distributions, modern development in the Mar