Discrete-Time Markov Control Processes: Basic Optimality Criteria
✍ Authors: Onésimo Hernández-Lerma, Jean Bernard Lasserre
- Publisher: Springer-Verlag New York
- Year: 1996
- Language: English
- Pages: 222
- Series: Applications of Mathematics 30
- Edition: 1
- Category: Library
✦ Synopsis
This book presents the first part of a planned two-volume series devoted to a systematic exposition of some recent developments in the theory of discrete-time Markov control processes (MCPs). Interest is mainly confined to MCPs with Borel state and control (or action) spaces, and possibly unbounded costs and noncompact control constraint sets. MCPs are a class of stochastic control problems, also known as Markov decision processes, controlled Markov processes, or stochastic dynamic programs; sometimes, particularly when the state space is a countable set, they are also called Markov decision (or controlled Markov) chains. Regardless of the name used, MCPs appear in many fields, for example, engineering, economics, operations research, statistics, renewable and nonrenewable resource management, and the control of epidemics. However, most of the literature (say, at least 90%) is concentrated on MCPs for which (a) the state space is a countable set, and/or (b) the costs per stage are bounded, and/or (c) the control constraint sets are compact. Curiously enough, the most widely used control model in engineering and economics, namely the LQ (linear system/quadratic cost) model, satisfies none of these conditions. Moreover, when dealing with "partially observable" systems, a standard approach is to transform them into equivalent "completely observable" systems in a larger state space (in fact, a space of probability measures), which is uncountable even if the original state process is finite-valued.
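As a minimal illustration of the kind of problem the book treats, the sketch below runs value iteration for a discounted-cost MCP on a tiny finite state and action space. The toy transition matrices and costs are invented for illustration and are not taken from the book; the book's main concern is precisely the harder setting (Borel spaces, unbounded costs) that this finite example sidesteps.

```python
import numpy as np

# Hypothetical toy model (not from the book): 2 states, 2 actions.
# P[a][s, s2] = probability of moving from state s to s2 under action a.
P = np.array([
    [[0.9, 0.1], [0.2, 0.8]],   # transitions under action 0
    [[0.5, 0.5], [0.7, 0.3]],   # transitions under action 1
])
# c[s, a] = cost incurred per stage in state s under action a.
c = np.array([[1.0, 2.0],
              [4.0, 0.5]])
alpha = 0.9                      # discount factor in (0, 1)

def value_iteration(P, c, alpha, tol=1e-10):
    """Iterate the Bellman operator to a fixed point; return (V, policy)."""
    n_actions, n_states, _ = P.shape
    V = np.zeros(n_states)
    while True:
        # Q[s, a] = c[s, a] + alpha * sum_{s2} P[a][s, s2] * V[s2]
        Q = c + alpha * np.einsum('aij,j->ia', P, V)
        V_new = Q.min(axis=1)            # minimize cost over actions
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmin(axis=1)
        V = V_new

V, policy = value_iteration(P, c, alpha)
```

Because `alpha < 1`, the Bellman operator is a contraction, so the iteration converges and the returned `V` satisfies the discounted-cost optimality equation.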
✦ Table of Contents
Front Matter....Pages i-xiv
Introduction and Summary....Pages 1-12
Markov Control Processes....Pages 13-21
Finite-Horizon Problems....Pages 23-42
Infinite-Horizon Discounted-Cost Problems....Pages 43-73
Long-Run Average-Cost Problems....Pages 75-124
The Linear Programming Formulation....Pages 125-167
Back Matter....Pages 169-216
✦ Subjects
Probability Theory and Stochastic Processes
📜 SIMILAR VOLUMES
This research monograph is the authoritative and comprehensive treatment of the mathematical foundations of stochastic optimal control of discrete-time systems, including the treatment of the intricate measure-theoretic issues.