Adaptive Markov Control Processes (Applied Mathematical Sciences)
By Onésimo Hernández-Lerma
- Publisher: Springer
- Year: 1989
- Language: English
- Pages: 164
- Edition: 1
- Category: Library
Table of Contents
Cover......Page 1
Volume in Series......Page 2
Title page......Page 3
Copyright page......Page 4
Preface......Page 7
Contents......Page 9
Summary of Notation and Terminology......Page 13
Control Models......Page 15
Policies......Page 17
Performance Criteria......Page 20
Control Problems......Page 21
1.3 Examples......Page 23
An Inventory/Production System......Page 24
Control of Water Reservoirs......Page 25
Fisheries Management......Page 26
Nonstationary MCM's......Page 27
Semi-Markov Control Models......Page 28
1.4 Further Comments......Page 29
Summary......Page 31
2.2 Optimality Conditions......Page 32
Continuity of $v^\ast$......Page 37
2.3 Asymptotic Discount Optimality......Page 38
Nonstationary Value-Iteration......Page 41
Finite-State Approximations......Page 46
Preliminaries......Page 48
Nonstationary Value-Iteration......Page 49
The Principle of Estimation and Control......Page 52
Adaptive Policies......Page 53
2.6 Nonparametric Adaptive Control......Page 54
The Parametric Approach......Page 55
New Setting......Page 56
The Empirical Distribution Process......Page 58
Nonparametric Adaptive Policies......Page 59
2.7 Comments and References......Page 61
3.1 Introduction......Page 65
3.2 The Optimality Equation......Page 66
3.3 Ergodicity Conditions......Page 70
Uniform Approximations......Page 76
Successive Averagings......Page 80
3.5 Approximating Models......Page 81
3.6 Nonstationary Value Iteration......Page 85
Nonstationary Successive Averagings......Page 89
Discounted-Like NVI......Page 90
Preliminaries......Page 91
Nonstationary Value Iteration (NVI)......Page 93
3.8 Comments and References......Page 95
Summary......Page 97
4.2 PO-CM: Case of Known Parameters......Page 98
4.3 Transformation into a CO Control Problem......Page 100
$I$-Policies......Page 102
The New Control Model......Page 103
4.4 Optimal $I$-Policies......Page 104
4.5 PO-CM's with Unknown Parameters......Page 107
PEC and NVI $I$-Policies......Page 109
4.6 Comments and References......Page 110
Summary......Page 112
5.2 Contrast Functions......Page 113
5.3 Minimum Contrast Estimators......Page 115
5.4 Comments and References......Page 119
6.2 Preliminaries......Page 121
A Non-Recursive Procedure......Page 123
A Recursive Procedure......Page 125
Preliminaries......Page 127
Discretization of the PEC Adaptive Policy......Page 128
Discretization of the NVI Adaptive Policy......Page 129
The Non-Adaptive Case......Page 130
The Adaptive Case......Page 133
6.6 Comments and References......Page 135
Appendix A. Contraction Operators......Page 136
Total Variation Norm......Page 138
Weak Convergence......Page 139
Appendix C. Stochastic Kernels......Page 141
Multifunctions......Page 143
References......Page 146
Author Index......Page 157
Subject Index......Page 161
Applied Mathematical Sciences......Page 163
SIMILAR VOLUMES
This book is concerned with a class of discrete-time stochastic control processes known as controlled Markov processes (CMP's), also known as Markov decision processes or Markov dynamic programs. Starting in the mid-1950s with Richard Bellman, many contributions to CMP's have been made, and applications…
The general theory of stochastic processes and the more specialized theory of Markov processes evolved enormously in the second half of the last century. In parallel, the theory of controlled Markov chains (or Markov decision processes) was being pioneered by control engineers and operations researchers…
Applied Semi-Markov Processes aims to give the reader the tools necessary to apply semi-Markov processes to real-life problems. The book is self-contained and, starting from a low level of probability concepts, gradually brings the reader to a deep knowledge of semi-Markov processes. Presents homogeneous and…