This book introduces optimal control problems for large families of deterministic and stochastic systems with discrete or continuous time parameter. These families include most of the systems studied in many disciplines, including Economics, Engineering, Operations Research, and Management Science, …
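As a small taste of the dynamic programming approach the title refers to, here is a minimal, self-contained sketch of value iteration for a discounted-cost problem on a two-state Markov decision process. The numbers are made up for illustration and are not taken from the book:

```python
# Illustrative only: a 2-state, 2-action discounted-cost MDP solved by
# value iteration (repeated application of the dynamic programming operator).
# All transition probabilities and costs below are invented for this demo.

# transition[s][a][t] = probability of moving from state s to state t under action a
transition = [
    [[0.9, 0.1], [0.2, 0.8]],   # from state 0
    [[0.5, 0.5], [0.1, 0.9]],   # from state 1
]
# cost[s][a] = one-stage cost of taking action a in state s
cost = [[1.0, 4.0], [3.0, 0.5]]
alpha = 0.9                      # discount factor in (0, 1)

def value_iteration(n_iter=500):
    """Iterate the Bellman (DP) operator:
    V(s) <- min_a [ c(s, a) + alpha * sum_t p(t | s, a) V(t) ]."""
    v = [0.0, 0.0]
    for _ in range(n_iter):
        v = [
            min(
                cost[s][a] + alpha * sum(transition[s][a][t] * v[t] for t in range(2))
                for a in range(2)
            )
            for s in range(2)
        ]
    return v

v_star = value_iteration()
# Greedy policy with respect to the (approximately) optimal value function.
policy = [
    min(
        range(2),
        key=lambda a: cost[s][a]
        + alpha * sum(transition[s][a][t] * v_star[t] for t in range(2)),
    )
    for s in range(2)
]
```

Because the DP operator is a contraction with modulus `alpha`, the iterates converge geometrically to the optimal value function; the greedy policy read off from it is optimal. Chapters 2 and 3 of the book develop this machinery rigorously for deterministic and stochastic discrete-time systems.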
An Introduction to Optimal Control Theory. The Dynamic Programming Approach
✍ Authors: Onésimo Hernández-Lerma, Leonardo R. Laura-Guarachi, Saul Mendoza-Palacios, David González-Sánchez
- Publisher: Springer
- Year: 2023
- Language: English
- Pages: 280
- Series: Texts in Applied Mathematics; 76
- Category: Library
✦ Table of Contents
Preface
Contents
1 Introduction: Optimal Control Problems
2 Discrete–Time Deterministic Systems
2.1 The Dynamic Programming Equation
2.2 The DP Equation and Related Topics
2.2.1 Variants of the DP Equation
2.2.2 The Minimum Principle
2.3 Infinite–Horizon Problems
2.3.1 Discounted Case
2.3.2 The Minimum Principle
2.3.3 The Weighted-Norm Approach
2.4 Approximation Algorithms
2.4.1 Value Iteration
2.4.2 Policy Iteration
2.5 Long–Run Average Cost Problems
2.5.1 The AC Optimality Equation
2.5.2 The Steady–State Approach
2.5.3 The Vanishing Discount Approach
3 Discrete–Time Stochastic Control Systems
3.1 Stochastic Control Models
3.2 Markov Control Processes: Finite Horizon
3.3 Conditions for the Existence of Measurable Minimizers
3.4 Examples
3.5 Infinite–Horizon Discounted Cost Problems
3.6 Policy Iteration
3.7 Long–Run Average Cost Problems
3.7.1 The Average Cost Optimality Inequality
3.7.2 The Average Cost Optimality Equation
3.7.3 Examples
4 Continuous–Time Deterministic Systems
4.1 The HJB Equation and Related Topics
4.1.1 Finite–Horizon Problems: The HJB Equation
4.1.2 A Minimum Principle from the HJB Equation
4.2 The Discounted Case
4.3 Infinite–Horizon Discounted Cost
4.4 Long–Run Average Cost Problems
4.4.1 The Average Cost Optimality Equation (ACOE)
4.4.2 The Steady–State Approach
4.4.3 The Vanishing Discount Approach
4.5 The Policy Improvement Algorithm
4.5.1 The PIA: Discounted Cost Problems
4.5.2 The PIA: Average Cost Problems
5 Continuous–Time Markov Control Processes
5.1 Markov Processes
5.2 The Infinitesimal Generator
5.3 Markov Control Processes
5.4 The Dynamic Programming Approach
5.5 Long–Run Average Cost Problems
5.5.1 The Ergodicity Approach
5.5.2 The Vanishing Discount Approach
6 Controlled Diffusion Processes
6.1 Diffusion Processes
6.2 Controlled Diffusion Processes
6.3 Examples: Finite Horizon
6.4 Examples: Discounted Costs
6.5 Examples: Average Costs
Appendix A Terminology and Notation
Lower Semicontinuous Functions
Appendix B Existence of Measurable Minimizers
Appendix C Markov Processes
Continuous–Time Markov Processes
Theorem of C. Ionescu–Tulcea
Bibliography
Index
📜 SIMILAR VOLUMES
This paper is intended for the beginner. It is not a state-of-the-art survey for research workers in the field of control theory. Its purpose is to introduce the reader to some of the problems and results in control theory, to illustrate the application of these results, and to provide a guide