The book presents a comprehensive development of effective numerical methods for stochastic control problems in continuous time. The process models are diffusions, jump-diffusions, or reflected diffusions of the type that occur in the majority of current applications.
Numerical Methods for Stochastic Control Problems in Continuous Time
By Harold J. Kushner, Paul Dupuis (auth.)
- Publisher: Springer-Verlag New York
- Year: 2001
- Language: English
- Pages: 480
- Series: Stochastic Modelling and Applied Probability 24
- Edition: 2
- Category: Library
Synopsis
Changes in the second edition. The second edition differs from the first in that there is a full development of problems where the variance of the diffusion term and the jump distribution can be controlled. Also, a great deal of new material concerning deterministic problems has been added, including very efficient algorithms for a class of problems of wide current interest.

This book is concerned with numerical methods for stochastic control and optimal stochastic control problems. The random process models of the controlled or uncontrolled stochastic systems are either diffusions or jump diffusions. Stochastic control is a very active area of research, and new problem formulations and sometimes surprising applications appear regularly. We have chosen forms of the models which cover the great bulk of the formulations of the continuous time stochastic control problems which have appeared to date. The standard formats are covered, but much emphasis is given to the newer and less well known formulations. The controlled process might be either stopped or absorbed on leaving a constraint set or upon first hitting a target set, or it might be reflected or "projected" from the boundary of a constraining set. In some of the more recent applications of the reflecting boundary problem, for example the so-called heavy traffic approximation problems, the directions of reflection are actually discontinuous. In general, the control might be representable as a bounded function or it might be of the so-called impulsive or singular control types.
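The book's central technique is the Markov chain approximation method: the controlled diffusion is replaced by a controlled Markov chain on a grid whose transition probabilities are locally consistent with the diffusion's drift and variance, and the dynamic programming equation for the chain is then solved numerically. As a rough illustration of the general idea (not an example taken from the book), the sketch below runs value-iteration sweeps for a one-dimensional controlled diffusion dx = u dt + sigma dW on [0,1], absorbed at the boundary with zero exit cost; the drift b(x,u) = u, the running cost k(x,u) = x^2 + u^2/2, and all numerical parameters are assumptions chosen for the example.

```python
import numpy as np

def mcam_value_iteration(h=0.02, sigma=0.5, beta=0.1,
                         controls=(-1.0, 0.0, 1.0), n_iter=500):
    """Illustrative Markov chain approximation for a 1-D controlled
    diffusion on [0, 1]; drift, cost, and parameters are assumptions."""
    xs = np.arange(0.0, 1.0 + h / 2, h)   # grid on [0, 1]
    V = np.zeros_like(xs)                 # value function; boundary stays 0
    for _ in range(n_iter):
        V_new = V.copy()
        for i in range(1, len(xs) - 1):   # interior grid points only
            best = np.inf
            for u in controls:
                b = u                                  # assumed drift b(x,u) = u
                k = xs[i] ** 2 + 0.5 * u ** 2          # assumed running cost
                denom = sigma ** 2 + h * abs(b)
                p_up = (sigma ** 2 / 2 + h * max(b, 0.0)) / denom
                p_dn = (sigma ** 2 / 2 + h * max(-b, 0.0)) / denom
                dt = h ** 2 / denom                    # interpolation interval
                val = k * dt + np.exp(-beta * dt) * (p_up * V[i + 1]
                                                     + p_dn * V[i - 1])
                best = min(best, val)
            V_new[i] = best
        V = V_new
    return xs, V

xs, V = mcam_value_iteration()
```

The transition probabilities `p_up`, `p_dn` sum to one and match the first two conditional moments of the diffusion up to O(h), which is the local consistency requirement; the interpolation interval `dt` shrinks with the grid so that the chain tracks the diffusion's time scale.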
Table of Contents
Front Matter....Pages i-xii
Introduction....Pages 1-6
Review of Continuous Time Models....Pages 7-34
Controlled Markov Chains....Pages 35-52
Dynamic Programming Equations....Pages 53-66
The Markov Chain Approximation Method: Introduction....Pages 67-88
Construction of the Approximating Markov Chain....Pages 89-151
Computational Methods for Controlled Markov Chains....Pages 153-189
The Ergodic Cost Problem: Formulation and Algorithms....Pages 191-214
Heavy Traffic and Singular Control Problems: Examples and Markov Chain Approximations....Pages 215-244
Weak Convergence and the Characterization of Processes....Pages 245-265
Convergence Proofs....Pages 267-299
Convergence for Reflecting Boundaries, Singular Control and Ergodic Cost Problems....Pages 301-323
Finite Time Problems and Nonlinear Filtering....Pages 325-345
Controlled Variance and Jumps....Pages 347-366
Problems from the Calculus of Variations: Finite Time Horizon....Pages 367-400
Problems from the Calculus of Variations: Infinite Time Horizon....Pages 401-442
The Viscosity Solution Approach to Proving Convergence of Numerical Schemes....Pages 443-453
Back Matter....Pages 455-476
Subjects
Probability Theory and Stochastic Processes; Calculus of Variations and Optimal Control; Optimization; Systems Theory, Control
Physical description: xii, 475 p. ; 25 cm