Reinforcement learning (RL) and adaptive dynamic programming (ADP) have been among the most critical research fields in science and engineering for modern complex systems. This book describes the latest RL and ADP techniques for decision and control in human-engineered systems, covering both single
Handbook of Learning and Approximate Dynamic Programming
- By Jennie Si, Andy Barto, Warren Powell, Donald Wunsch (auth.)
- Publisher: Wiley-IEEE Press
- Year: 2004
- Language: English
- Pages: 651
- Category: Library
Synopsis
- A complete resource to Approximate Dynamic Programming (ADP), including on-line simulation code
- Provides a tutorial that readers can use to start implementing the learning algorithms provided in the book
- Includes ideas, directions, and recent results on current research issues and addresses applications where ADP has been successfully implemented
- The contributors are leading researchers in the field
Content:
Chapter 1 ADP: Goals, Opportunities and Principles (pages 3–44): Paul Werbos
Chapter 2 Reinforcement Learning and Its Relationship to Supervised Learning (pages 45–63): Andrew G. Barto and Thomas G. Dietterich
Chapter 3 Model-Based Adaptive Critic Designs (pages 65–95): Silvia Ferrari and Robert F. Stengel
Chapter 4 Guidance in the Use of Adaptive Critics for Control (pages 97–124): George G. Lendaris and James C. Neidhoefer
Chapter 5 Direct Neural Dynamic Programming (pages 125–151): Jennie Si, Lei Yang and Derong Liu
Chapter 6 The Linear Programming Approach to Approximate Dynamic Programming (pages 153–178): Daniela Pucci de Farias
Chapter 7 Reinforcement Learning in Large, High-Dimensional State Spaces (pages 179–202): Greg Grudic and Lyle Ungar
Chapter 8 Hierarchical Decision Making (pages 203–232): Malcolm Ryan
Chapter 9 Improved Temporal Difference Methods with Linear Function Approximation (pages 233–259): Dimitri P. Bertsekas, Vivek S. Borkar and Angelia Nedich
Chapter 10 Approximate Dynamic Programming for High-Dimensional Resource Allocation Problems (pages 261–283): Warren B. Powell and Benjamin Van Roy
Chapter 11 Hierarchical Approaches to Concurrency, Multiagency, and Partial Observability (pages 285–310): Sridhar Mahadevan, Mohammad Ghavamzadeh, Khashayar Rohanimanesh and Georgios Theocharous
Chapter 12 Learning and Optimization: From a System Theoretic Perspective (pages 311–335): Xi-Ren Cao
Chapter 13 Robust Reinforcement Learning Using Integral-Quadratic Constraints (pages 337–358): Charles W. Anderson, Matt Kretchmar, Peter Young and Douglas Hittle
Chapter 14 Supervised Actor-Critic Reinforcement Learning (pages 359–380): Michael T. Rosenstein and Andrew G. Barto
Chapter 15 BPTT and DAC: A Common Framework for Comparison (pages 381–404): Danil V. Prokhorov
Chapter 16 Near-Optimal Control Via Reinforcement Learning and Hybridization (pages 405–432): Augustine O. Esogbue and Warren E. Hearnes
Chapter 17 Multiobjective Control Problems by Reinforcement Learning (pages 433–461): Dong-Oh Kang and Zeungnam Bien
Chapter 18 Adaptive Critic Based Neural Network for Control-Constrained Agile Missile (pages 463–478): S. N. Balakrishnan and Dongchen Han
Chapter 19 Applications of Approximate Dynamic Programming in Power Systems Control (pages 479–515): Ganesh K. Venayagamoorthy, Donald C. Wunsch and Ronald G. Harley
Chapter 20 Robust Reinforcement Learning for Heating, Ventilation, and Air Conditioning Control of Buildings (pages 517–534): Charles W. Anderson, Douglas Hittle, Matt Kretchmar and Peter Young
Chapter 21 Helicopter Flight Control Using Direct Neural Dynamic Programming (pages 535–559): Russell Enns and Jennie Si
Chapter 22 Toward Dynamic Stochastic Optimal Power Flow (pages 561–598): James A. Momoh
Chapter 23 Control, Optimization, Security, and Self-Healing of Benchmark Power Systems (pages 599–634): James A. Momoh and Edwin Zivi
Similar Volumes
From household appliances to applications in robotics, engineered systems involving complex dynamics can only be as effective as the algorithms that control them. While Dynamic Programming (DP) has provided researchers with a way to optimally solve decision and control problems involving complex dyn
This book provides a straightforward overview for every researcher interested in stochastic dynamic vehicle routing problems (SDVRPs). The book is written both for the applied researcher looking for suitable solution approaches for particular problems as well as for the theoretical researcher loo
A complete and accessible introduction to the real-world applications of approximate dynamic programming. With the growing levels of sophistication in modern-day operations, it is vital for practitioners to understand how to approach, model, and solve complex industrial problems. Approximat