𝔖 Scriptorium
✦   LIBER   ✦


Handbook of Learning and Approximate Dynamic Programming

โœ Scribed by Jennie Si, Andy Barto, Warren Powell, Donald Wunsch(auth.)


Publisher
Wiley-IEEE Press
Year
2004
Tongue
English
Leaves
651
Category
Library

⬇  Acquire This Volume

No coin nor oath required. For personal study only.

✦ Synopsis


  • A complete resource on Approximate Dynamic Programming (ADP), including online simulation code
  • Provides a tutorial that readers can use to start implementing the learning algorithms provided in the book
  • Includes ideas, directions, and recent results on current research issues and addresses applications where ADP has been successfully implemented
  • The contributors are leading researchers in the field
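To give a flavor of the learning algorithms the volume tutors readers in implementing, here is a minimal sketch of TD(0) with linear function approximation, in the spirit of the temporal-difference methods surveyed in Chapter 9. The 5-state random-walk task and all parameter values are illustrative assumptions, not taken from the book itself.

```python
import numpy as np

def td0_linear_random_walk(episodes=2000, alpha=0.05, gamma=1.0, seed=0):
    """Estimate state values of a 5-state random walk with TD(0).

    States 0..4; each episode starts in state 2, moves left or right
    with equal probability, and terminates off either end (reward 1
    off the right end, 0 off the left). One-hot features make this
    tabular TD(0) expressed in linear-function-approximation form.
    """
    rng = np.random.default_rng(seed)
    n = 5
    w = np.zeros(n)        # weight vector = value estimates
    phi = np.eye(n)        # one-hot feature vectors, one per state
    for _ in range(episodes):
        s = 2
        while True:
            s_next = s + (1 if rng.random() < 0.5 else -1)
            if s_next < 0:                     # terminated left: reward 0
                target = 0.0
            elif s_next >= n:                  # terminated right: reward 1
                target = 1.0
            else:                              # non-terminal: bootstrap
                target = gamma * (w @ phi[s_next])
            # TD(0) update: move w along the feature gradient of state s
            w += alpha * (target - w @ phi[s]) * phi[s]
            if s_next < 0 or s_next >= n:
                break
            s = s_next
    return w

values = td0_linear_random_walk()
print(values)  # estimates approach the true values 1/6, 2/6, ..., 5/6
```

With one-hot features this reduces to the classic tabular TD(0); swapping `phi` for any other feature matrix gives the genuinely approximate case analyzed in the handbook's theory chapters.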

Contents:
Chapter 1 ADP: Goals, Opportunities and Principles (pages 3–44): Paul Werbos
Chapter 2 Reinforcement Learning and Its Relationship to Supervised Learning (pages 45–63): Andrew G. Barto and Thomas G. Dietterich
Chapter 3 Model-Based Adaptive Critic Designs (pages 65–95): Silvia Ferrari and Robert F. Stengel
Chapter 4 Guidance in the Use of Adaptive Critics for Control (pages 97–124): George G. Lendaris and James C. Neidhoefer
Chapter 5 Direct Neural Dynamic Programming (pages 125–151): Jennie Si, Lei Yang and Derong Liu
Chapter 6 The Linear Programming Approach to Approximate Dynamic Programming (pages 153–178): Daniela Pucci de Farias
Chapter 7 Reinforcement Learning in Large, High-Dimensional State Spaces (pages 179–202): Greg Grudic and Lyle Ungar
Chapter 8 Hierarchical Decision Making (pages 203–232): Malcolm Ryan
Chapter 9 Improved Temporal Difference Methods with Linear Function Approximation (pages 233–259): Dimitri P. Bertsekas, Vivek S. Borkar and Angelia Nedich
Chapter 10 Approximate Dynamic Programming for High-Dimensional Resource Allocation Problems (pages 261–283): Warren B. Powell and Benjamin Van Roy
Chapter 11 Hierarchical Approaches to Concurrency, Multiagency, and Partial Observability (pages 285–310): Sridhar Mahadevan, Mohammad Ghavamzadeh, Khashayar Rohanimanesh and Georgios Theocharous
Chapter 12 Learning and Optimization – From a System Theoretic Perspective (pages 311–335): Xi-Ren Cao
Chapter 13 Robust Reinforcement Learning Using Integral-Quadratic Constraints (pages 337–358): Charles W. Anderson, Matt Kretchmar, Peter Young and Douglas Hittle
Chapter 14 Supervised Actor-Critic Reinforcement Learning (pages 359–380): Michael T. Rosenstein and Andrew G. Barto
Chapter 15 BPTT and DAC – A Common Framework for Comparison (pages 381–404): Danil V. Prokhorov
Chapter 16 Near-Optimal Control Via Reinforcement Learning and Hybridization (pages 405–432): Augustine O. Esogbue and Warren E. Hearnes
Chapter 17 Multiobjective Control Problems by Reinforcement Learning (pages 433–461): Dong-Oh Kang and Zeungnam Bien
Chapter 18 Adaptive Critic Based Neural Network for Control-Constrained Agile Missile (pages 463–478): S. N. Balakrishnan and Dongchen Han
Chapter 19 Applications of Approximate Dynamic Programming in Power Systems Control (pages 479–515): Ganesh K. Venayagamoorthy, Donald C. Wunsch and Ronald G. Harley
Chapter 20 Robust Reinforcement Learning for Heating, Ventilation, and Air Conditioning Control of Buildings (pages 517–534): Charles W. Anderson, Douglas Hittle, Matt Kretchmar and Peter Young
Chapter 21 Helicopter Flight Control Using Direct Neural Dynamic Programming (pages 535–559): Russell Enns and Jennie Si
Chapter 22 Toward Dynamic Stochastic Optimal Power Flow (pages 561–598): James A. Momoh
Chapter 23 Control, Optimization, Security, and Self-Healing of Benchmark Power Systems (pages 599–634): James A. Momoh and Edwin Zivi


📜 SIMILAR VOLUMES


Reinforcement Learning and Approximate Dynamic Programming for Feedback Control
✍ Frank L. Lewis, Derong Liu 📂 Library 📅 2012 🏛 Wiley-IEEE Press 🌐 English

Reinforcement learning (RL) and adaptive dynamic programming (ADP) have been among the most critical research fields in science and engineering for modern complex systems. This book describes the latest RL and ADP techniques for decision and control in human-engineered systems, covering both single…

Reinforcement Learning and Dynamic Programming Using Function Approximators
✍ Lucian Busoniu, Robert Babuska, Bart De Schutter, Damien Ernst 📂 Library 📅 2010 🌐 English

From household appliances to applications in robotics, engineered systems involving complex dynamics can only be as effective as the algorithms that control them. While Dynamic Programming (DP) has provided researchers with a way to optimally solve decision and control problems involving complex dynamics…

Approximate Dynamic Programming for Dynamic Vehicle Routing
✍ Marlin Wolf Ulmer (auth.) 📂 Library 📅 2017 🏛 Springer International Publishing 🌐 English

This book provides a straightforward overview for every researcher interested in stochastic dynamic vehicle routing problems (SDVRPs). The book is written both for the applied researcher looking for suitable solution approaches to particular problems and for the theoretical researcher looking…

Approximate Dynamic Programming: Solving the Curses of Dimensionality
✍ Warren B. Powell 📂 Library 📅 2007 🏛 Wiley-Interscience 🌐 English

A complete and accessible introduction to the real-world applications of approximate dynamic programming. With the growing levels of sophistication in modern-day operations, it is vital for practitioners to understand how to approach, model, and solve complex industrial problems. Approximate…