Distributional Reinforcement Learning (Adaptive Computation and Machine Learning)
By Marc G. Bellemare, Will Dabney, and Mark Rowland
- Publisher: The MIT Press
- Year: 2023
- Language: English
- Pages: 385
- Category: Library
Free of charge; no registration required. For personal study only.
Synopsis
The first comprehensive guide to distributional reinforcement learning, providing a new mathematical formalism for thinking about decisions from a probabilistic perspective.
Distributional reinforcement learning is a new mathematical formalism for thinking about decisions. Going beyond the common approach to reinforcement learning and expected values, it focuses on the total reward or return obtained as a consequence of an agent's choices; specifically, how this return behaves from a probabilistic perspective. In this first comprehensive guide to distributional reinforcement learning, Marc G. Bellemare, Will Dabney, and Mark Rowland, who spearheaded development of the field, present its key concepts and review some of its many applications. They demonstrate its power to account for many complex, interesting phenomena that arise from interactions with one's environment.
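For reference, the central object is the random return and its random-variable Bellman equation, which the book develops in Chapter 2. The equations below use notation standard in this literature rather than quotations from the text: x denotes a state, R_t the reward at time t, gamma the discount factor, and X' the next state.

G^\pi(x) = \sum_{t=0}^{\infty} \gamma^t R_t

G^\pi(x) \overset{D}{=} R + \gamma \, G^\pi(X')

Here \overset{D}{=} denotes equality in distribution: the return from x has the same distribution as one reward plus the discounted return from the random next state.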
The authors present core ideas from classical reinforcement learning to contextualize distributional topics and include mathematical proofs pertaining to major results discussed in the text. They guide the reader through a series of algorithmic and mathematical developments that, in turn, characterize, compute, estimate, and make decisions on the basis of the random return. Practitioners in disciplines as diverse as finance (risk management), computational neuroscience, computational psychiatry, psychology, macroeconomics, and robotics are already using distributional reinforcement learning, paving the way for its expanding applications in mathematical finance, engineering, and the life sciences. More than a mathematical approach, distributional reinforcement learning represents a new perspective on how intelligent agents make predictions and decisions.
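Purely as an illustrative sketch, and not code from the book, the snippet below estimates a return distribution by Monte Carlo rollouts in a toy chain with a single non-terminal state and random termination. All names (toy_step, sample_return, GAMMA, and so on) are hypothetical; the point is only that the collection of sampled returns, not just its mean, is the kind of object the book's algorithms characterize and estimate.

import random

GAMMA = 0.9          # discount factor, assumed for this toy example
N_ROLLOUTS = 10_000  # number of Monte Carlo rollouts
HORIZON = 200        # truncation horizon; GAMMA**200 is negligible

def toy_step(state):
    # Hypothetical toy dynamics: a noisy reward each step, with a 10%
    # chance of terminating; not an environment taken from the book.
    reward = random.gauss(1.0, 0.5)
    done = random.random() < 0.1
    return state, reward, done

def sample_return(start_state=0):
    # One Monte Carlo sample of the discounted return G = sum_t gamma^t * R_t.
    g, discount, state = 0.0, 1.0, start_state
    for _ in range(HORIZON):
        state, reward, done = toy_step(state)
        g += discount * reward
        discount *= GAMMA
        if done:
            break
    return g

# The empirical distribution of these samples approximates the return
# distribution, rather than only its expected value.
returns = [sample_return() for _ in range(N_ROLLOUTS)]
mean_return = sum(returns) / N_ROLLOUTS
print(f"mean return ~ {mean_return:.2f}; "
      f"spread [{min(returns):.2f}, {max(returns):.2f}]")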
Table of Contents
Contents
Preface
1. Introduction
1.1 Why Distributional Reinforcement Learning?
1.2 An Example: Kuhn Poker
1.3 How Is Distributional Reinforcement Learning Different?
1.4 Intended Audience and Organization
1.5 Bibliographical Remarks
2. The Distribution of Returns
2.1 Random Variables and Their Probability Distributions
2.2 Markov Decision Processes
2.3 The Pinball Model
2.4 The Return
2.5 The Bellman Equation
2.6 Properties of the Random Trajectory
2.7 The Random-Variable Bellman Equation
2.8 From Random Variables to Probability Distributions
2.9 Alternative Notions of the Return Distribution
2.10 Technical Remarks
2.11 Bibliographical Remarks
2.12 Exercises
3. Learning the Return Distribution
3.1 The Monte Carlo Method
3.2 Incremental Learning
3.3 Temporal-Difference Learning
3.4 From Values to Probabilities
3.5 The Projection Step
3.6 Categorical Temporal-Difference Learning
3.7 Learning to Control
3.8 Further Considerations
3.9 Technical Remarks
3.10 Bibliographical Remarks
3.11 Exercises
4. Operators and Metrics
4.1 The Bellman Operator
4.2 Contraction Mappings
4.3 The Distributional Bellman Operator
4.4 Wasserstein Distances for Return Functions
4.5 ℓp Probability Metrics and the Cramér Distance
4.6 Sufficient Conditions for Contractivity
4.7 A Matter of Domain
4.8 Weak Convergence of Return Functions
4.9 Random-Variable Bellman Operators
4.10 Technical Remarks
4.11 Bibliographical Remarks
4.12 Exercises
5. Distributional Dynamic Programming
5.1 Computational Model
5.2 Representing Return-Distribution Functions
5.3 The Empirical Representation
5.4 The Normal Representation
5.5 Fixed-Size Empirical Representations
5.6 The Projection Step
5.7 Distributional Dynamic Programming
5.8 Error Due to Diffusion
5.9 Convergence of Distributional Dynamic Programming
5.10 Quality of the Distributional Approximation
5.11 Designing Distributional Dynamic Programming Algorithms
5.12 Technical Remarks
5.13 Bibliographical Remarks
5.14 Exercises
6. Incremental Algorithms
6.1 Computation and Statistical Estimation
6.2 From Operators to Incremental Algorithms
6.3 Categorical Temporal-Difference Learning
6.4 Quantile Temporal-Difference Learning
6.5 An Algorithmic Template for Theoretical Analysis
6.6 The Right Step Sizes
6.7 Overview of Convergence Analysis
6.8 Convergence of Incremental Algorithms
6.9 Convergence of Temporal-Difference Learning
6.10 Convergence of Categorical Temporal-Difference Learning
6.11 Technical Remarks
6.12 Bibliographical Remarks
6.13 Exercises
7. Control
7.1 Risk-Neutral Control
7.2 Value Iteration and Q-Learning
7.3 Distributional Value Iteration
7.4 Dynamics of Distributional Optimality Operators
7.5 Dynamics in the Presence of Multiple Optimal Policies
7.6 Risk and Risk-Sensitive Control
7.7 Challenges in Risk-Sensitive Control
7.8 Conditional Value-at-Risk
7.9 Technical Remarks
7.10 Bibliographical Remarks
7.11 Exercises
8. Statistical Functionals
8.1 Statistical Functionals
8.2 Moments
8.3 Bellman Closedness
8.4 Statistical Functional Dynamic Programming
8.5 Relationship to Distributional Dynamic Programming
8.6 Expectile Dynamic Programming
8.7 Infinite Collections of Statistical Functionals
8.8 Moment Temporal-Difference Learning
8.9 Technical Remarks
8.10 Bibliographical Remarks
8.11 Exercises
9. Linear Function Approximation
9.1 Function Approximation and Aliasing
9.2 Optimal Linear Value Function Approximations
9.3 A Projected Bellman Operator for Linear Value Function Approximation
9.4 Semi-Gradient Temporal-Difference Learning
9.5 Semi-Gradient Algorithms for Distributional Reinforcement Learning
9.6 An Algorithm Based on Signed Distributions
9.7 Convergence of the Signed Algorithm*
9.8 Technical Remarks
9.9 Bibliographical Remarks
9.10 Exercises
10. Deep Reinforcement Learning
10.1 Learning with a Deep Neural Network
10.2 Distributional Reinforcement Learning with Deep Neural Networks
10.3 Implicit Parameterizations
10.4 Evaluation of Deep Reinforcement Learning Agents
10.5 How Predictions Shape State Representations
10.6 Technical Remarks
10.7 Bibliographical Remarks
10.8 Exercises
11. Two Applications and a Conclusion
11.1 Multiagent Reinforcement Learning
11.2 Computational Neuroscience
11.3 Conclusion
11.4 Bibliographical Remarks
References
Index
SIMILAR VOLUMES
I am a software developer and worked on applying Reinforcement Learning (RL) in cognitive fields for my patent work (pending). This book is highly regarded in the RL literature and is probably one of the handful of books that explicitly address RL as a subject. The book has a good balance between …
Handling inherent uncertainty and exploiting compositional structure are fundamental to understanding and designing large-scale systems. Statistical relational learning builds on ideas from probability theory and statistics to address uncertainty while incorporating tools from logic, databases and p…
An introduction to a broad range of topics in deep learning, covering mathematical and conceptual background, deep learning techniques used in industry, and research perspectives. "Written by three experts in the field, Deep Learning is the only comprehensive book on the subje…