Explainable AI with Python
By Leonida Gianfagna, Antonio Di Cecco
- Publisher: Springer
- Year: 2021
- Language: English
- Pages: 207
- Category: Library
Table of Contents
Chapter 1: The Landscape
1.1 Examples of What Explainable AI Is
1.1.1 Learning Phase
1.1.2 Knowledge Discovery
1.1.3 Reliability and Robustness
1.1.4 What Have We Learnt from the Three Examples
1.2 Machine Learning and XAI
1.2.1 Machine Learning Taxonomy
1.2.2 Common Myths
1.3 The Need for Explainable AI
1.4 Explainability and Interpretability: Different Words to Say the Same Thing or Not?
1.4.1 From World to Humans
1.4.2 Correlation Is Not Causation
1.4.3 So What Is the Difference Between Interpretability and Explainability?
1.5 Making Machine Learning Systems Explainable
1.5.1 The XAI Flow
1.5.2 The Big Picture
1.6 Do We Really Need to Make Machine Learning Models Explainable?
1.7 Summary
References
Chapter 2: Explainable AI: Needs, Opportunities, and Challenges
2.1 Human in the Loop
2.1.1 Centaur XAI Systems
2.1.2 XAI Evaluation from a "Human in the Loop" Perspective
2.2 How to Make Machine Learning Models Explainable
2.2.1 Intrinsic Explanations
2.2.2 Post Hoc Explanations
2.2.3 Global or Local Explainability
2.3 Properties of Explanations
2.4 Summary
References
Chapter 3: Intrinsic Explainable Models
3.1 Loss Function
3.2 Linear Regression
3.3 Logistic Regression
3.4 Decision Trees
3.5 K-Nearest Neighbors (KNN)
3.6 Summary
References
Chapter 4: Model-Agnostic Methods for XAI
4.1 Global Explanations: Permutation Importance and Partial Dependence Plot
4.1.1 Ranking Features by Permutation Importance
4.1.2 Permutation Importance on the Train Set
4.1.3 Partial Dependence Plot
4.1.4 Properties of Explanations
4.2 Local Explanations: XAI with Shapley Additive exPlanations
4.2.1 Shapley Values: A Game Theoretical Approach
4.2.2 The First Use of SHAP
4.2.3 Properties of Explanations
4.3 The Road to KernelSHAP
4.3.1 The Shapley Formula
4.3.2 How to Calculate Shapley Values
4.3.3 Local Linear Surrogate Models (LIME)
4.3.4 KernelSHAP Is a Unique Form of LIME
4.4 KernelSHAP and Interactions
4.4.1 The New York Cab Scenario
4.4.2 Train the Model with Preliminary Analysis
4.4.3 Making the Model Explainable with KernelShap
4.4.4 Interactions of Features
4.5 A Faster SHAP for Boosted Trees
4.5.1 Using TreeShap
4.5.2 Providing Explanations
4.6 A Naïve Criticism to SHAP
4.7 Summary
References
Chapter 5: Explaining Deep Learning Models
5.1 Agnostic Approach
5.1.1 Adversarial Features
5.1.2 Augmentations
5.1.3 Occlusions as Augmentations
5.1.4 Occlusions as an Agnostic XAI Method
5.2 Neural Networks
5.2.1 The Neural Network Structure
5.2.2 Why the Neural Network Is Deep? (Versus Shallow)
5.2.3 Rectified Activations (and Batch Normalization)
5.2.4 Saliency Maps
5.3 Opening Deep Networks
5.3.1 Different Layer Explanation
5.3.2 CAM (Class Activation Maps) and Grad-CAM
5.3.3 DeepShap/DeepLift
5.4 A Critic of Saliency Methods
5.4.1 What the Network Sees
5.4.2 Explainability Batch Normalizing Layer by Layer
5.5 Unsupervised Methods
5.5.1 Unsupervised Dimensional Reduction
5.5.2 Dimensional Reduction of Convolutional Filters
5.5.3 Activation Atlases: How to Tell a Wok from a Pan
5.6 Summary
References
Chapter 6: Making Science with Machine Learning and XAI
6.1 Scientific Method in the Age of Data
6.2 Ladder of Causation
6.3 Discovering Physics Concepts with ML and XAI
6.3.1 The Magic of Autoencoders
6.3.2 Discover the Physics of Damped Pendulum with ML and XAI
6.3.3 Climbing the Ladder of Causation
6.4 Science in the Age of ML and XAI
6.5 Summary
References
Chapter 7: Adversarial Machine Learning and Explainability
7.1 Adversarial Examples (AEs): Crash Course
7.1.1 Hands-On Adversarial Examples
7.2 Doing XAI with Adversarial Examples
7.3 Defending Against Adversarial Attacks with XAI
7.4 Summary
References
Chapter 8: A Proposal for a Sustainable Model of Explainable AI
8.1 The XAI "Fil Rouge"
8.2 XAI and GDPR
8.2.1 F.A.S.T. XAI
8.3 Conclusions
8.4 Summary
References
Appendix A
"F.A.S.T. XAI Certification"
Index
SIMILAR VOLUMES
Understand how to use Explainable AI (XAI) libraries and build trust in AI and machine learning models. This book utilizes a problem-solution approach to explaining Machine Learning models and their algorithms. The book starts with model interpretation for supervised learning linear models, which in
Learn the ins and outs of decisions, biases, and reliability of AI algorithms and how to make sense of these predictions. This book explores the so-called black-box models to boost the adaptability, interpretability, and explainability of the decisions made by AI algorithms using frameworks such as
Resolve the black box models in your AI applications to make them fair, trustworthy, and secure. Familiarize yourself with the basic principles and tools to deploy Explainable AI (XAI) into your apps and reporting interfaces. Key Features: Learn explainable AI tools and t