
Explainable AI: Interpreting, Explaining and Visualizing Deep Learning

✍ Scribed by Wojciech Samek, Grégoire Montavon, Andrea Vedaldi, Lars Kai Hansen, Klaus-Robert Müller


Publisher: Springer International Publishing
Year: 2019
Tongue: English
Leaves: 435
Series: Lecture Notes in Computer Science 11700
Edition: 1st ed. 2019
Category: Library


✦ Synopsis


The development of “intelligent” systems that can make decisions and act autonomously promises faster and more consistent decision-making. A limiting factor for broader adoption of AI technology, however, is the inherent risk of ceding human control and oversight to “intelligent” machines. For sensitive tasks involving critical infrastructure or affecting human well-being and health, it is crucial to limit the possibility of improper, non-robust and unsafe decisions and actions. Before deploying an AI system, there is a strong need to validate its behavior and thereby establish guarantees that it will continue to perform as expected in a real-world environment. In pursuit of this objective, researchers have explored ways for humans to verify the agreement between an AI system's decision structure and their own ground-truth knowledge. Explainable AI (XAI) has developed as a subfield of AI focused on exposing complex AI models to humans in a systematic and interpretable manner.

The 22 chapters included in this book provide a timely snapshot of algorithms, theory, and applications of interpretable and explainable AI techniques that have been proposed recently, reflecting the current discourse in this field and pointing toward directions of future development. The book is organized in six parts: towards AI transparency; methods for interpreting AI systems; explaining the decisions of AI systems; evaluating interpretability and explanations; applications of explainable AI; and software for explainable AI.

✦ Table of Contents


Front Matter ....Pages i-xi
Part I: Towards AI Transparency ....Pages 1-3
Towards Explainable Artificial Intelligence (Wojciech Samek, Klaus-Robert Müller)....Pages 5-22
Transparency: Motivations and Challenges (Adrian Weller)....Pages 23-40
Interpretability in Intelligent Systems – A New Concept? (Lars Kai Hansen, Laura Rieger)....Pages 41-49
Part II: Methods for Interpreting AI Systems ....Pages 51-53
Understanding Neural Networks via Feature Visualization: A Survey (Anh Nguyen, Jason Yosinski, Jeff Clune)....Pages 55-76
Interpretable Text-to-Image Synthesis with Hierarchical Semantic Layout Generation (Seunghoon Hong, Dingdong Yang, Jongwook Choi, Honglak Lee)....Pages 77-95
Unsupervised Discrete Representation Learning (Weihua Hu, Takeru Miyato, Seiya Tokui, Eiichi Matsumoto, Masashi Sugiyama)....Pages 97-119
Towards Reverse-Engineering Black-Box Neural Networks (Seong Joon Oh, Bernt Schiele, Mario Fritz)....Pages 121-144
Part III: Explaining the Decisions of AI Systems ....Pages 145-147
Explanations for Attributing Deep Neural Network Predictions (Ruth Fong, Andrea Vedaldi)....Pages 149-167
Gradient-Based Attribution Methods (Marco Ancona, Enea Ceolini, Cengiz Öztireli, Markus Gross)....Pages 169-191
Layer-Wise Relevance Propagation: An Overview (Grégoire Montavon, Alexander Binder, Sebastian Lapuschkin, Wojciech Samek, Klaus-Robert Müller)....Pages 193-209
Explaining and Interpreting LSTMs (Leila Arras, José Arjona-Medina, Michael Widrich, Grégoire Montavon, Michael Gillhofer, Klaus-Robert Müller et al.)....Pages 211-238
Part IV: Evaluating Interpretability and Explanations ....Pages 239-241
Comparing the Interpretability of Deep Networks via Network Dissection (Bolei Zhou, David Bau, Aude Oliva, Antonio Torralba)....Pages 243-252
Gradient-Based Vs. Propagation-Based Explanations: An Axiomatic Comparison (Grégoire Montavon)....Pages 253-265
The (Un)reliability of Saliency Methods (Pieter-Jan Kindermans, Sara Hooker, Julius Adebayo, Maximilian Alber, Kristof T. Schütt, Sven Dähne et al.)....Pages 267-280
Part V: Applications of Explainable AI ....Pages 281-284
Visual Scene Understanding for Autonomous Driving Using Semantic Segmentation (Markus Hofmarcher, Thomas Unterthiner, José Arjona-Medina, Günter Klambauer, Sepp Hochreiter, Bernhard Nessler)....Pages 285-296
Understanding Patch-Based Learning of Video Data by Explaining Predictions (Christopher J. Anders, Grégoire Montavon, Wojciech Samek, Klaus-Robert Müller)....Pages 297-309
Quantum-Chemical Insights from Interpretable Atomistic Neural Networks (Kristof T. Schütt, Michael Gastegger, Alexandre Tkatchenko, Klaus-Robert Müller)....Pages 311-330
Interpretable Deep Learning in Drug Discovery (Kristina Preuer, Günter Klambauer, Friedrich Rippmann, Sepp Hochreiter, Thomas Unterthiner)....Pages 331-345
NeuralHydrology – Interpreting LSTMs in Hydrology (Frederik Kratzert, Mathew Herrnegger, Daniel Klotz, Sepp Hochreiter, Günter Klambauer)....Pages 347-362
Feature Fallacy: Complications with Interpreting Linear Decoding Weights in fMRI (Pamela K. Douglas, Ariana Anderson)....Pages 363-378
Current Advances in Neural Decoding (Marcel A. J. van Gerven, Katja Seeliger, Umut Güçlü, Yağmur Güçlütürk)....Pages 379-394
Part VI: Software for Explainable AI ....Pages 395-397
Software and Application Patterns for Explanation Methods (Maximilian Alber)....Pages 399-433
Back Matter ....Pages 435-439

✦ Subjects


Computer Science; Image Processing and Computer Vision; Computing Milieux; Computer Systems Organization and Communication Networks


📜 SIMILAR VOLUMES


Explainable Deep Learning AI: Methods and Challenges
✍ Jenny Benois-Pineau, Romain Bourqui, Dragutin Petkovic, Georges Quenot 📂 Library 📅 2023 🏛 Academic Press 🌐 English

Explainable Deep Learning AI: Methods and Challenges presents the latest works of leading researchers in the XAI area, offering an overview of the XAI area, along with several novel technical methods and applications that address explainability challenges for Deep Learning AI systems.

Explaining Explanation
✍ David-Hillel Ruben 📂 Library 📅 1992 🏛 Routledge 🌐 English

In Explaining Explanation, David-Hillel Ruben provides a non-technical discussion of some of the main historical attempts to explain the concept of explanation, examining the works of Plato, Aristotle, John Stuart Mill, and Carl Hempel.

Explaining explanation
✍ Ruben, David-Hillel 📂 Library 📅 2016 🏛 Routledge 🌐 English

I. Getting our bearings -- II. Plato on explanation -- III. Aristotle on explanation -- IV. Mill and Hempel on explanation -- V. The ontology of explanation -- VI. Arguments, laws, and explanation -- VII. A realist theory of explanation.

Explaining explanation
✍ Ruben, David-Hillel 📂 Library 📅 2004 🏛 Routledge, Taylor & Francis e-Library 🌐 English