๐”– Scriptorium
โœฆ   LIBER   โœฆ

๐Ÿ“

Responsible Graph Neural Networks

โœ Scribed by Mohamed Abdel-Basset, Nour Moustafa, Hossam Hawash, Zahir Tari


Publisher: Chapman and Hall/CRC
Language: English
Pages: 324
Category: Library


Synopsis


Increasingly frequent and complex cyber threats demand robust, automated, and rapid responses from cyber-security specialists. This book offers a comprehensive study of graph learning for cyber security, emphasizing graph neural networks (GNNs) and their security applications.

The book is organized into three parts covering the basics, methods and practices, and advanced topics. The first part grounds the reader in graph data structures and graph embedding, and gives a taxonomic view of GNNs and their cyber-security applications. The second part explains three categories of graph learning (deterministic, generative, and reinforcement learning) and how each can be used to develop cyber-defense models. For every category, the discussion covers applicability to simple and complex graphs, scalability, representative algorithms, and technical details.

Undergraduate students, graduate students, researchers, cyber analysts, and AI engineers looking to understand practical deep learning methods will find this book an invaluable resource.

Table of Contents


Cover
Half Title
Title Page
Copyright Page
Dedication
Contents
Preface
How to Use This Book
1. Introduction to Graph Intelligence
1.1. Introduction
1.2. Feedforward Neural Network (FFNN)
1.2.1. Architecture
1.2.2. Activation Functions
1.2.2.1. Binary Step Function
1.2.2.2. Linear Activation Function
1.2.2.3. Nonlinear Activation Functions
1.3. Convolutional Neural Networks (CNNs)
1.3.1. Convolutional Layer
1.3.2. Pooling Layers
1.3.2.1. Max Pooling
1.3.2.2. Average Pooling
1.4. Recurrent Neural Networks (RNNs)
1.4.1. Vanilla RNN
1.4.2. Long Short-Term Memory (LSTM)
1.4.3. Gated Recurrent Units (GRUs)
1.4.4. Bidirectional Recurrent Neural Network (Bi-RNN)
1.5. Autoencoder
1.6. Deep Learning for Graph Intelligence
1.7. What Is Covered in This Book?
1.8. Case Study
References
2. Fundamentals of Graph Representations
2.1. Introduction
2.2. Graph Representation
2.3. Properties and Measures
2.3.1. Degree
2.3.2. Connectivity
2.3.3. Centrality
2.3.3.1. Degree Centrality
2.3.3.2. Eigenvector Centrality
2.3.3.3. Katz Centrality
2.3.3.4. Betweenness Centrality
2.3.3.5. Closeness Centrality
2.3.3.6. Harmonic Centrality
2.4. Spectral Graph Analysis
2.4.1. Laplacian Matrix
2.4.2. Graph Laplacian Matrix: On Eigenvalues
2.5. Graph Signal Analysis
2.5.1. Graph Fourier Transform
2.6. Complex Graphs
2.6.1. Bipartite Graphs
2.6.2. Heterogeneous Graphs
2.6.3. Multi-dimensional Graphs
2.6.4. Signed Graphs
2.6.5. Hypergraphs
2.6.6. Dynamic Graphs
2.7. Graph Intelligence Tasks
2.7.1. Graph-Oriented Tasks
2.7.1.1. Graph Classification
2.7.2. Node-Oriented Tasks
2.7.2.1. Node Classification
2.7.2.2. Link Prediction
2.8. Case Study
References
3. Graph Embedding: Methods, Taxonomies, and Applications
3.1. Introduction
3.2. Homogeneous Graph Embedding
3.2.1. Node Co-occurrence
3.2.2. Node State
3.2.3. Community Topology
3.3. Heterogeneous Graph Embedding
3.3.1. Application-Based Heterogeneous Graph Embedding
3.3.2. Feature-Based Heterogeneous Graph Embedding
3.3.3. Topology-Retained Heterogeneous Graph Embedding
3.3.3.1. Edge-Based Embedding
3.3.3.2. Path-Based Embedding
3.3.3.3. Subgraph Embedding
3.3.4. Dynamic Heterogeneous Graph Embedding
3.4. Bipartite Graph Embedding
3.5. Case Study
References
4. Toward Graph Neural Networks: Essentials and Pillars
4.1. Introduction
4.2. Graph Filters
4.2.1. Spatial-Dependent Graph Filters
4.2.2. Spectral-Dependent Graph Filters
4.3. Graph Normalization
4.3.1. Batch Normalization
4.3.2. Instance Normalization
4.3.3. Layer Normalization
4.3.4. Graph Normalization
4.3.5. Graph Size Normalization
4.3.6. Pair Normalization
4.3.7. Mean Subtraction Normalization
4.3.8. Message Normalization
4.3.9. Differentiable Group Normalization
4.4. Graph Pooling
4.4.1. Global Add Pooling
4.4.2. Global Mean Pooling
4.4.3. Global Max Pooling
4.4.4. Top-k Pooling
4.4.5. Self-Attention (SA) Graph Pooling
4.4.6. Sort Pooling
4.4.7. Edge Pooling
4.4.8. Adaptive Structure Aware Pooling (ASAP)
4.4.9. PAN Pooling
4.4.10. Memory Pooling
4.4.11. Differentiable Pooling
4.4.12. MinCut Pooling
4.4.13. Spectral Modularity Pooling
4.5. Graph Aggregation
4.5.1. Sum Aggregation
4.5.2. Mean Aggregation
4.5.3. Max Aggregation
4.5.4. Min Aggregation
4.5.5. Multiple Aggregation
4.5.6. Variance Aggregation
4.5.7. Standard Deviation (STD) Aggregation
4.5.8. SoftMax Aggregation
4.5.9. Power Mean Aggregation
4.5.10. Long Short-Term Memory (LSTM) Aggregation
4.5.11. Set2Set
4.5.12. Degree Scaler Aggregation
4.5.13. Graph Multiset Transformer
4.5.14. Attentional Aggregation
4.6. Case Study
References
5. Graph Convolution Networks: A Journey from Start to End
5.1. Introduction
5.2. Graph Convolutional Network
5.3. Deeper Graph Convolution Network
5.4. GCN with Initial Residual and Identity Mapping (GCNII)
5.5. Topology Adaptive Graph Convolutional Networks
5.6. Relational Graph Convolutional Network
5.7. Case Study
References
6. Graph Attention Networks: A Journey from Start to End
6.1. Introduction
6.2. Graph Attention Network
6.3. Graph Attention Network Version 2 (GATv2)
6.4. Generalized Graph Transformer Network
6.5. Graph Transformer Network (GTN)
6.6. Case Study
References
7. Recurrent Graph Neural Networks: A Journey from Start to End
7.1. Introduction
7.2. Tree-Long Short-Term Memory
7.2.1. Child-Sum Tree-LSTMs
7.2.2. N-ary Tree-LSTMs
7.3. Gated Graph Sequence Neural Networks
7.3.1. Graph Classification
7.3.2. Node Selection
7.3.3. Sequence Outputs
7.4. Graph-Gated Recurrent Units
7.5. Case Study
References
8. Graph Autoencoders: A Journey from Start to End
8.1. Introduction
8.2. General Framework of Graph Autoencoders
8.3. Variational Graph Autoencoder
8.4. Regularized Variational Graph Autoencoder
8.5. Graphite Variational Autoencoder
8.6. Dirichlet Graph Variational Autoencoder (DGVAE)
8.7. Case Study
References
9. Interpretable Graph Intelligence: A Journey from Black to White Box
9.1. Introduction
9.2. Interpretability Methods for Graph Intelligence
9.3. Instance-Level Interpretability
9.3.1. Gradients-Dependent Explanations
9.3.1.1. Conceptual View
9.3.1.2. Methods
9.3.2. Perturbation-Dependent Explanation Methods
9.3.2.1. Conceptual View
9.3.2.2. Explanation Methods
9.3.3. Surrogate Models
9.3.3.1. A Conceptual View
9.3.3.2. Surrogate Interpretability Methods
9.3.4. Decomposition Explanation
9.3.4.1. Conceptual View
9.3.4.2. Decomposition Methods
9.4. Model-Level Explanations
9.5. Interpretability Evaluation Metrics
9.5.1. Fidelity Measure
9.5.2. Sparsity Measure
9.5.3. Stability Measure
9.6. Case Study
References
10. Toward Privacy Preserved Graph Intelligence: Concepts, Methods, and Applications
10.1. Introduction
10.2. Privacy Threats for Graph Intelligence
10.3. Threat Models of Privacy Attacks
10.3.1. Methods of Privacy Attack on GNNs
10.4. Differential Privacy for Graph Intelligence
10.5. Federated Graph Intelligence
10.5.1. Horizontal FL
10.5.2. Vertical FL
10.5.3. Federated Transfer Learning (FTL)
10.6. Open-Source Frameworks
10.6.1. TensorFlow Federated
10.6.2. FedML
10.6.3. Federated AI Technology Enabler (FATE)
10.6.4. IBM Federated Learning
10.6.5. Flower
10.6.6. Leaf
10.6.7. NVIDIA Federated Learning Application Runtime Environment (NVIDIA FLARE)
10.6.8. OpenFL
10.6.9. PaddleFL
10.6.10. PySyft and PyGrid
10.6.11. Sherpa.ai
10.7. Case Study
References
Index


Similar Volumes


Advances in Graph Neural Networks
โœ Chuan Shi, Xiao Wang, Cheng Yang ๐Ÿ“‚ Library ๐Ÿ“… 2022 ๐Ÿ› Springer ๐ŸŒ English

This book provides a comprehensive introduction to the foundations and frontiers of graph neural networks. In addition, the book introduces the basic concepts and definitions in graph representation learning and discusses the development of advanced graph representation learning methods with a …

Introduction to Graph Neural Networks
By Zhiyuan Liu, Jie Zhou · Library · 2020 · Morgan & Claypool · English

Graphs are useful data structures in complex real-life applications such as modeling physical systems, learning molecular fingerprints, controlling traffic networks, and recommending friends in social networks. However, these tasks require dealing with non-Euclidean graph data that contain …

Graph Neural Networks: Foundations, Frontiers, and Applications
By Lingfei Wu, Peng Cui, Jian Pei, Liang Zhao · Library · 2022 · Springer · English

Deep learning models are at the core of artificial intelligence research today. It is well known that deep learning techniques are disruptive for Euclidean data, such as images or sequence data, and not immediately applicable to graph-structured data such as text. This gap has driven a wave of re…

Concepts and Techniques of Graph Neural Networks
By Vinod Kumar, Dharmendra Singh Rajput · Library · 2023 · Engineering Science Reference · English

"This book will aim to provide stepwise discussion, exhaustive literature review, detailed analysis and discussion, and rigorous experimentation results, with an application-oriented approach demonstrated with respect to applications of Graph Neural Networks (GNNs). It will be written to develop the …