Adversarial Machine Learning

✍ Scribed by Yevgeniy Vorobeychik and Murat Kantarcioglu


Publisher: Springer
Year: 2018
Tongue: English
Leaves: 161
Series: Synthesis Lectures on Artificial Intelligence and Machine Learning
Edition: 1
Category: Library


✦ Synopsis


The increasing abundance of large, high-quality datasets, combined with significant technical advances over the last several decades, has made machine learning a major tool employed across a broad array of tasks, including vision, language, finance, and security. However, this success has been accompanied by important new challenges: many applications of machine learning are adversarial in nature. Some are adversarial because they are safety critical, such as autonomous driving. An adversary in these applications can be a malicious party aiming to cause congestion or accidents, or may even model unusual situations that expose vulnerabilities in the prediction engine. Other applications are adversarial because the task itself, or the data it relies on, is adversarial. For example, an important class of problems in security involves detection, such as malware, spam, and intrusion detection. The use of machine learning for detecting malicious entities creates an incentive among adversaries to evade detection by changing their behavior or the content of the malicious objects they develop.

The field of adversarial machine learning has emerged to study vulnerabilities of machine learning approaches in adversarial settings and to develop techniques that make learning robust to adversarial manipulation. This book provides a technical overview of this field. After reviewing machine learning concepts and approaches, as well as common use cases of these in adversarial settings, we present a general categorization of attacks on machine learning. We then address two major categories of attacks and associated defenses: decision-time attacks, in which an adversary changes the nature of instances seen by a learned model at the time of prediction in order to cause errors, and poisoning (training-time) attacks, in which the actual training dataset is maliciously modified. In our final chapter devoted to technical content, we discuss recent techniques for attacks on deep learning, as well as approaches for improving the robustness of deep neural networks. We conclude with a discussion of several important issues in the area of adversarial learning that, in our view, warrant further research.
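The decision-time (evasion) attacks described above can be sketched minimally in code. The example below is illustrative only and not drawn from the book: it assumes a white-box setting in which the attacker knows the weights of a simple logistic-regression "detector" and, within a small perturbation budget, moves each feature of a flagged instance in the direction that lowers the predicted probability of being malicious.

```python
import numpy as np

# Hypothetical learned detector: P(malicious | x) = sigmoid(w . x + b).
# Weights, bias, and the instance below are made-up illustrative numbers.
w = np.array([2.0, -1.0, 0.5])
b = -0.5

def score(x):
    """Predicted probability that x is malicious."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

x = np.array([1.0, 0.2, 0.8])   # instance originally flagged as malicious
eps = 0.5                        # attacker's per-feature (l_inf) budget

# The gradient of the score with respect to x is proportional to w, so
# shifting each feature against the sign of its weight reduces the score.
x_adv = x - eps * np.sign(w)

print(score(x))      # above 0.5: detected
print(score(x_adv))  # below 0.5: the perturbed copy evades the detector
```

Poisoning attacks differ only in where the manipulation happens: rather than perturbing a test instance, the adversary alters the training data (e.g., by flipping labels) so that the learned weights themselves are corrupted.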

Given the increasing interest in the area of adversarial machine learning, we hope this book provides readers with the tools necessary to successfully engage in research and practice of machine learning in adversarial settings.

✦ Table of Contents


Cover
Copyright Page
Title Page
Contents
List of Figures
Preface
Acknowledgments
Introduction
Machine Learning Preliminaries
Supervised Learning
Regression Learning
Classification Learning
PAC Learnability
Supervised Learning in Adversarial Settings
Unsupervised Learning
Clustering
Principal Component Analysis
Matrix Completion
Unsupervised Learning in Adversarial Settings
Reinforcement Learning
Reinforcement Learning in Adversarial Settings
Bibliographic Notes
Categories of Attacks on Machine Learning
Attack Timing
Information Available to the Attacker
Attacker Goals
Bibliographic Notes
Attacks at Decision Time
Examples of Evasion Attacks on Machine Learning Models
Attacks on Anomaly Detection: Polymorphic Blending
Attacks on PDF Malware Classifiers
Modeling Decision-Time Attacks
White-Box Decision-Time Attacks
Attacks on Binary Classifiers: Adversarial Classifier Evasion
Decision-Time Attacks on Multiclass Classifiers
Decision-Time Attacks on Anomaly Detectors
Decision-Time Attacks on Clustering Models
Decision-Time Attacks on Regression Models
Decision-Time Attacks on Reinforcement Learning
Black-Box Decision-Time Attacks
A Taxonomy of Black-Box Attacks
Modeling Attacker Information Acquisition
Attacking Using an Approximate Model
Bibliographic Notes
Defending Against Decision-Time Attacks
Hardening Supervised Learning against Decision-Time Attacks
Optimal Evasion-Robust Classification
Optimal Evasion-Robust Sparse SVM
Evasion-Robust SVM against Free-Range Attacks
Evasion-Robust SVM against Restrained Attacks
Evasion-Robust Classification on Unrestricted Feature Spaces
Robustness to Adversarially Missing Features
Approximately Hardening Classifiers against Decision-Time Attacks
Relaxation Approaches
General-Purpose Defense: Iterative Retraining
Evasion-Robustness through Feature-Level Protection
Decision Randomization
Model
Optimal Randomized Operational Use of Classification
Evasion-Robust Regression
Bibliographic Notes
Data Poisoning Attacks
Modeling Poisoning Attacks
Poisoning Attacks on Binary Classification
Label-Flipping Attacks
Poison Insertion Attack on Kernel SVM
Poisoning Attacks for Unsupervised Learning
Poisoning Attacks on Clustering
Poisoning Attacks on Anomaly Detection
Poisoning Attack on Matrix Completion
Attack Model
Attacking Alternating Minimization
Attacking Nuclear Norm Minimization
Mimicking Normal User Behaviors
A General Framework for Poisoning Attacks
Black-Box Poisoning Attacks
Bibliographic Notes
Defending Against Data Poisoning
Robust Learning through Data Sub-Sampling
Robust Learning through Outlier Removal
Robust Learning through Trimmed Optimization
Robust Matrix Factorization
Noise-Free Subspace Recovery
Dealing with Noise
Efficient Robust Subspace Recovery
An Efficient Algorithm for Trimmed Optimization Problems
Bibliographic Notes
Attacking and Defending Deep Learning
Neural Network Models
Attacks on Deep Neural Networks: Adversarial Examples
l_2-Norm Attacks
l_∞-Norm Attacks
l_0-Norm Attacks
Attacks in the Physical World
Black-Box Attacks
Making Deep Learning Robust to Adversarial Examples
Robust Optimization
Retraining
Distillation
Bibliographic Notes
The Road Ahead
Beyond Robust Optimization
Incomplete Information
Confidence in Predictions
Randomization
Multiple Learners
Models and Validation
Bibliography
Authors' Biographies
Index


📜 SIMILAR VOLUMES


Adversarial Machine Learning
✍ Yevgeniy Vorobeychik, Murat Kantarcioglu 📂 Library 📅 2018 🏛 Morgan & Claypool 🌐 English

The increasing abundance of large high-quality datasets, combined with significant technical advances over the last several decades have made machine learning into a major tool employed across a broad array of tasks including vision, language, finance, and security. However, success has been accompa

Adversarial Machine Learning
✍ Anthony D. Joseph, Blaine Nelson, Benjamin I. P. Rubinstein, J. D. Tygar 📂 Library 📅 2019 🏛 Cambridge University Press 🌐 English

Written by leading researchers, this complete introduction brings together all the theory and tools needed for building robust machine learning in adversarial environments. Discover how machine learning systems can adapt when an adversary actively poisons data to manipulate statistical inference, le

Adversarial Robustness for Machine Learning
✍ Pin-Yu Chen, Cho-Jui Hsieh 📂 Library 📅 2022 🏛 Academic Press 🌐 English

Adversarial Robustness for Machine Learning summarizes the recent progress on this topic and introduces popular algorithms on adversarial attack, defense and verification. Sections cover adversarial attack, verification and defense, mainly focusing on image classification applications which ar

Machine Learning Algorithms: Adversarial
✍ Fuwei Li, Lifeng Lai, Shuguang Cui 📂 Library 📅 2022 🏛 Springer 🌐 English

This book demonstrates the optimal adversarial attacks against several important signal processing algorithms. Through presenting the optimal attacks in wireless sensor networks, array signal processing, principal component analysis, etc, the authors reveal the robustness of the signal proc