𝔖 Scriptorium
Machine Learning: The Basics (Machine Learning: Foundations, Methodologies, and Applications)

✍ Scribed by Alexander Jung


Publisher
Springer
Year
2022
Tongue
English
Leaves
225
Edition
1st ed. 2022
Category
Library

⬇  Acquire This Volume

No coin nor oath required. For personal study only.

✦ Synopsis


Machine learning (ML) has become a commonplace element in our everyday lives and a standard tool for many fields of science and engineering. To make optimal use of ML, it is essential to understand its underlying principles. 

This book approaches ML as the computational implementation of the scientific principle. This principle consists of continuously adapting a model of a given data-generating phenomenon by minimizing some form of loss incurred by its predictions. 
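That adaptation loop can be sketched, purely as an illustration and not as code from the book, as gradient descent on a squared-error loss; the data values, the single-weight model `h(x) = w*x`, and the learning rate `lr` below are all assumed for the example:

```python
# Illustrative sketch (not from the book): "continuously adapting a model
# by minimizing a loss" realized as plain gradient descent.

# Data: observations of a phenomenon assumed to follow y ≈ 2*x (made up).
xs = [0.0, 1.0, 2.0, 3.0]
ys = [0.1, 2.1, 3.9, 6.2]

# Model: hypotheses h(x) = w*x, parametrized by a single weight w.
w = 0.0

# Loss: average squared error of the model's predictions.
def loss(w):
    return sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

# Adapt the model: each gradient step nudges w to reduce the loss.
lr = 0.02
for _ in range(200):
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad

print(round(w, 2))  # w settles near 2, the slope the data suggests
```

The loop never "solves" anything in closed form; it only keeps adjusting the model against the loss, which is exactly the trial-and-error reading of the scientific principle described above.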

The book trains readers to break down various ML applications and methods in terms of data, model, and loss, thus helping them to choose from the vast range of ready-made ML methods.

The book’s three-component approach to ML provides uniform coverage of a wide range of concepts and techniques. As a case in point, techniques for regularization, privacy preservation, and explainability amount to specific design choices for the model, data, and loss of an ML method.
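As a hedged illustration of that point (again not code from the book), regularization can be read as nothing more than a change to the loss component: the data and the linear model `h(x) = w*x` stay fixed, and adding an assumed penalty weight `lam` times `w**2` to the squared loss turns least squares into ridge regression, whose one-dimensional minimizer has a simple closed form:

```python
# Illustrative sketch (not from the book): regularization as a design
# choice for the loss. Data and model are unchanged; only the loss gains
# a penalty term lam * w**2 (ridge regression). Values are made up.

xs = [0.0, 1.0, 2.0, 3.0]
ys = [0.1, 2.1, 3.9, 6.2]

def ridge_fit(lam):
    # For h(x) = w*x with loss (1/n)*sum((w*x - y)**2) + lam*w**2,
    # setting the derivative to zero gives
    #   w = sum(x*y) / (sum(x**2) + n*lam).
    n = len(xs)
    return sum(x * y for x, y in zip(xs, ys)) / (sum(x * x for x in xs) + n * lam)

print(ridge_fit(0.0))   # lam = 0: plain least squares
print(ridge_fit(10.0))  # larger lam shrinks w toward 0
```

Dialing `lam` up or down changes only the loss, yet it trades data fit against model simplicity — the kind of decomposition into data, model, and loss that the book uses throughout.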

✦ Table of Contents


Preface
Acknowledgements
Contents
Symbols
Sets
Matrices and Vectors
Machine Learning
References
1 Introduction
1.1 Relation to Other Fields
1.1.1 Linear Algebra
1.1.2 Optimization
1.1.3 Theoretical Computer Science
1.1.4 Information Theory
1.1.5 Probability Theory and Statistics
1.1.6 Artificial Intelligence
1.2 Flavours of Machine Learning
1.2.1 Supervised Learning
1.2.2 Unsupervised Learning
1.2.3 Reinforcement Learning
1.3 Organization of this Book
References
2 Components of ML
2.1 The Data
2.1.1 Features
2.1.2 Labels
2.1.3 Scatterplot
2.1.4 Probabilistic Models for Data
2.2 The Model
2.2.1 Parametrized Hypothesis Spaces
2.2.2 The Size of a Hypothesis Space
2.3 The Loss
2.3.1 Loss Functions for Numeric Labels
2.3.2 Loss Functions for Categorical Labels
2.3.3 Loss Functions for Ordinal Label Values
2.3.4 Empirical Risk
2.3.5 Regret
2.3.6 Rewards as Partial Feedback
2.4 Putting Together the Pieces
2.5 Exercises
References
3 The Landscape of ML
3.1 Linear Regression
3.2 Polynomial Regression
3.3 Least Absolute Deviation Regression
3.4 The Lasso
3.5 Gaussian Basis Regression
3.6 Logistic Regression
3.7 Support Vector Machines
3.8 Bayes Classifier
3.9 Kernel Methods
3.10 Decision Trees
3.11 Deep Learning
3.12 Maximum Likelihood
3.13 Nearest Neighbour Methods
3.14 Deep Reinforcement Learning
3.15 LinUCB
3.16 Exercises
References
4 Empirical Risk Minimization
4.1 The Basic Idea of Empirical Risk Minimization
4.2 Computational and Statistical Aspects of ERM
4.3 ERM for Linear Regression
4.4 ERM for Decision Trees
4.5 ERM for Bayes Classifiers
4.6 Training and Inference Periods
4.7 Online Learning
4.8 Exercise
References
5 Gradient-Based Learning
5.1 The GD Step
5.2 Choosing Step Size
5.3 When to Stop?
5.4 GD for Linear Regression
5.5 GD for Logistic Regression
5.6 Data Normalization
5.7 Stochastic GD
5.8 Exercises
References
6 Model Validation and Selection
6.1 Overfitting
6.2 Validation
6.2.1 The Size of the Validation Set
6.2.2 k-Fold Cross Validation
6.2.3 Imbalanced Data
6.3 Model Selection
6.4 A Probabilistic Analysis of Generalization
6.5 The Bootstrap
6.6 Diagnosing ML
6.7 Exercises
References
7 Regularization
7.1 Structural Risk Minimization
7.2 Robustness
7.3 Data Augmentation
7.4 Statistical and Computational Aspects of Regularization
7.5 Semi-Supervised Learning
7.6 Multitask Learning
7.7 Transfer Learning
7.8 Exercises
References
8 Clustering
8.1 Hard Clustering with k-Means
8.2 Soft Clustering with Gaussian Mixture Models
8.3 Connectivity-Based Clustering
8.4 Clustering as Preprocessing
8.5 Exercises
References
9 Feature Learning
9.1 Basic Principle of Dimensionality Reduction
9.2 Principal Component Analysis
9.2.1 Combining PCA with Linear Regression
9.2.2 How to Choose the Number of PCs?
9.2.3 Data Visualisation
9.2.4 Extensions of PCA
9.3 Feature Learning for Non-numeric Data
9.4 Feature Learning for Labeled Data
9.5 Privacy-Preserving Feature Learning
9.6 Random Projections
9.7 Dimensionality Increase
9.8 Exercises
References
10 Transparent and Explainable ML
10.1 A Model Agnostic Method
10.1.1 Probabilistic Data Model for XML
10.1.2 Computing Optimal Explanations
10.2 Explainable Empirical Risk Minimization
10.3 Exercises
References
Appendix Glossary
References
Index


📜 SIMILAR VOLUMES



Artificial Intelligence in Business Mana
✍ Teik Toe Teoh; Yu Jin Goh 📂 Library 📅 2023 🏛 Springer Nature Singapore 🌐 English

Artificial intelligence (AI) is rapidly gaining significance in the business world. As more and more organizations adopt AI technologies, there is a growing demand for business leaders, managers, and practitioners who can harness AI’s potential to improve operations, increase efficiency, and drive …