Regularization in Deep Learning (Manning Early Access Program, Version 3)
By Peng Liu
- Publisher: Manning Publications
- Year: 2022
- Language: English
- Pages: 177
- Edition: MEAP Edition
Table of Contents
Regularization in Deep Learning MEAP V03
Copyright
Welcome letter
Brief contents
Chapter 1: Introducing regularization
1.1 Why do we need regularization?
1.2 Curse of dimensionality
1.3 Understanding underfitting and overfitting
1.4 Understanding bias-variance trade-off
1.5 More on the model training path
1.6 Understanding the model training process
1.7 The many faces of regularization
1.8 Summary
Chapter 2: Generalization: a classical view
2.1 The data
2.1.1 Sampling from the underlying data distribution
2.1.2 The train-test split
2.2 The model
2.2.1 The prediction function
2.2.2 The bias trick
2.2.3 Implementing the prediction function
2.3 The cost function
2.3.1 Expressing the cost function with linear algebra
2.4 The optimization algorithm
2.4.1 The multiple minima
2.4.2 The closed-form solution of linear regression
2.4.3 The gradient descent algorithm
2.4.4 Different types of gradient descent
2.4.5 The stochastic gradient descent algorithm
2.4.6 The impact of the learning rate
2.5 Improving the predictive performance
2.5.1 Augmented representation via feature engineering
2.5.2 Quadratic basis function
2.6 Empirical risk minimization
2.6.1 More on the model
2.6.2 Bias and variance decomposition
2.6.3 Understanding bias and variance using bootstrap
2.6.4 Reduced generalization with high model complexity
2.7 Summary
Chapter 3: Generalization: a modern view
3.1 A modern view on generalization
3.1.1 Beyond perfect interpolation
3.1.2 Behind the "double descent" phenomenon
3.1.3 Extending the "double descent" phenomenon
3.2 Double descent in polynomial regression
3.2.1 Smoothing spline
3.2.2 Rewriting the smoothing spline cost function
3.2.3 Deriving the closed-form solution
3.2.4 Implementing the smoothing spline model
3.2.5 Sample non-monotonicity
3.3 Summary
Chapter 4: Fundamentals of training deep neural networks
4.1 Multilayer perceptron
4.1.1 A two-layer neural network
4.1.2 Shallow versus deep neural network
4.2 Automatic differentiation
4.2.1 Gradient-based optimization
4.2.2 The chain rule with partial derivatives
4.2.3 Different modes of multiplication
4.3 Training a simple CNN using MNIST
4.3.1 Downloading and loading MNIST
4.3.2 Defining the prediction function
4.3.3 Defining the cost function
4.3.4 Defining the optimization procedure
4.3.5 Updating the weights via iterative training
4.4 More on generalization
4.4.1 Multiple global minima
4.4.2 Best versus worst global minimum
4.5 Summary
Chapter 5: Regularization via data
5.1 Data-based methods
5.1.1 Data augmentation
5.1.2 Label smoothing
5.2 Training deep neural networks using data augmentation
5.2.1 Training without data augmentation
LeNet
5.2.2 Training with data augmentation
5.3 The deep bootstrap framework
5.3.1 Insufficiency of classical generalization framework
5.3.2 Online optimization
5.3.3 Connecting online optimization with offline generalization
5.3.4 Constructing the ideal world with CIFAR-5m
5.3.5 Model training in the ideal world
5.3.6 Model testing
5.3.7 Bootstrap error between real world and ideal world
5.3.8 Implicit bias in convolutional neural networks
5.4 Summary
Make your deep learning models more generalizable and adaptable! These practical regularization techniques improve training efficiency and help avoid overfitting errors. Regularization in Deep Learning teaches you how to improve your model performance with a toolbox of regularization techniques.