Applied Deep Learning with TensorFlow 2: Learn to Implement Advanced Deep Learning Techniques with Python
By U. Michelucci
- Year: 2022
- Language: English
- Pages: 397
- Edition: 2
- Category: Library
No payment or registration required. For personal study only.
Table of Contents
About the Author
About the Contributing Author
About the Technical Reviewer
Acknowledgments
Foreword
Introduction
Chapter 1: Optimization and Neural Networks
A Basic Understanding of Neural Networks
The Problem of Learning
A First Definition of Learning
[Advanced Section] Assumption in the Formulation
A Definition of Learning for Neural Networks
Constrained vs. Unconstrained Optimization
[Advanced Section] Reducing a Constrained Problem to an Unconstrained Optimization Problem
Absolute and Local Minima of a Function
Optimization Algorithms
Line Search and Trust Region
Steepest Descent
The Gradient Descent Algorithm
Choosing the Right Learning Rate
Variations of GD
Mini-Batch GD
Stochastic GD
How to Choose the Right Mini-Batch Size
[Advanced Section] SGD and Fractals
Exercises
Conclusion
Chapter 2: Hands-on with a Single Neuron
A Short Overview of a Neuron's Structure
A Short Introduction to Matrix Notation
An Overview of the Most Common Activation Functions
Identity Function
Sigmoid Function
Tanh (Hyperbolic Tangent) Activation Function
ReLU (Rectified Linear Unit) Activation Function
Leaky ReLU
The Swish Activation Function
Other Activation Functions
How to Implement a Neuron in Keras
Python Implementation Tips: Loops and NumPy
Linear Regression with a Single Neuron
The Dataset for the Real-World Example
Dataset Splitting
Linear Regression Model
Keras Implementation
The Model's Learning Phase
Model's Performance Evaluation on Unseen Data
Logistic Regression with a Single Neuron
The Dataset for the Classification Problem
Dataset Splitting
The Logistic Regression Model
Keras Implementation
The Model's Learning Phase
The Model's Performance Evaluation
Conclusion
Exercises
References
Chapter 3: Feed-Forward Neural Networks
A Short Review of the Network's Architecture and Matrix Notation
Output of Neurons
A Short Summary of Matrix Dimensions
Example: Equations for a Network with Three Layers
Hyper-Parameters in Fully Connected Networks
A Short Review of the Softmax Activation Function for Multiclass Classifications
A Brief Digression: Overfitting
A Practical Example of Overfitting
Basic Error Analysis
Implementing a Feed-Forward Neural Network in Keras
Multiclass Classification with Feed-Forward Neural Networks
The Zalando Dataset for the Real-World Example
Modifying Labels for the Softmax Function: One-Hot Encoding
The Feed-Forward Network Model
Keras Implementation
Gradient Descent Variations Performances
Comparing the Variations
Examples of Wrong Predictions
Weight Initialization
Adding Many Layers Efficiently
Advantages of Additional Hidden Layers
Comparing Different Networks
Tips for Choosing the Right Network
Estimating the Memory Requirements of Models
General Formula for the Memory Footprint
Exercises
References
Chapter 4: Regularization
Complex Networks and Overfitting
What Is Regularization
About Network Complexity
ℓp Norm
ℓ2 Regularization
Theory of ℓ2 Regularization
Keras Implementation
ℓ1 Regularization
Theory of ℓ1 Regularization and Keras Implementation
Are the Weights Really Going to Zero?
Dropout
Early Stopping
Additional Methods
Exercises
References
Chapter 5: Advanced Optimizers
Available Optimizers in Keras in TensorFlow 2.5
Advanced Optimizers
Exponentially Weighted Averages
Momentum
RMSProp
Adam
Comparison of the Optimizers' Performance
Small Coding Digression
Which Optimizer Should You Use?
Chapter 6: Hyper-Parameter Tuning
Black-Box Optimization
Notes on Black-Box Functions
The Problem of Hyper-Parameter Tuning
Sample Black-Box Problem
Grid Search
Random Search
Coarse to Fine Optimization
Bayesian Optimization
Nadaraya-Watson Regression
Gaussian Process
Stationary Process
Prediction with Gaussian Processes
Acquisition Function
Upper Confidence Bound (UCB)
Example
Sampling on a Logarithmic Scale
Hyper-Parameter Tuning with the Zalando Dataset
A Quick Note about the Radial Basis Function
Exercises
References
Chapter 7: Convolutional Neural Networks
Kernels and Filters
Convolution
Examples of Convolution
Pooling
Padding
Building Blocks of a CNN
Convolutional Layers
Pooling Layers
Stacking Layers Together
An Example of a CNN
Conclusion
Exercises
References
Chapter 8: A Brief Introduction to Recurrent Neural Networks
Introduction to RNNs
Notation
The Basic Idea of RNNs
Why the Name Recurrent
Learning to Count
Conclusion
Further Readings
Chapter 9: Autoencoders
Introduction
Regularization in Autoencoders
Feed-Forward Autoencoders
Activation Function of the Output Layer
ReLU
Sigmoid
The Loss Function
Mean Square Error
Binary Cross-Entropy
The Reconstruction Error
Example: Reconstructing Handwritten Digits
Autoencoder Applications
Dimensionality Reduction
Equivalence with PCA
Classification
Classification with Latent Features
The Curse of Dimensionality: A Small Detour
Anomaly Detection
Model Stability: A Short Note
Denoising Autoencoders
Beyond FFA: Autoencoders with Convolutional Layers
Implementation in Keras
Exercises
Further Readings
Chapter 10: Metric Analysis
Human-Level Performance and Bayes Error
A Short Story About Human-Level Performance
Human-Level Performance on MNIST
Bias
Metric Analysis Diagram
Training Set Overfitting
Test Set
How to Split Your Dataset
Unbalanced Class Distribution: What Can Happen
Datasets with Different Distributions
k-fold Cross Validation
Manual Metric Analysis: An Example
Exercises
References
Chapter 11: Generative Adversarial Networks (GANs)
Introduction to GANs
Training Algorithm for GANs
A Practical Example with Keras and MNIST
A Note on Training
Conditional GANs
Conclusion
Appendix A: Introduction to Keras
Some History
Understanding the Sequential Model
Understanding Keras Layers
Setting the Activation Function
Using Functional APIs
Specifying Loss Functions and Metrics
Putting It All Together and Training
Modeling evaluate() and predict()
Using Callback Functions
Saving and Loading Models
Saving Your Weights Manually
Saving the Entire Model
Conclusion
Appendix B: Customizing Keras
Customizing Callback Classes
Example of a Custom Callback Class
Custom Training Loops
Calculating Gradients
Custom Training Loop for a Neural Network
Index