Production-Ready Applied Deep Learning: Learn how to construct and deploy complex models in PyTorch and TensorFlow deep-learning frameworks
By Tomasz Palczewski, Jaejun Lee, Lenin Mookiah
- Publisher: Packt Publishing
- Year: 2022
- Language: English
- Pages: 322
- Category: Library
Free to access; no payment or registration required. For personal study only.
Synopsis
Supercharge your skills for tailoring deep-learning models and deploying them in production environments with ease and precision.
Key Features
- Learn how to convert a deep learning model developed in a notebook environment into a production-ready application that supports various deployment environments.
- Learn how to convert models between PyTorch and TensorFlow.
- Achieve satisfactory model performance in deployment environments where computational power is often limited.
Book Description
Machine learning engineers, deep learning specialists, and data engineers without extensive experience encounter various problems when moving their models to a production environment.
Developers will be able to transform models into a desired format and deploy them with a full understanding of the trade-offs and possible alternative approaches. The book provides concrete, ready-to-use implementations and associated methodologies, allowing readers to apply its lessons right away without much difficulty.
In this book, you will learn how to construct complex models in PyTorch and TensorFlow deep-learning frameworks. You will acquire knowledge to transform your models from one framework to the other and learn how to tailor them for specific requirements that the deployment setting introduces. By the end of this book, you will fully understand how to convert a PoC-like deep learning model into a ready-to-use version that is suitable for the target production environment.
Readers will gain hands-on experience with commonly used deep learning frameworks and popular web services designed for data analytics at scale, and will get to grips with the authors' collective know-how from deploying hundreds of AI-based services at large scale.
What you will learn
- Learn how top-tier technology companies carry out deep learning projects.
- Cover the full project lifecycle: data preparation, model development and deployment, and monitoring and maintenance.
- Convert a proof-of-concept deep learning model into a production-ready application.
- Work with various deep learning libraries, including PyTorch and PyTorch Lightning, TensorFlow with and without Keras, and TensorFlow with JAX.
- Apply techniques such as network pruning, quantization, knowledge distillation, and network architecture search.
- Propose the right system architecture for deploying various AI applications at large scale.
- Set up a deep learning pipeline in an efficient and effective way using various AWS services.
Who This Book Is For
Machine learning engineers, deep learning specialists, and data scientists will find that this book closes the gap between theory and application with detailed examples. Readers with beginner-level knowledge of machine learning or software engineering will find the contents easy to follow.
Table of Contents
- Effective Planning of Deep Learning-Driven Projects
- Data Preparation for Deep Learning Projects
- Developing a Powerful Deep Learning Model
- Experiment Tracking, Model Management, and Dataset Versioning
- Data Preparation in the Cloud
- Efficient Model Training
- Revealing the Secret of Deep Learning Models
- Simplifying Deep Learning Model Deployment
- Scaling a Deep Learning Pipeline
- Improving Inference Efficiency
- Deep Learning on Mobile Devices
- Monitoring Deep Learning Endpoints in Production
- Reviewing the Completed Deep Learning Project
Detailed Table of Contents
Cover
Title Page
Copyright and credits
Contributors
Table of Contents
Preface
Part 1 – Building a Minimum Viable Product
Chapter 1: Effective Planning of Deep Learning-Driven Projects
Technical requirements
What is DL?
Understanding the role of DL in our daily lives
Overview of DL projects
Project planning
Building minimum viable products
Building fully featured products
Deployment and maintenance
Project evaluation
Planning a DL project
Defining goal and evaluation metrics
Stakeholder identification
Task organization
Resource allocation
Defining a timeline
Managing a project
Summary
Further reading
Chapter 2: Data Preparation for Deep Learning Projects
Technical requirements
Setting up notebook environments
Setting up a Python environment
Installing Anaconda
Setting up a DL project using Anaconda
Data collection, data cleaning, and data preprocessing
Collecting data
Cleaning data
Data preprocessing
Extracting features from data
Converting text using bag-of-words
Applying term frequency-inverse document frequency (TF-IDF) transformation
Creating one-hot encoding (one-of-k)
Creating ordinal encoding
Converting a colored image into a grayscale image
Performing dimensionality reduction
Applying fuzzy matching to handle similarity between strings
Performing data visualization
Performing basic visualizations using Matplotlib
Drawing statistical graphs using Seaborn
Introduction to Docker
Introduction to dockerfiles
Building a custom Docker image
Summary
Chapter 3: Developing a Powerful Deep Learning Model
Technical requirements
Going through the basic theory of DL
How does DL work?
DL model training
Components of DL frameworks
The data loading logic
The model definition
Model training logic
Implementing and training a model in PyTorch
PyTorch data loading logic
PyTorch model definition
PyTorch model training
Implementing and training a model in TF
TF data loading logic
TF model definition
TF model training
An understanding of a complex, state-of-the-art model
StyleGAN
Implementation in PyTorch
Implementation in TF
Summary
Chapter 4: Experiment Tracking, Model Management, and Dataset Versioning
Technical requirements
Overview of DL project tracking
Components of DL project tracking
Tools for DL project tracking
DL project tracking with Weights & Biases
Setting up W&B
DL project tracking with MLflow and DVC
Setting up MLflow
Setting up MLflow with DVC
Dataset versioning – beyond Weights & Biases, MLflow, and DVC
Summary
Part 2 – Building a Fully Featured Product
Chapter 5: Data Preparation in the Cloud
Technical requirements
Data processing in the cloud
Introduction to ETL
Data processing system architecture
Introduction to Apache Spark
Resilient distributed datasets and DataFrames
Loading data
Processing data using Spark operations
Processing data using user-defined functions
Exporting data
Setting up a single-node EC2 instance for ETL
Setting up an EMR cluster for ETL
Creating a Glue job for ETL
Creating a Glue Data Catalog
Setting up a Glue context
Reading data
Defining the data processing logic
Writing data
Utilizing SageMaker for ETL
Creating a SageMaker notebook
Running a Spark job through a SageMaker notebook
Running a job from a custom container through a SageMaker notebook
Comparing the ETL solutions in AWS
Summary
Chapter 6: Efficient Model Training
Technical requirements
Training a model on a single machine
Utilizing multiple devices for training in TensorFlow
Utilizing multiple devices for training in PyTorch
Training a model on a cluster
Model parallelism
Data parallelism
Training a model using SageMaker
Setting up model training for SageMaker
Training a TensorFlow model using SageMaker
Training a PyTorch model using SageMaker
Training a model in a distributed fashion using SageMaker
SageMaker with Horovod
Training a model using Horovod
Setting up a Horovod cluster
Configuring a TensorFlow training script for Horovod
Configuring a PyTorch training script for Horovod
Training a DL model on a Horovod cluster
Training a model using Ray
Setting up a Ray cluster
Training a model in a distributed fashion using Ray
Training a model using Kubeflow
Introducing Kubernetes
Setting up model training for Kubeflow
Training a TensorFlow model in a distributed fashion using Kubeflow
Training a PyTorch model in a distributed fashion using Kubeflow
Summary
Chapter 7: Revealing the Secret of Deep Learning Models
Technical requirements
Obtaining the best performing model using hyperparameter tuning
Hyperparameter tuning techniques
Hyperparameter tuning tools
Understanding the behavior of the model with Explainable AI
Permutation Feature Importance
Feature Importance
SHapley Additive exPlanations (SHAP)
Local Interpretable Model-agnostic Explanations (LIME)
Summary
Part 3 – Deployment and Maintenance
Chapter 8: Simplifying Deep Learning Model Deployment
Technical requirements
Introduction to ONNX
Running inference using ONNX Runtime
Conversion between TensorFlow and ONNX
Converting a TensorFlow model into an ONNX model
Converting an ONNX model into a TensorFlow model
Conversion between PyTorch and ONNX
Converting a PyTorch model into an ONNX model
Converting an ONNX model into a PyTorch model
Summary
Chapter 9: Scaling a Deep Learning Pipeline
Technical requirements
Inferencing using Elastic Kubernetes Service
Preparing an EKS cluster
Configuring EKS
Creating an inference endpoint using the TensorFlow model on EKS
Creating an inference endpoint using a PyTorch model on EKS
Communicating with an endpoint on EKS
Improving EKS endpoint performance using Amazon Elastic Inference
Resizing EKS cluster dynamically using autoscaling
Inferencing using SageMaker
Setting up an inference endpoint using the Model class
Setting up a TensorFlow inference endpoint
Setting up a PyTorch inference endpoint
Setting up an inference endpoint from an ONNX model
Handling prediction requests in batches using Batch Transform
Improving SageMaker endpoint performance using AWS SageMaker Neo
Improving SageMaker endpoint performance using Amazon Elastic Inference
Resizing SageMaker endpoints dynamically using autoscaling
Hosting multiple models on a single SageMaker inference endpoint
Summary
Chapter 10: Improving Inference Efficiency
Technical requirements
Network quantization – reducing the number of bits used for model parameters
Performing post-training quantization
Performing quantization-aware training
Weight sharing – reducing the number of distinct weight values
Performing weight sharing in TensorFlow
Performing weight sharing in PyTorch
Network pruning – eliminating unnecessary connections within the network
Network pruning in TensorFlow
Network pruning in PyTorch
Knowledge distillation – obtaining a smaller network by mimicking the prediction
Network Architecture Search – finding the most efficient network architecture
Summary
Chapter 11: Deep Learning on Mobile Devices
Preparing DL models for mobile devices
Generating a TF Lite model
Generating a TorchScript model
Creating iOS apps with a DL model
Running TF Lite model inference on iOS
Running TorchScript model inference on iOS
Creating Android apps with a DL model
Running TF Lite model inference on Android
Running TorchScript model inference on Android
Summary
Chapter 12: Monitoring Deep Learning Endpoints in Production
Technical requirements
Introduction to DL endpoint monitoring in production
Exploring tools for monitoring
Exploring tools for alerting
Monitoring using CloudWatch
Monitoring a SageMaker endpoint using CloudWatch
Monitoring a model throughout the training process in SageMaker
Monitoring a live inference endpoint from SageMaker
Monitoring an EKS endpoint using CloudWatch
Summary
Chapter 13: Reviewing the Completed Deep Learning Project
Reviewing a DL project
Conducting a post-implementation review
Understanding the true value of the project
Gathering the reusable knowledge, concepts, and artifacts for future projects
Summary
Index
Other Books You May Enjoy