Field-tested tips, tricks, and design patterns for building machine learning projects that are deployable, maintainable, and secure from concept to production. In Machine Learning Engineering in Action, you will learn: • Evaluating data science problems to find the most effective solution …
Machine Learning Engineering in Action
✍ Written by Ben Wilson
- Publisher: Manning Publications
- Year: 2022
- Language: English
- Pages: 578
- Edition: 1
- Category: Library
✦ Synopsis
Machine Learning Engineering in Action lays out an approach to building deployable, maintainable, production machine learning systems. You will adopt software development standards that deliver better code management and make it easier to test, scale, and even reuse your machine learning code.
You will learn how to plan and scope your project, manage cross-team logistics to avoid fatal communication failures, and design your code's architecture for improved resilience. You will even discover when not to use machine learning, and the alternative approaches that might be cheaper and more effective. When you're done working through this toolbox guide, you will be able to reliably deliver cost-effective solutions for organizations big and small alike.
Following established processes and methodology maximizes the likelihood that your machine learning projects will survive and succeed for the long haul. By adopting standard, reproducible practices, your projects will be maintainable over time and easy for new team members to understand and adapt.
✦ Table of Contents
Machine Learning Engineering in Action
brief contents
contents
preface
acknowledgments
about this book
Who should read this book
How this book is organized: A road map
About the code
liveBook discussion forum
about the author
about the cover illustration
Part 1—An introduction to machine learning engineering
1 What is a machine learning engineer?
1.1 Why ML engineering?
1.2 The core tenets of ML engineering
1.2.1 Planning
1.2.2 Scoping and research
1.2.3 Experimentation
1.2.4 Development
1.2.5 Deployment
1.2.6 Evaluation
1.3 The goals of ML engineering
Summary
2 Your data science could use some engineering
2.1 Augmenting a complex profession with processes to increase project success
2.2 A foundation of simplicity
2.3 Co-opting principles of Agile software engineering
2.3.1 Communication and cooperation
2.3.2 Embracing and expecting change
2.4 The foundation of ML engineering
Summary
3 Before you model: Planning and scoping a project
3.1 Planning: You want me to predict what?!
3.1.1 Basic planning for a project
3.1.2 That first meeting
3.1.3 Plan for demos—lots of demos
3.1.4 Experimentation by solution building: Wasting time for pride’s sake
3.2 Experimental scoping: Setting expectations and boundaries
3.2.1 What is experimental scoping?
3.2.2 Experimental scoping for the ML team: Research
3.2.3 Experimental scoping for the ML team: Experimentation
Summary
4 Before you model: Communication and logistics of projects
4.1 Communication: Defining the problem
4.1.1 Understanding the problem
4.1.2 Setting critical discussion boundaries
4.2 Don’t waste our time: Meeting with cross-functional teams
4.2.1 Experimental update meeting: Do we know what we’re doing here?
4.2.2 SME review/prototype review: Can we solve this?
4.2.3 Development progress review(s): Is this thing going to work?
4.2.4 MVP review: Did you build what we asked for?
4.2.5 Preproduction review: We really hope we didn’t screw this up
4.3 Setting limits on your experimentation
4.3.1 Set a time limit
4.3.2 Can you put this into production? Would you want to maintain it?
4.3.3 TDD vs. RDD vs. PDD vs. CDD for ML projects
4.4 Planning for business rules chaos
4.4.1 Embracing chaos by planning for it
4.4.2 Human-in-the-loop design
4.4.3 What’s your backup plan?
4.5 Talking about results
Summary
5 Experimentation in action: Planning and researching an ML project
5.1 Planning experiments
5.1.1 Perform basic research and planning
5.1.2 Forget the blogs—read the API docs
5.1.3 Draw straws for an internal hackathon
5.1.4 Level the playing field
5.2 Performing experimental prep work
5.2.1 Performing data analysis
5.2.2 Moving from script to reusable code
5.2.3 One last note on building reusable code for experimentation
Summary
6 Experimentation in action: Testing and evaluating a project
6.1 Testing ideas
6.1.1 Setting guidelines in code
6.1.2 Running quick forecasting tests
6.2 Whittling down the possibilities
6.2.1 Evaluating prototypes properly
6.2.2 Making a call on the direction to go in
6.2.3 So . . . what’s next?
Summary
7 Experimentation in action: Moving from prototype to MVP
7.1 Tuning: Automating the annoying stuff
7.1.1 Tuning options
7.1.2 Hyperopt primer
7.1.3 Using Hyperopt to tune a complex forecasting problem
7.2 Choosing the right tech for the platform and the team
7.2.1 Why Spark?
7.2.2 Handling tuning from the driver with SparkTrials
7.2.3 Handling tuning from the workers with a pandas_udf
7.2.4 Using new paradigms for teams: Platforms and technologies
Summary
8 Experimentation in action: Finalizing an MVP with MLflow and runtime optimization
8.1 Logging: Code, metrics, and results
8.1.1 MLflow tracking
8.1.2 Please stop printing and log your information
8.1.3 Version control, branch strategies, and working with others
8.2 Scalability and concurrency
8.2.1 What is concurrency?
8.2.2 What you can (and can’t) run asynchronously
Summary
Part 2—Preparing for production: Creating maintainable ML
9 Modularity for ML: Writing testable and legible code
9.1 Understanding monolithic scripts and why they are bad
9.1.1 How monoliths come into being
9.1.2 Walls of text
9.1.3 Considerations for monolithic scripts
9.2 Debugging walls of text
9.3 Designing modular ML code
9.4 Using test-driven development for ML
Summary
10 Standards of coding and creating maintainable ML code
10.1 ML code smells
10.2 Naming, structure, and code architecture
10.2.1 Naming conventions and structure
10.2.2 Trying to be too clever
10.2.3 Code architecture
10.3 Tuple unpacking and maintainable alternatives
10.3.1 Tuple unpacking example
10.3.2 A solid alternative to tuple unpacking
10.4 Blind to issues: Eating exceptions and other bad practices
10.4.1 Try/catch with the precision of a shotgun
10.4.2 Exception handling with laser precision
10.4.3 Handling errors the right way
10.5 Use of global mutable objects
10.5.1 How mutability can burn you
10.5.2 Encapsulation to prevent mutable side effects
10.6 Excessively nested logic
Summary
11 Model measurement and why it’s so important
11.1 Measuring model attribution
11.1.1 Measuring prediction performance
11.1.2 Clarifying correlation vs. causation
11.2 Leveraging A/B testing for attribution calculations
11.2.1 A/B testing 101
11.2.2 Evaluating continuous metrics
11.2.3 Using alternative displays and tests
11.2.4 Evaluating categorical metrics
Summary
12 Holding on to your gains by watching for drift
12.1 Detecting drift
12.1.1 What influences drift?
12.2 Responding to drift
12.2.1 What can we do about it?
12.2.2 Responding to drift
Summary
13 ML development hubris
13.1 Elegant complexity vs. overengineering
13.1.1 Lightweight scripted style (imperative)
13.1.2 An overengineered mess
13.2 Unintentional obfuscation: Could you read this if you didn’t write it?
13.2.1 The flavors of obfuscation
13.2.2 Troublesome coding habits recap
13.3 Premature generalization, premature optimization, and other bad ways to show how smart you are
13.3.1 Generalization and frameworks: Avoid them until you can’t
13.3.2 Optimizing too early
13.4 Do you really want to be the canary? Alpha testing and the dangers of the open source coal mine
13.5 Technology-driven development vs. solution-driven development
Summary
Part 3—Developing production machine learning code
14 Writing production code
14.1 Have you met your data?
14.1.1 Make sure you have the data
14.1.2 Check your data provenance
14.1.3 Find a source of truth and align on it
14.1.4 Don’t embed data cleansing into your production code
14.2 Monitoring your features
14.3 Monitoring everything else in the model life cycle
14.4 Keeping things as simple as possible
14.4.1 Simplicity in problem definitions
14.4.2 Simplicity in implementation
14.5 Wireframing ML projects
14.6 Avoiding cargo cult ML behavior
Summary
15 Quality and acceptance testing
15.1 Data consistency
15.1.1 Training and inference skew
15.1.2 A brief intro to feature stores
15.1.3 Process over technology
15.1.4 The dangers of a data silo
15.2 Fallbacks and cold starts
15.2.1 Leaning heavily on prior art
15.2.2 Cold-start woes
15.3 End user vs. internal use testing
15.3.1 Biased testing
15.3.2 Dogfooding
15.3.3 SME evaluation
15.4 Model interpretability
15.4.1 Shapley additive explanations
15.4.2 Using shap
Summary
16 Production infrastructure
16.1 Artifact management
16.1.1 MLflow’s model registry
16.1.2 Interfacing with the model registry
16.2 Feature stores
16.2.1 What a feature store is used for
16.2.2 Using a feature store
16.2.3 Evaluating a feature store
16.3 Prediction serving architecture
16.3.1 Determining serving needs
16.3.2 Bulk external delivery
16.3.3 Microbatch streaming
16.3.4 Real-time server-side
16.3.5 Integrated models (edge deployment)
Summary
Appendix A—Big O(no) and how to think about runtime performance
A.1 What is Big O, anyway?
A.1.1 A gentle introduction to complexity
A.2 Complexity by example
A.2.1 O(1): The “It doesn’t matter how big the data is” algorithm
A.2.2 O(n): The linear relationship algorithm
A.2.3 O(n²): A polynomial relationship to the size of the collection
A.3 Analyzing decision-tree complexity
A.4 General algorithmic complexity for ML
Appendix B—Setting up a development environment
B.1 The case for a clean experimentation environment
B.2 Containers to deal with dependency hell
B.3 Creating a container-based pristine environment for experimentation
index