Bootstrapping Microservices: With Docker, Kubernetes, GitHub Actions, and Terraform
✍ By Ashley Davis
- Publisher
- Manning Publications Co.
- Year
- 2024
- Language
- English
- Pages
- 463
- Category
- Library
✦ Synopsis
Build a microservices application from scratch, layer by layer. This book teaches the tools and techniques you need.
In Bootstrapping Microservices, Second Edition you’ll get hands-on experience with microservices development skills like:
- Creating, configuring, and running a microservice with Node.js
- Building and publishing a microservice using Docker
- Applying automated testing
- Running a microservices application in development with Docker Compose
- Deploying microservices to a production Kubernetes cluster
- Implementing infrastructure as code and setting up a continuous delivery pipeline
- Monitoring, managing, and troubleshooting
Bootstrapping Microservices with Docker, Kubernetes, and Terraform has helped thousands of developers create their first microservices applications. This fully revised second edition introduces the industry-standard tools and practical skills you’ll use for every microservices application. Author Ashley Davis’s friendly advice and guidance help you make pragmatic choices that will cut down the learning curve for Docker, Terraform, and Kubernetes.
About the technology
Taking a microservices application from proof of concept to production is a multi-step operation that relies on tools like Docker, Terraform, and Kubernetes. The best way to learn the whole process is to build a project from the ground up. That’s exactly what you’ll do in this book!
About the book
Bootstrapping Microservices, Second Edition is a guide to microservices and cloud-native distributed applications. It demystifies technical choices and gives you a clear, comprehensive approach to building microservices. In it, you’ll learn how to configure cloud infrastructure with Terraform, package microservices using Docker, and deploy your finished project to a Kubernetes cluster.
As you go, you’ll build your own video streaming service to see how everything fits together in a complete application. Plus, this fully revised new edition contains updated coverage of continuous delivery with GitHub Actions. It also includes expanded coverage of Kubernetes, including an easy guide to Kubernetes deployment along with guidance for implementing infrastructure as code.
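Packaging a microservice with Docker, as described above, centers on a Dockerfile. The sketch below is a hypothetical example in the spirit of the book's Docker chapters; the base image, file paths, and port are illustrative assumptions, not the book's own code.

```dockerfile
# Hypothetical Dockerfile for a Node.js microservice.
FROM node:18-alpine

# Work inside the image's application directory.
WORKDIR /usr/src/app

# Install production dependencies first so Docker can cache this layer.
COPY package*.json ./
RUN npm ci --omit=dev

# Copy the application source.
COPY ./src ./src

# The port the microservice listens on (assumed to be 3000 here).
EXPOSE 3000

# Start the microservice.
CMD ["node", "src/index.js"]
```

An image built from a file like this (`docker build -t my-microservice .`) is what gets pushed to a container registry and ultimately deployed to Kubernetes.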
About the reader
Examples are in JavaScript. No experience with microservices, Kubernetes, Terraform, or Docker required.
About the author
Ashley Davis is a software craftsman, entrepreneur, and author with over 25 years of experience in software development—from coding, to managing teams, to founding companies. He has worked for a range of companies, from the tiniest startups to the largest internationals. Along the way, he has contributed back to the community through his writing and open source coding. He is currently VP of Engineering at Hone, building products on the Algorand blockchain. He is also the creator of Data-Forge Notebook, a desktop application for exploratory coding and data visualization using JavaScript and TypeScript.
✦ Table of Contents
Bootstrapping Microservices, Second Edition
Praise for the first edition
brief contents
contents
preface
acknowledgments
about this book
Who should read this book?
How this book is organized: A road map
Changes since the first edition
About the code
liveBook discussion forum
Staying up to date
about the author
about the cover illustration
Chapter 1: Why microservices?
1.1 This book is practical
1.2 What will you learn?
1.3 What do you need to know?
1.4 Managing complexity
1.5 What is a microservice?
1.6 What is a microservices application?
1.7 What’s wrong with the monolith?
1.8 Why are microservices popular now?
1.9 Benefits of microservices
1.10 Drawbacks of microservices
1.10.1 Higher-level technical skills
1.10.2 Building distributed applications is hard
1.10.3 Microservices have scalable difficulty
1.10.4 People often fear complexity
1.10.5 Bringing the pain forward
1.11 Modern tooling for microservices
1.12 Not just microservices
1.13 The spectrum of possibilities
1.14 Designing a microservices application
1.14.1 Software design
1.14.2 Design principles
1.14.3 Domain-driven design
1.14.4 Don’t repeat yourself
1.14.5 How much to put in each microservice
1.14.6 Learning more about design
1.15 An example application
Chapter 2: Creating your first microservice
2.1 New tools
2.2 Getting the code
2.3 Why Node.js?
2.4 Our philosophy of development
2.5 Establishing our single-service development environment
2.5.1 Installing Git
2.5.2 Cloning the code repository
2.5.3 Getting VS Code
2.5.4 Installing Node.js
2.6 Building an HTTP server for video streaming
2.6.1 Creating a Node.js project
2.6.2 Installing Express
2.6.3 Creating the Express boilerplate
2.6.4 Running our simple web server
2.6.5 Adding streaming video
2.6.6 Configuring our microservice
2.6.7 Setting up for production
2.6.8 Live reloading for fast iteration
2.6.9 Running the finished code from this chapter
2.7 Node.js review
2.8 Continue your learning
Chapter 3: Publishing your first microservice
3.1 New tool: Docker
3.2 Getting the code
3.3 What is a container?
3.4 What is an image?
3.5 Why Docker?
3.6 Why do we need Docker?
3.7 Adding Docker to our development environment
3.7.1 Installing Docker
3.7.2 Checking your Docker installation
3.8 Packaging our microservice
3.8.1 Creating a Dockerfile
3.8.2 Packaging and checking our Docker image
3.8.3 Booting our microservice in a container
3.8.4 Debugging the container
3.8.5 Stopping the container
3.9 Publishing our microservice
3.9.1 Creating a private container registry
3.9.2 Pushing our microservice to the registry
3.9.3 Booting our microservice from the registry
3.9.4 Deleting your container registry
3.10 Docker review
3.11 Continue your learning
Chapter 4: Data management for microservices
4.1 New tools
4.2 Getting the code
4.3 Developing microservices with Docker Compose
4.3.1 Why Docker Compose?
4.3.2 Creating our Docker Compose file
4.3.3 Booting our microservices application
4.3.4 Working with the application
4.3.5 Shutting down the application
4.3.6 Why Docker Compose for development, but not production?
4.4 Adding file storage to our application
4.4.1 Using Azure Storage
4.4.2 Updating the video-streaming microservice
4.4.3 Adding our new microservice to the Docker Compose file
4.4.4 Testing the updated application
4.4.5 Cloud storage vs. cluster storage
4.4.6 What did we achieve?
4.5 Adding a database to our application
4.5.1 Why MongoDB?
4.5.2 Adding a database server in development
4.5.3 Adding a database server in production
4.5.4 Database-per-microservice or database-per-application?
4.5.5 What did we achieve?
4.6 Docker Compose review
4.7 Continue your learning
Chapter 5: Communication between microservices
5.1 New and familiar tools
5.2 Getting the code
5.3 Getting our microservices talking
5.4 Introducing the history microservice
5.5 Live reload for fast iterations
5.5.1 Creating a stub for the history microservice
5.5.2 Augmenting the microservice for live reload
5.5.3 Splitting our Dockerfile for development and production
5.5.4 Updating the Docker Compose file for live reload
5.5.5 Trying out live reload
5.5.6 Testing production mode in development
5.5.7 What have we achieved?
5.6 Methods of communication for microservices
5.6.1 Direct messaging
5.6.2 Indirect messaging
5.7 Direct messaging with HTTP
5.7.1 Why HTTP?
5.7.2 Directly targeting messages at particular microservices
5.7.3 Sending a message with HTTP POST
5.7.4 Receiving a message with HTTP POST
5.7.5 Testing the updated application
5.7.6 Orchestrating behavior with direct messages
5.7.7 What have we achieved?
5.8 Indirect messaging with RabbitMQ
5.8.1 Why RabbitMQ?
5.8.2 Indirectly targeting messages to microservices
5.8.3 Creating a RabbitMQ server
5.8.4 Investigating the RabbitMQ dashboard
5.8.5 Connecting our microservice to the message queue
5.8.6 Single-recipient indirect messaging
5.8.7 Multiple-recipient messages
5.8.8 Emergent behavior with indirect messages
5.8.9 What have we achieved?
5.9 Microservices communication review
5.10 Continue your learning
Chapter 6: The road to production
6.1 New tools
6.2 Getting the code
6.3 Going to production
6.4 Hosting microservices on Kubernetes
6.4.1 Why Kubernetes?
6.4.2 Pods, nodes, and containers
6.4.3 Pods, deployments, and services
6.5 Enabling your local Kubernetes instance
6.6 Installing the Kubernetes CLI
6.7 Project structure
6.8 Deploying to the local Kubernetes instance
6.8.1 Building the image for the microservice
6.8.2 No container registry needed (yet)
6.8.3 Creating configuration for deployment to a local Kubernetes instance
6.8.4 Connecting kubectl to local Kubernetes
6.8.5 Deploying a microservice to local Kubernetes
6.8.6 Testing the locally deployed microservice
6.8.7 Deleting the deployment
6.8.8 Why not use local Kubernetes for development?
6.8.9 What have we achieved?
6.9 Creating a managed Kubernetes cluster in Azure
6.10 Working with the Azure CLI
6.10.1 Installing the Azure CLI
6.10.2 Authenticating the Azure CLI
6.10.3 Connecting kubectl to Kubernetes
6.11 Deploying to the production cluster
6.11.1 Now we need a container registry
6.11.2 Publishing the image to the container registry
6.11.3 Connecting the container registry to the Kubernetes cluster
6.11.4 Creating a configuration for deployment to Kubernetes
6.11.5 Deploying the microservice to Kubernetes
6.11.6 Testing the deployed microservice
6.11.7 Deleting the deployment
6.11.8 Destroying your infrastructure
6.11.9 What have we achieved?
6.12 Azure CLI tool review
6.13 Kubectl review
6.14 Continue your learning
Chapter 7: Infrastructure as code
7.1 New tool
7.2 Getting the code
7.3 Prototyping our infrastructure
7.4 Infrastructure as code
7.5 Authenticate with your Azure account
7.6 Which version of Kubernetes?
7.7 Creating the infrastructure with Terraform
7.7.1 Why Terraform?
7.7.2 Installing Terraform
7.7.3 Terraform project setup
7.8 Creating an Azure resource group for your application
7.8.1 Evolutionary architecture with Terraform
7.8.2 Scripting infrastructure creation
7.8.3 Fixing provider version numbers
7.8.4 Initializing Terraform
7.8.5 By-products of Terraform initialization
7.8.6 Building your infrastructure
7.8.7 Understanding Terraform state
7.8.8 Destroying and recreating our infrastructure
7.8.9 What have we achieved?
7.9 Creating our container registry
7.9.1 Continuing the evolution of our infrastructure
7.9.2 Creating the container registry
7.9.3 Terraform outputs
7.9.4 Outputting sensitive values from Terraform
7.9.5 Just don’t output sensitive values
7.9.6 Getting the details of your container registry
7.9.7 What have we achieved?
7.10 Refactoring to share configuration data
7.10.1 Continuing the evolution of our infrastructure
7.10.2 Introducing Terraform variables
7.11 Creating our Kubernetes cluster
7.11.1 Scripting creation of your cluster
7.11.2 Attaching the registry to the cluster
7.11.3 Building our cluster
7.11.4 What have we achieved?
7.12 Deploying to our cluster
7.13 Destroying our infrastructure
7.14 Terraform review
7.15 Continue your learning
Chapter 8: Continuous deployment
8.1 New tool
8.2 Getting the code
8.3 Running the examples in this chapter
8.4 What is continuous integration?
8.5 What is continuous deployment?
8.6 Why automate deployment?
8.7 An introduction to automation with GitHub Actions
8.7.1 Why GitHub Actions?
8.7.2 What is a workflow?
8.7.3 Creating a new workflow
8.7.4 Example 1 overview
8.7.5 The “Hello World” shell script
8.7.6 The “Hello World” workflow
8.7.7 Invoking commands inline
8.7.8 Triggering a workflow by code change
8.7.9 Workflow history
8.7.10 Triggering a workflow through the UI
8.7.11 What have we achieved?
8.8 Implementing continuous integration
8.8.1 Example 2 overview
8.8.2 A workflow for automated tests
8.8.3 What have we achieved?
8.9 Continuous deployment for a microservice
8.9.1 Example 3 overview
8.9.2 Templating our deployment configuration
8.9.3 Manual deployment precedes automated deployment
8.9.4 A workflow to deploy our microservice
8.9.5 Authenticating kubectl
8.9.6 Installing and configuring kubectl
8.9.7 Environment variables from GitHub secrets
8.9.8 Environment variables from GitHub context variables
8.9.9 Adding GitHub secrets
8.9.10 Debugging your deployment pipeline
8.9.11 Deploying directly to production is dangerous
8.9.12 What have we achieved?
8.10 Continue your learning
Chapter 9: Automated testing for microservices
9.1 New tools
9.2 Getting the code
9.3 Testing for microservices
9.4 Automated testing
9.5 Automated testing with Jest
9.5.1 Why Jest?
9.5.2 Setting up Jest
9.5.3 The math library to test
9.5.4 Our first Jest test
9.5.5 Running our first test
9.5.6 Live reload with Jest
9.5.7 Interpreting test failures
9.5.8 Invoking Jest with npm
9.5.9 Populating our test suite
9.5.10 Mocking with Jest
9.5.11 What have we achieved?
9.6 Unit testing for microservices
9.6.1 The metadata microservice
9.6.2 Creating unit tests with Jest
9.6.3 Running the tests
9.6.4 What have we achieved?
9.7 Integration testing
9.7.1 The code to test
9.7.2 Running a MongoDB database
9.7.3 Loading database fixtures
9.7.4 Creating an integration test with Jest
9.7.5 Running the test
9.7.6 What have we achieved?
9.8 End-to-end testing
9.8.1 Why Playwright?
9.8.2 Installing Playwright
9.8.3 Setting up database fixtures
9.8.4 Booting your application
9.8.5 Creating an end-to-end test with Playwright
9.8.6 Invoking Playwright with npm
9.8.7 What have we achieved?
9.9 Automated testing in the CI/CD pipeline
9.10 Review of testing
9.11 Continue your learning
Chapter 10: Shipping FlixTube
10.1 No new tools!
10.2 Getting the code
10.3 Revisiting essential skills
10.4 Overview of FlixTube
10.4.1 FlixTube microservices
10.4.2 Microservice project structure
10.4.3 The FlixTube monorepo
10.5 Running FlixTube in development
10.5.1 Booting an individual microservice
10.5.2 Booting the entire FlixTube application
10.6 Testing FlixTube in development
10.6.1 Testing a microservice with Jest
10.6.2 Testing the application with Playwright
10.7 FlixTube deep dive
10.7.1 Database fixtures
10.7.2 Mocking the storage microservice
10.7.3 The gateway
10.7.4 The FlixTube UI
10.7.5 Video streaming
10.7.6 Video upload
10.8 Deploying FlixTube to our local Kubernetes
10.8.1 Prerequisites for local deployment
10.8.2 Local deployment
10.8.3 Testing the local deployment
10.8.4 Deleting the local deployment
10.9 Manually deploying FlixTube to production
10.9.1 Prerequisites for production deployment
10.9.2 Production deployment
10.9.3 Testing the production deployment
10.9.4 Destroying the production deployment
10.10 Continuous deployment to production
10.10.1 Prerequisites for continuous deployment
10.10.2 Setting up your own code repository
10.10.3 Deploying infrastructure
10.10.4 One CD pipeline per microservice
10.10.5 Testing the CD pipeline
10.11 FlixTube in the future
10.12 Continue your learning
Chapter 11: Healthy microservices
11.1 Maintaining healthy microservices
11.2 Monitoring and managing microservices
11.2.1 Logging in development
11.2.2 Error handling
11.2.3 Logging with Docker Compose
11.2.4 Basic logging with Kubernetes
11.2.5 Kubernetes log aggregation
11.2.6 Enterprise logging, monitoring, and alerts
11.2.7 Observability for microservices
11.2.8 Automatic restarts with Kubernetes health checks
11.3 Debugging microservices
11.3.1 The debugging process
11.3.2 Debugging production microservices
11.4 Reliability and recovery
11.4.1 Practicing defensive programming
11.4.2 Practicing defensive testing
11.4.3 Protecting our data
11.4.4 Replication and redundancy
11.4.5 Fault isolation and graceful degradation
11.4.6 Simple techniques for fault tolerance
11.4.7 Advanced techniques for fault tolerance
11.5 Continue your learning
Chapter 12: Pathways to scalability
12.1 Our future is scalable
12.2 Scaling the development process
12.2.1 Multiple teams
12.2.2 Independent code repositories
12.2.3 Splitting the code repository
12.2.4 The meta-repo
12.2.5 Creating multiple environments
12.2.6 Production workflow
12.2.7 Separating application configuration from microservices configuration
12.3 Scaling performance
12.3.1 Vertically scaling the cluster
12.3.2 Horizontally scaling the cluster
12.3.3 Horizontally scaling an individual microservice
12.3.4 Elastic scaling for the cluster
12.3.5 Elastic scaling for an individual microservice
12.3.6 Scaling the database
12.3.7 Don’t scale too early
12.4 Mitigating problems caused by changes
12.4.1 Automated testing and deployment
12.4.2 Branch protection
12.4.3 Deploying to our test environment
12.4.4 Rolling updates
12.4.5 Blue-green deployments
12.5 Basic security
12.5.1 Trust models
12.5.2 Sensitive configuration
12.6 Refactoring to microservices
12.6.1 Do you really need microservices?
12.6.2 Plan your conversion and involve everyone
12.6.3 Know your legacy code
12.6.4 Improve your automation
12.6.5 Build your microservices platform
12.6.6 Carve along natural seams
12.6.7 Prioritize the extraction
12.6.8 And repeat . . .
12.7 The spectrum of possibilities
12.7.1 It doesn’t have to be perfect
12.7.2 The diminishing return on investment
12.7.3 The hybrid approach
12.8 Microservices on a budget
12.9 From simple beginnings . . .
12.10 Continue your learning
index
📜 SIMILAR VOLUMES
The best way to learn microservices development is to build something! Bootstrapping Microservices with Docker, Kubernetes, and Terraform guides you from zero through to a complete microservices project, including fast prototyping, development, and deployment. You’ll get your feet wet using industry …
Start using Kubernetes in complex big data and enterprise applications, including Docker containers. Starting with installing Kubernetes on a single node, Kubernetes Microservices with Docker introduces Kubernetes with a simple Hello example and discusses using environment variables in Kubernetes.