<p><b>Implement, run, operate, and test data processing pipelines using Apache Beam</b></p><h4>Key Features</h4><ul><li>Understand how to improve usability and productivity when implementing Beam pipelines</li><li>Learn how to use stateful processing to implement complex use cases using Apache Beam</li></ul>
Building Big Data Pipelines with Apache Beam: Use a single programming model for both batch and stream data processing
by Jan Lukavský
- Publisher: Packt Publishing
- Year: 2022
- Language: English
- Pages: 342
- Category: Library
Synopsis
Apache Beam is an open source, unified programming model for implementing and executing data processing pipelines, including Extract, Transform, and Load (ETL), batch, and stream processing. This book will help you to confidently build data processing pipelines with Apache Beam.

You'll start with an overview of Apache Beam and understand how to use it to implement basic pipelines. You'll also learn how to test and run the pipelines efficiently. As you progress, you'll explore how to structure your code for reusability and use various Domain-Specific Languages (DSLs). Later chapters will show you how to use schemas and query your data using (streaming) SQL. Finally, you'll learn advanced Apache Beam concepts, such as implementing your own I/O connectors. By the end of this book, you'll have gained a deep understanding of the Apache Beam model and be able to apply it to solve problems.

This book is for data engineers, data scientists, and data analysts who want to learn how Apache Beam works. Intermediate-level knowledge of the Java programming language is assumed.
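One core idea the synopsis alludes to, and which Chapter 1 covers in depth, is event-time windowing: assigning timestamped events to windows and aggregating per window. As a rough, library-free illustration (not the Beam API itself; the window size and event data below are made up), here is a plain-Python sketch of what Beam's fixed windows plus a count combine conceptually do:

```python
# Illustrative sketch (plain Python, not Apache Beam) of assigning
# timestamped events to fixed event-time windows and counting per window,
# roughly what Beam's FixedWindows windowing plus a Count combine achieve.
from collections import defaultdict

WINDOW_SIZE = 60  # seconds; a hypothetical fixed-window length


def assign_fixed_window(event_time):
    """Map an event timestamp to the start of its fixed window."""
    return event_time - (event_time % WINDOW_SIZE)


def count_per_window(events):
    """events: iterable of (event_time, value) pairs.

    Returns a dict mapping each window start to the number of events
    whose timestamps fall inside that window.
    """
    counts = defaultdict(int)
    for event_time, _value in events:
        counts[assign_fixed_window(event_time)] += 1
    return dict(counts)


events = [(5, "a"), (59, "b"), (61, "c"), (130, "d")]
print(count_per_window(events))  # {0: 2, 60: 1, 120: 1}
```

In Beam itself, the window assignment and the aggregation are separate, declarative steps on a PCollection, and the runner additionally tracks event-time progress with watermarks so it knows when a window's results can be emitted; that machinery is what the Chapter 1 sections on triggers, timers, and pane accumulation describe.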
Table of Contents
Cover
Title Page
Copyright & Credits
Contributors
Table of Contents
Preface
Section 1: Apache Beam: Essentials
Chapter 1: Introduction to Data Processing with Apache Beam
Technical requirements
Why Apache Beam?
Writing your first pipeline
Running our pipeline against streaming data
Exploring the key properties of unbounded data
Measuring event time progress inside data streams
States and triggers
Timers
Assigning data to windows
Defining the life cycle of a state in terms of windows
Pane accumulation
Unifying batch and streaming data processing
Summary
Chapter 2: Implementing, Testing, and Deploying Basic Pipelines
Technical requirements
Setting up the environment for this book
Installing Apache Kafka
Making our code accessible from minikube
Installing Apache Flink
Reinstalling the complete environment
Task 1 – calculating the K most frequent words in a stream of lines of text
Defining the problem
Discussing the problem decomposition
Implementing the solution
Testing our solution
Deploying our solution
Task 2 – calculating the maximal length of a word in a stream
Defining the problem
Discussing the problem decomposition
Implementing the solution
Testing our solution
Deploying our solution
Specifying the PCollection Coder object and the TypeDescriptor object
Understanding default triggers, on time, and closing behavior
Introducing the primitive PTransform object – Combine
Task 3 – calculating the average length of words in a stream
Defining the problem
Discussing the problem decomposition
Implementing the solution
Testing our solution
Deploying our solution
Task 4 – calculating the average length of words in a stream with fixed lookback
Defining the problem
Discussing the problem decomposition
Implementing the solution
Testing our solution
Deploying our solution
Ensuring pipeline upgradability
Task 5 – calculating performance statistics for a sport activity tracking application
Defining the problem
Discussing the problem decomposition
Solution implementation
Testing our solution
Deploying our solution
Introducing the primitive PTransform object – GroupByKey
Introducing the primitive PTransform object – Partition
Summary
Chapter 3: Implementing Pipelines Using Stateful Processing
Technical requirements
Task 6 – Using an external service for data augmentation
Defining the problem
Discussing the problem decomposition
Implementing the solution
Testing our solution
Deploying our solution
Introducing the primitive PTransform object – stateless ParDo
Task 7 – Batching queries to an external RPC service
Defining the problem
Discussing the problem decomposition
Implementing the solution
Task 8 – Batching queries to an external RPC service with defined batch sizes
Defining the problem
Discussing the problem decomposition
Implementing the solution
Introducing the primitive PTransform object – stateful ParDo
Describing the theoretical properties of the stateful ParDo object
Applying the theoretical properties of the stateful ParDo object to the API of DoFn
Using side outputs
Defining droppable data in Beam
Task 9 – Separating droppable data from the rest of the data processing
Defining the problem
Discussing the problem decomposition
Implementing the solution
Testing our solution
Deploying our solution
Task 10 – Separating droppable data from the rest of the data processing, part 2
Defining the problem
Discussing the problem decomposition
Implementing the solution
Testing our solution
Deploying our solution
Using side inputs
Summary
Section 2: Apache Beam: Toward Improving Usability
Chapter 4: Structuring Code for Reusability
Technical requirements
Explaining PTransform expansion
Task 11 – enhancing SportTracker by runner motivation using side inputs
Problem definition
Problem decomposition discussion
Solution implementation
Testing our solution
Deploying our solution
Introducing composite transform – CoGroupByKey
Task 12 – enhancing SportTracker by runner motivation using CoGroupByKey
Problem definition
Problem decomposition discussion
Solution implementation
Introducing the Join library DSL
Stream-to-stream joins explained
Task 13 – writing a reusable PTransform – StreamingInnerJoin
Problem definition
Problem decomposition discussion
Solution implementation
Testing our solution
Deploying our solution
Table-stream duality
Summary
Chapter 5: Using SQL for Pipeline Implementation
Technical requirements
Understanding schemas
Attaching a schema to a PCollection
Transforms for PCollections with schemas
Implementing our first streaming pipeline using SQL
Task 14 – implementing SQLMaxWordLength
Problem definition
Problem decomposition discussion
Solution implementation
Task 15 – implementing SchemaSportTracker
Problem definition
Problem decomposition discussion
Solution implementation
Task 16 – implementing SQLSportTrackerMotivation
Problem definition
Problem decomposition discussion
Solution implementation
Further development of Apache Beam SQL
Summary
Chapter 6: Using Your Preferred Language with Portability
Technical requirements
Introducing the portability layer
Portable representation of the pipeline
Job Service
SDK harness
Implementing our first pipelines in the Python SDK
Implementing our first Python pipeline
Implementing our first streaming Python pipeline
Task 17 – implementing MaxWordLength in the Python SDK
Problem definition
Problem decomposition discussion
Solution implementation
Testing our solution
Deploying our solution
Python SDK type hints and coders
Task 18 – implementing SportTracker in the Python SDK
Problem definition
Solution implementation
Testing our solution
Deploying our solution
Task 19 – implementing RPCParDo in the Python SDK
Problem definition
Solution implementation
Testing our solution
Deploying our solution
Task 20 – implementing SportTrackerMotivation in the Python SDK
Problem definition
Solution implementation
Deploying our solution
Using the DataFrame API
Interactive programming using InteractiveRunner
Introducing and using cross-language pipelines
Summary
Section 3: Apache Beam: Advanced Concepts
Chapter 7: Extending Apache Beam's I/O Connectors
Technical requirements
Defining splittable DoFn as a unification for bounded and unbounded sources
Task 21 – Implementing our own splittable DoFn – a streaming file source
The problem definition
Discussing the problem decomposition
Implementing the solution
Testing our solution
Deploying our solution
Task 22 – A non-I/O application of splittable DoFn – PiSampler
The problem definition
Discussing the problem decomposition
Implementing the solution
Testing our solution
Deploying our solution
The legacy Source API and the Read transform
Writing a custom data sink
The inherent non-determinism of Apache Beam pipelines
Summary
Chapter 8: Understanding How Runners Execute Pipelines
Describing the anatomy of an Apache Beam runner
Identifying which transforms should be overridden
Explaining the differences between classic and portable runners
Classic runners
Portable pipeline representations
The executable stage concept and the pipeline fusion process
Understanding how a runner handles state
Ensuring fault tolerance
Local state with periodic checkpoints
Remote state
Exploring the Apache Beam capability matrix
Understanding windowing semantics in depth
Merging and non-merging windows
Debugging pipelines and using Apache Beam metrics for observability
Using metrics in the Java SDK
Summary
Index
Other Books You May Enjoy
SIMILAR VOLUMES
Most data engineers know that performance issues in a distributed computing environment can easily lead to issues impacting the overall efficiency and effectiveness of data engineering tasks. While Python remains a popular choice for data engineering due to its ease of use, Scala shines in scenarios
<p>Gain the key language concepts and programming techniques of Scala in the context of big data analytics and Apache Spark. The book begins by introducing you to Scala and establishes a firm contextual understanding of why you should learn this language, how it stands in comparison to Java, and how</p>
<span>This book explains how to scale Apache Spark 3 to handle massive amounts of data, either via batch or streaming processing. It covers how to use Spark's structured APIs to perform complex data transformations and analyses you can use to implement end-to-end analytics workflows. This book cover</span>