Azure Databricks Cookbook: Accelerate and scale real-time analytics solutions using the Apache Spark-based analytics service
✍ Scribed by Phani Raj, Vinod Jaiswal
- Publisher
- Packt Publishing
- Year
- 2021
- Language
- English
- Pages
- 452
- Category
- Library
✦ Synopsis
Get to grips with building and productionizing end-to-end big data solutions in Azure and learn best practices for working with large datasets
Key Features
- Integrate with Azure Synapse Analytics, Cosmos DB, and Azure HDInsight Kafka Cluster to scale and analyze your projects and build pipelines
- Use Databricks SQL to run ad hoc queries on your data lake and create dashboards
- Productionize a solution using CI/CD for deploying notebooks and Azure Databricks Service to various environments
Book Description
Azure Databricks is a unified collaborative platform for performing scalable analytics in an interactive environment. The Azure Databricks Cookbook provides recipes to get hands-on with the analytics process, including ingesting data from various batch and streaming sources and building a modern data warehouse.
The book starts by teaching you how to create an Azure Databricks instance within the Azure portal, Azure CLI, and ARM templates. You'll work through clusters in Databricks and explore recipes for ingesting data from sources, including files, databases, and streaming sources such as Apache Kafka and EventHub. The book will help you explore all the features supported by Azure Databricks for building powerful end-to-end data pipelines. You'll also find out how to build a modern data warehouse by using Delta tables and Azure Synapse Analytics. Later, you'll learn how to write ad hoc queries and extract meaningful insights from the data lake by creating visualizations and dashboards with Databricks SQL. Finally, you'll deploy and productionize a data pipeline as well as deploy notebooks and Azure Databricks service using continuous integration and continuous delivery (CI/CD).
By the end of this Azure book, you'll be able to use Azure Databricks to streamline different processes involved in building data-driven apps.
What you will learn
- Read and write data from and to various Azure resources and file formats
- Build a modern data warehouse with Delta Tables and Azure Synapse Analytics
- Explore jobs, stages, and tasks and see how Spark lazy evaluation works
- Handle concurrent transactions and learn performance optimization in Delta tables
- Learn Databricks SQL and use it to create real-time dashboards
- Integrate Azure DevOps for version control, deploying, and productionizing solutions with CI/CD pipelines
- Discover how to use RBAC and ACLs to restrict data access
- Build an end-to-end data processing pipeline for near real-time data analytics
Who this book is for
This recipe-based book is for data scientists, data engineers, big data professionals, and machine learning engineers who want to perform data analytics on their applications. Prior experience of working with Apache Spark and Azure is necessary to get the most out of this book.
Table of Contents
- Creating an Azure Databricks Service
- Reading and Writing Data from and to Various Azure Services and File Formats
- Understanding Spark Query Execution
- Working with Streaming Data
- Integrating with Azure Key Vault, App Configuration, and Log Analytics
- Exploring Delta Lake in Azure Databricks
- Implementing Near-Real-Time Analytics and Building a Modern Data Warehouse
- Databricks SQL
- DevOps Integrations and Implementing CI/CD for Azure Databricks
- Understanding Security and Monitoring in Azure Databricks
✦ Table of Contents
Cover
Title Page
Copyright and Credits
Contributors
Table of Contents
Preface
Chapter 1: Creating an Azure Databricks Service
Technical requirements
Creating a Databricks workspace in the Azure portal
Getting ready
How to do it…
How it works…
Creating a Databricks service using the Azure CLI (command-line interface)
Getting ready
How to do it…
How it works…
There's more…
Creating a Databricks service using Azure Resource Manager (ARM) templates
Getting ready
How to do it…
How it works…
Adding users and groups to the workspace
Getting ready
How to do it…
How it works…
There's more…
Creating a cluster from the user interface (UI)
Getting ready
How to do it…
How it works…
There's more…
Getting started with notebooks and jobs in Azure Databricks
Getting ready
How to do it…
How it works…
Authenticating to Databricks using a PAT
Getting ready
How to do it…
How it works…
There's more…
Chapter 2: Reading and Writing Data from and to Various Azure Services and File Formats
Technical requirements
Mounting ADLS Gen2 and Azure Blob storage to Azure DBFS
Getting ready
How to do it…
How it works…
There's more…
Reading and writing data from and to Azure Blob storage
Getting ready
How to do it…
How it works…
There's more…
Reading and writing data from and to ADLS Gen2
Getting ready
How to do it…
How it works…
Reading and writing data from and to an Azure SQL database using native connectors
Getting ready
How to do it…
How it works…
Reading and writing data from and to Azure Synapse SQL (dedicated SQL pool) using native connectors
Getting ready
How to do it…
How it works…
Reading and writing data from and to Azure Cosmos DB
Getting ready
How to do it…
How it works…
Reading and writing data from and to CSV and Parquet
Getting ready
How to do it…
How it works…
Reading and writing data from and to JSON, including nested JSON
Getting ready
How to do it…
How it works…
Chapter 3: Understanding Spark Query Execution
Technical requirements
Introduction to jobs, stages, and tasks
Getting ready
How to do it…
How it works…
Checking the execution details of all the executed Spark queries via the Spark UI
Getting ready
How to do it…
How it works…
Deep diving into schema inference
Getting ready
How to do it…
How it works…
There's more…
Looking into the query execution plan
Getting ready
How to do it…
How it works…
How joins work in Spark
Getting ready
How to do it…
How it works…
There's more…
Learning about input partitions
Getting ready
How to do it…
How it works…
Learning about output partitions
Getting ready
How to do it…
How it works…
Learning about shuffle partitions
Getting ready
How to do it…
How it works…
Storage benefits of different file types
Getting ready
How to do it…
How it works…
Chapter 4: Working with Streaming Data
Technical requirements
Reading streaming data from Apache Kafka
Getting ready
How to do it…
How it works…
Reading streaming data from Azure Event Hubs
Getting ready
How to do it…
How it works…
Reading data from Event Hubs for Kafka
Getting ready
How to do it…
How it works…
Streaming data from log files
Getting ready
How to do it…
How it works…
Understanding trigger options
Getting ready
How to do it…
How it works…
Understanding window aggregation on streaming data
Getting ready
How to do it…
How it works…
Understanding offsets and checkpoints
Getting ready
How to do it…
How it works…
Chapter 5: Integrating with Azure Key Vault, App Configuration, and Log Analytics
Technical requirements
Creating an Azure Key Vault to store secrets using the UI
Getting ready
How to do it…
How it works…
Creating an Azure Key Vault to store secrets using ARM templates
Getting ready
How to do it…
How it works…
Using Azure Key Vault secrets in Azure Databricks
Getting ready
How to do it…
How it works…
Creating an App Configuration resource
Getting ready
How to do it…
How it works…
Using App Configuration in an Azure Databricks notebook
Getting ready
How to do it…
How it works…
Creating a Log Analytics workspace
Getting ready
How to do it…
How it works…
Integrating a Log Analytics workspace with Azure Databricks
Getting ready
How to do it…
How it works…
Chapter 6: Exploring Delta Lake in Azure Databricks
Technical requirements
Delta table operations – create, read, and write
Getting ready
How to do it…
How it works…
There's more…
Streaming reads and writes to Delta tables
Getting ready
How to do it…
How it works…
Delta table data format
Getting ready
How to do it…
How it works…
There's more…
Handling concurrency
Getting ready
How to do it…
How it works…
Delta table performance optimization
Getting ready
How to do it…
How it works…
Constraints in Delta tables
Getting ready
How to do it…
How it works…
Versioning in Delta tables
Getting ready
How to do it…
How it works…
Chapter 7: Implementing Near-Real-Time Analytics and Building a Modern Data Warehouse
Technical requirements
Understanding the scenario for an end-to-end (E2E) solution
Getting ready
How to do it…
How it works…
Creating required Azure resources for the E2E demonstration
Getting ready
How to do it…
How it works…
Simulating a workload for streaming data
Getting ready
How to do it…
How it works…
Processing streaming and batch data using Structured Streaming
Getting ready
How to do it…
How it works…
Understanding the various stages of transforming data
Getting ready
How to do it…
How it works…
Loading the transformed data into Azure Cosmos DB and a Synapse dedicated pool
Getting ready
How to do it…
How it works…
Creating a visualization and dashboard in a notebook for near-real-time analytics
Getting ready
How to do it…
How it works…
Creating a visualization in Power BI for near-real-time analytics
Getting ready
How to do it…
How it works…
Using Azure Data Factory (ADF) to orchestrate the E2E pipeline
Getting ready
How to do it…
How it works…
Chapter 8: Databricks SQL
Technical requirements
How to create a user in Databricks SQL
Getting ready
How to do it…
How it works…
Creating SQL endpoints
Getting ready
How to do it…
How it works…
Granting access to objects to the user
Getting ready
How to do it…
How it works…
Running SQL queries in Databricks SQL
Getting ready
How to do it…
How it works…
Using query parameters and filters
Getting ready
How to do it…
How it works…
Introduction to visualizations in Databricks SQL
Getting ready
How to do it…
Creating dashboards in Databricks SQL
Getting ready
How to do it…
How it works…
Connecting Power BI to Databricks SQL
Getting ready
How to do it…
Chapter 9: DevOps Integrations and Implementing CI/CD for Azure Databricks
Technical requirements
How to integrate Azure DevOps with an Azure Databricks notebook
Getting ready
How to do it…
Using GitHub for Azure Databricks notebook version control
Getting ready
How to do it…
How it works…
Understanding the CI/CD process for Azure Databricks
Getting ready
How to do it…
How it works…
How to set up an Azure DevOps pipeline for deploying notebooks
Getting ready
How to do it…
How it works…
Deploying notebooks to multiple environments
Getting ready
How to do it…
How it works…
Enabling CI/CD in an Azure DevOps build and release pipeline
Getting ready
How to do it…
Deploying an Azure Databricks service using an Azure DevOps release pipeline
Getting ready
How to do it…
Chapter 10: Understanding Security and Monitoring in Azure Databricks
Technical requirements
Understanding and creating RBAC in Azure for ADLS Gen2
Getting ready
How to do it…
Creating ACLs using Storage Explorer and PowerShell
Getting ready
How to do it…
How it works…
How to configure credential passthrough
Getting ready
How to do it…
How to restrict data access to users using RBAC
Getting ready
How to do it…
How to restrict data access to users using ACLs
Getting ready
How to do it…
Deploying Azure Databricks in a VNet and accessing a secure storage account
Getting ready
How to do it…
There's more…
Using Ganglia reports for cluster health
Getting ready
How to do it…
Cluster access control
Getting ready
How to do it…
About Packt
Other Books You May Enjoy
Index
📜 SIMILAR VOLUMES
Analyze vast amounts of data in record time using Apache Spark with Databricks in the Cloud. Learn the fundamentals, and more, of running analytics on large clusters in Azure and AWS, using Apache Spark with Databricks on top. Discover how to squeeze the most value out of your data at a mere frac
Work through 70 recipes for implementing reliable data pipelines with Apache Spark, optimally store and process structured and unstructured data in Delta Lake, and use Databricks to orchestrate and govern your data Key Features Learn data ingestion, data transformation, and data management techniqu
Learn the right cutting-edge skills and knowledge to leverage Spark Streaming to implement a wide array of real-time, streaming applications. Pro Spark Streaming walks you through end-to-end real-time application development using real-world applications, data, and code. Taking an application-first