Apache Spark Quick Start Guide: Quickly learn the art of writing efficient big data applications with Apache Spark
By Shrey Mehrotra and Akash Grade
- Publisher: Packt Publishing
- Year: 2019
- Language: English
- Category: Library
Synopsis
A practical guide to solving complex data processing challenges by applying the best optimization techniques in Apache Spark.
Key Features
Book Description
Apache Spark is a flexible framework that allows processing of batch and real-time data. Its unified engine has made it quite popular for big data use cases. This book will help you to get started with Apache Spark 2.0 and write big data applications for a variety of use cases.
Apache Spark is one of the most popular big data processing frameworks. Although this book is intended to help you get started with it, the book also focuses on explaining the core concepts.
This practical guide provides a quick start to the Spark 2.0 architecture and its components. It teaches you how to set up Spark on your local machine. As we move ahead, you will be introduced to resilient distributed datasets (RDDs) and DataFrame APIs, and their corresponding transformations and actions. Then, we move on to the life cycle of a Spark application and learn about the techniques used to debug slow-running applications. You will also go through Spark's built-in modules for SQL, streaming, machine learning, and graph analysis.
Finally, the book will lay out the best practices and optimization techniques that are key for writing efficient Spark applications. By the end of this book, you will have a sound fundamental understanding of the Apache Spark framework and you will be able to write and optimize Spark applications.
What you will learn
Who this book is for
If you are a big data enthusiast who loves processing huge amounts of data, this book is for you. If you are a data engineer looking for the best optimization techniques for your Spark applications, you will also find this book helpful. It likewise serves data scientists who want to implement their machine learning algorithms in Spark. You need a basic understanding of at least one programming language, such as Scala, Python, or Java.
Subjects
Computer Technology; Nonfiction; COM018000; COM021030; COM062000
SIMILAR VOLUMES
Gain the key language concepts and programming techniques of Scala in the context of big data analytics and Apache Spark. The book begins by introducing you to Scala and establishes a firm contextual understanding of why you should learn this language, how it stands in comparison to Java, and how
Build efficient data flow and machine learning programs with this flexible, multi-functional open-source cluster-computing framework. Key Features: Master the art of real-time big data processing and machine learning; Explore a wide range of use-cases to analyze
Integrate open source data analytics and build business intelligence on SQL databases with Apache Superset. Its quick, intuitive approach to data visualization in a web application makes it easy to create interactive dashboards.