Introduction to Spark Programming
Course Description
This course introduces the Apache Spark distributed computing engine and is suitable for developers, data analysts, architects, technical managers, and anyone who needs to use Spark in a hands-on manner.
The course provides a solid technical introduction to the Spark architecture and how Spark works. It covers the basic building blocks of Spark (e.g. RDDs and the distributed compute engine), as well as higher-level constructs that provide a simpler and more capable interface (e.g. Spark SQL and DataFrames). It also covers more advanced capabilities such as the use of Spark Streaming to process streaming data, and provides an overview of Spark GraphX (graph processing) and Spark MLlib (machine learning). Finally, the course explores possible performance issues and strategies for optimization.
The course is very hands-on, with many labs. Participants will interact with Spark through the Spark shell (for interactive, ad-hoc processing) as well as through programs using the Spark API. Labs currently support Scala; contact us for Python/Java support.
The Apache Spark distributed computing engine is rapidly becoming a primary tool for processing and analyzing large-scale data sets. It has many advantages over existing engines such as Hadoop, including runtime speeds 10-100x faster and a much simpler programming model. After taking this course, you will be ready to work with Spark in an informed and productive manner.
3 days
Contact us for pricing
Prerequisites
Reasonable programming experience. An overview of Scala is provided for those who don't know it.
Knowledge and Skills Gained
Understand the need for Spark in data processing
Understand the Spark architecture and how it distributes computations to cluster nodes
Be familiar with basic installation / setup / layout of Spark
Use the Spark shell for interactive and ad-hoc operations
Understand RDDs (Resilient Distributed Datasets), and data partitioning, pipelining, and computations
Understand/use RDD ops such as map(), filter(), reduce(), groupByKey(), join(), etc.
Understand Spark's data caching and its usage
Write/run standalone Spark programs with the Spark API
Use Spark SQL / DataFrames to efficiently process structured data
Use Spark Streaming to process streaming (real-time) data
Understand performance implications and optimizations when using Spark
Be familiar with Spark GraphX and MLlib
Scala Ramp Up
Scala Introduction, Variables, Data Types, Control Flow
The Scala Interpreter
Collections and their Standard Methods (e.g. map())
Functions, Methods, Function Literals
Class, Object, Trait
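To give a feel for the ramp-up material, here is a small self-contained sketch (names are illustrative, not from the course labs) covering collections, function literals, and a class implementing a trait:

```scala
// Pure Scala, no Spark required: collections, function literals, class/trait.
object ScalaBasics {
  // A trait defines an interface; a class implements it.
  trait Greeter { def greet(name: String): String }
  class FriendlyGreeter extends Greeter {
    def greet(name: String): String = s"Hello, $name"
  }

  def main(args: Array[String]): Unit = {
    val nums = List(1, 2, 3, 4)        // immutable collection
    val doubled = nums.map(_ * 2)      // standard method with a function literal
    val evens = nums.filter(_ % 2 == 0)
    val sum = nums.reduce(_ + _)
    println(doubled)                   // List(2, 4, 6, 8)
    println(evens)                     // List(2, 4)
    println(sum)                       // 10
    println(new FriendlyGreeter().greet("Spark"))
  }
}
```

These same collection methods (map, filter, reduce) reappear on RDDs, which is why the Scala ramp-up precedes the Spark material.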
Introduction to Spark
Overview, Motivations, Spark Systems
Spark Ecosystem
Spark vs. Hadoop
Acquiring and Installing Spark
The Spark Shell
RDDs and Spark Architecture
RDD Concepts, Lifecycle, Lazy Evaluation
RDD Partitioning and Transformations
Working with RDDs - Creating and Transforming (map, filter, etc.)
Key-Value Pairs - Definition, Creation, and Operations
Caching - Concepts, Storage Type, Guidelines
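A sketch of the kind of session this module covers, as typed into the Spark shell (where `sc` is the provided SparkContext; the file name "data.txt" is a placeholder):

```scala
// RDD creation, transformations, key-value pairs, and caching in the shell.
val lines = sc.textFile("data.txt")            // RDD[String], lazily evaluated
val words = lines.flatMap(_.split("\\s+"))     // transformation: nothing runs yet
val pairs = words.map(w => (w, 1))             // key-value pairs
val counts = pairs.reduceByKey(_ + _)          // combines values per key (shuffle)
counts.cache()                                 // keep results in memory for reuse
counts.take(5).foreach(println)                // action: triggers the computation
```

Note that all lines before `take(5)` only build the lineage; Spark's lazy evaluation defers work until an action runs.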
Spark API
Overview, Basic Driver Code, SparkConf
Creating and Using a SparkContext
RDD API
Building and Running Applications
Application Lifecycle
Cluster Managers
Logging and Debugging
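A minimal standalone driver of the shape covered in this module (application and path arguments are placeholders; `local[*]` stands in for a real cluster manager URL):

```scala
// Standalone Spark application: SparkConf -> SparkContext -> RDD work -> stop.
import org.apache.spark.{SparkConf, SparkContext}

object WordCountApp {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("WordCountApp")
      .setMaster("local[*]")     // local mode; a cluster manager URL would go here
    val sc = new SparkContext(conf)
    try {
      val counts = sc.textFile(args(0))
        .flatMap(_.split("\\s+"))
        .map((_, 1))
        .reduceByKey(_ + _)
      counts.saveAsTextFile(args(1))
    } finally {
      sc.stop()                  // release cluster resources
    }
  }
}
```

Such an application is built into a jar and submitted with spark-submit, which is where the application lifecycle and cluster manager topics come in.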
Spark SQL
Introduction and Usage
DataFrames and SQLContext
Working with JSON
Querying - The DataFrame DSL, and SQL
Data Formats
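A sketch of the DataFrame workflow using the SQLContext named in the outline (Spark 1.x-style API; "people.json" and the column names are assumed placeholders):

```scala
// DataFrames from JSON, queried via the DSL and via SQL.
import org.apache.spark.sql.SQLContext

val sqlContext = new SQLContext(sc)
val people = sqlContext.read.json("people.json")   // schema inferred from the JSON
people.printSchema()

// The DataFrame DSL:
people.filter(people("age") > 21).select("name", "age").show()

// The equivalent SQL query:
people.registerTempTable("people")
sqlContext.sql("SELECT name, age FROM people WHERE age > 21").show()
```

Both forms compile to the same optimized execution plan, which is why DataFrames process structured data more efficiently than hand-written RDD code.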
Spark Streaming
Overview and Streaming Basics
DStreams (Discretized Streams)
Architecture, Stateless, Stateful, and Windowed Transformations
Spark Streaming API
Programming and Transformations
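A sketch of a DStream program of the kind this module builds (Spark 1.x streaming API; the host and port are placeholders):

```scala
// Word counts over a socket stream, in 10-second batches.
import org.apache.spark.streaming.{Seconds, StreamingContext}

val ssc = new StreamingContext(sc, Seconds(10))       // batch interval
val lines = ssc.socketTextStream("localhost", 9999)   // assumed source
val counts = lines.flatMap(_.split(" ")).map((_, 1)).reduceByKey(_ + _)
counts.print()

// A windowed variant: counts over the last 30s, recomputed every 10s.
val windowed = lines.flatMap(_.split(" ")).map((_, 1))
  .reduceByKeyAndWindow((a: Int, b: Int) => a + b, Seconds(30), Seconds(10))
windowed.print()

ssc.start()
ssc.awaitTermination()
```

The stateless transformation applies per batch, while the windowed one spans multiple batches — the stateless/stateful/windowed distinction listed above.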
Performance Characteristics and Tuning
The Spark UI
Narrow vs. Wide Dependencies
Minimizing Data Processing and Shuffling
Using Caching
Using Broadcast Variables and Accumulators
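The broadcast-variable and accumulator techniques above can be sketched as follows (toy data; the lookup table is a placeholder):

```scala
// Broadcast: ship a read-only lookup table to each executor once,
// rather than serializing it with every task.
val lookup = sc.broadcast(Map("a" -> 1, "b" -> 2))
val data = sc.parallelize(Seq("a", "b", "a", "x"))
val mapped = data.map(k => lookup.value.getOrElse(k, 0))

// Accumulator: a counter written on executors, read back at the driver.
val badRecords = sc.accumulator(0)
data.foreach(k => if (!lookup.value.contains(k)) badRecords += 1)
println(badRecords.value)   // number of keys missing from the lookup table
```

Broadcasting avoids repeated shipping of shared data; accumulators give a safe way to aggregate side information (counts, diagnostics) without a shuffle.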
Spark GraphX Overview
Introduction
Constructing Simple Graphs
GraphX API
Shortest Path Example
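A minimal GraphX sketch in the spirit of this overview (vertex names and edge labels are invented examples):

```scala
// Constructing a small graph from vertex and edge RDDs.
import org.apache.spark.graphx.{Edge, Graph}
import org.apache.spark.graphx.lib.ShortestPaths

val vertices = sc.parallelize(Seq((1L, "Alice"), (2L, "Bob"), (3L, "Carol")))
val edges = sc.parallelize(Seq(Edge(1L, 2L, "follows"), Edge(2L, 3L, "follows")))
val graph = Graph(vertices, edges)

println(graph.numVertices)   // 3
println(graph.numEdges)      // 2

// Shortest paths (in hops) from every vertex to vertex 3.
val result = ShortestPaths.run(graph, Seq(3L))
result.vertices.collect().foreach(println)
```

GraphX represents a graph as two RDDs (vertices and edges), so the same distributed execution model covered earlier applies to graph algorithms.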
MLlib Overview
Introduction
Feature Vectors
Clustering / Grouping, K-Means
Recommendations
Classifications
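A K-Means sketch using MLlib's RDD-based API (the points are toy data chosen to form two obvious clusters):

```scala
// Clustering feature vectors with K-Means.
import org.apache.spark.mllib.clustering.KMeans
import org.apache.spark.mllib.linalg.Vectors

val points = sc.parallelize(Seq(
  Vectors.dense(0.0, 0.0), Vectors.dense(0.1, 0.1),   // one cluster
  Vectors.dense(9.0, 9.0), Vectors.dense(9.1, 9.1)    // another
))
val model = KMeans.train(points, 2, 20)               // k = 2, 20 iterations max
model.clusterCenters.foreach(println)
println(model.predict(Vectors.dense(0.05, 0.05)))     // cluster id for a new point
```

The same pattern — turn data into feature vectors, train a model, predict — carries over to the recommendation and classification topics above.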