This two-day course by Dean Wampler, Ph.D., teaches developers how to implement data processing pipelines and analytics using Apache Spark. Through hands-on exercises, developers learn the Spark Core, SQL/DataFrame, Streaming, and MLlib (machine learning) APIs, along with Spark internals and tips for improving application performance. Additional coverage includes integration with Mesos, Hadoop, and Reactive frameworks such as Akka.
3200 PLN + VAT
The trainer is chosen from the VirtusLab/Lightbend team.
To benefit fully from this course, you should have:
- experience with Scala, such as completion of the Fast Track to Scala course;
- experience with SQL, machine learning, and other Big Data tools (helpful, but not required);
- a laptop with JDK 7 or above installed, plus Typesafe Activator and either Scala IDE, IntelliJ IDEA with the Scala plugin, or a programmer’s text editor of your choice.
After completing this course, you should:
- understand how to use the Spark Scala APIs to implement various data analytics algorithms for offline (batch-mode) and event-streaming applications;
- understand Spark internals;
- understand Spark performance considerations;
- understand how to test and deploy Spark applications;
- understand the basics of integrating Spark with Mesos, Hadoop, and Akka.
The course covers the following topics:
- introduction – Why Spark:
  - how Spark improves on Hadoop MapReduce;
  - the core abstractions in Spark;
  - what happens during a Spark job;
  - the Spark ecosystem;
  - deployment options;
  - references for more information;
- Spark’s Core API (illustrated by the sketch below):
  - Resilient Distributed Datasets (RDDs) and how they implement your job;
  - using the Spark Shell (interpreter) vs. submitting Spark batch jobs;
  - using the Spark web console;
  - reading and writing data files;
  - working with structured and unstructured data;
  - building data transformation pipelines;
  - Spark under the hood: caching, checkpointing, partitioning, shuffling, etc.;
  - mastering the RDD API;
  - broadcast variables and accumulators;
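A minimal sketch of how these pieces fit together, assuming the Spark 1.x Scala API used in the course; the application name and the input/output paths are placeholders:

```scala
import org.apache.spark.{SparkConf, SparkContext}

object WordCount {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("core-api-sketch").setMaster("local[*]")
    val sc   = new SparkContext(conf)

    // Broadcast variable: a small, read-only lookup shared by all tasks.
    val stopWords = sc.broadcast(Set("a", "an", "the"))
    // Accumulator: a write-only counter aggregated back on the driver.
    val blankLines = sc.accumulator(0L, "blank lines")

    val counts = sc.textFile("data/input.txt")              // placeholder path
      .flatMap { line =>
        if (line.trim.isEmpty) blankLines += 1L
        line.toLowerCase.split("""\W+""")
      }
      .filter(word => word.nonEmpty && !stopWords.value(word))
      .map(word => (word, 1))
      .reduceByKey(_ + _)                                    // triggers a shuffle
      .cache()                                               // keep the result for reuse

    counts.saveAsTextFile("output/word-counts")              // placeholder path
    println(s"Blank lines seen: ${blankLines.value}")
    sc.stop()
  }
}
```

The same pipeline can be typed directly into the Spark Shell, which provides `sc` for you, so the object wrapper and the explicit `SparkConf` disappear.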
- Spark SQL and DataFrames (see the DataFrame sketch below):
  - working with the DataFrame API for structured data;
  - working with SQL;
  - performance optimizations;
  - support for JSON and Parquet formats;
  - integration with Hadoop Hive;
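A minimal DataFrame/SQL sketch, assuming the Spark 1.x API (`SQLContext`; later versions replace it with `SparkSession`); the file paths and the `name`/`age` columns are placeholders:

```scala
import org.apache.spark.sql.SQLContext

val sqlContext = new SQLContext(sc)                         // sc: an existing SparkContext

// Infer a schema from a JSON file and load it as a DataFrame.
val people = sqlContext.read.json("data/people.json")       // placeholder path
people.printSchema()

// DataFrame API: the same query you could express in SQL.
people.filter(people("age") > 21).select("name", "age").show()

// SQL: register the DataFrame as a temporary table and query it.
people.registerTempTable("people")
sqlContext.sql("SELECT name, age FROM people WHERE age > 21").show()

// Persist the result in the columnar Parquet format.
people.write.parquet("output/people.parquet")               // placeholder path
```

Both forms are planned by the same Catalyst optimizer, which is where much of Spark SQL’s performance advantage over hand-written RDD code comes from.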
- processing events with Spark Streaming (see the streaming sketch below):
  - working with time slices (“mini-batches”) of events;
  - working with moving windows of mini-batches;
  - reusing code between batch mode and streaming: the Lambda Architecture;
  - working with different streaming sources: sockets, file systems, Kafka, etc.;
  - resiliency and fault-tolerance considerations;
  - stateful transformations (e.g., running statistics);
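A minimal streaming sketch with 2-second mini-batches and a sliding window, assuming a text source on a local socket; the host, port, and checkpoint directory are placeholders:

```scala
import org.apache.spark.streaming.{Seconds, StreamingContext}

val ssc = new StreamingContext(sc, Seconds(2))   // 2-second mini-batches
ssc.checkpoint("checkpoints")                    // checkpoint dir, required for stateful transformations

// One of several possible sources: a TCP socket emitting lines of text.
val lines = ssc.socketTextStream("localhost", 9999)

// Word counts over a 30-second window that slides every 10 seconds.
val windowedCounts = lines
  .flatMap(_.split("""\W+"""))
  .filter(_.nonEmpty)
  .map(word => (word, 1))
  .reduceByKeyAndWindow(_ + _, Seconds(30), Seconds(10))

windowedCounts.print()
ssc.start()
ssc.awaitTermination()
```

Swapping the source for a file-system directory or a Kafka topic changes only the line that creates the input DStream; the rest of the pipeline, and any batch code it shares, stays the same.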
- other Spark-based libraries (see the MLlib sketch below):
  - MLlib for machine learning;
  - discussion of GraphX for graph algorithms, Tachyon for distributed caching, and BlinkDB for approximate queries;
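A minimal MLlib sketch: clustering points with K-means, assuming the input is a text file of comma-separated numbers; the path and the choice of k are placeholders:

```scala
import org.apache.spark.mllib.clustering.KMeans
import org.apache.spark.mllib.linalg.Vectors

// Parse each line "x,y,..." into a dense vector and cache it for the iterative algorithm.
val points = sc.textFile("data/points.csv")                 // placeholder path
  .map(line => Vectors.dense(line.split(',').map(_.toDouble)))
  .cache()

val model = KMeans.train(points, k = 3, maxIterations = 20)

// Assign a few points to clusters and report the overall clustering cost.
points.take(5).foreach(p => println(s"$p -> cluster ${model.predict(p)}"))
println(s"Within-set sum of squared errors: ${model.computeCost(points)}")
```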
- deploying to clusters (see the configuration sketch below):
  - Spark’s clustering abstractions: cluster vs. client deployments, coarse-grained and fine-grained process management;
  - Standalone mode;
  - Hadoop YARN;
  - Cassandra rings;
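To a first approximation, the choice of cluster manager shows up in application code only as the master URL. A sketch with placeholder host names, using the Spark 1.x master strings:

```scala
import org.apache.spark.SparkConf

val conf = new SparkConf()
  .setAppName("deployment-sketch")
  .setMaster("spark://master-host:7077")        // Standalone cluster (placeholder host)
  // .setMaster("mesos://mesos-master:5050")    // Mesos, coarse- or fine-grained mode
  // .setMaster("yarn-client")                  // Hadoop YARN, client deploy mode (Spark 1.x)
  // .setMaster("local[*]")                     // local development and testing
```

In practice the master is usually omitted from the code and supplied to spark-submit with `--master`, so the same jar runs unchanged on any of these cluster managers.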
- using Spark with the Typesafe Reactive Platform (see the sketch below):
  - Akka Streams and Spark Streaming.
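One way to connect the two, assuming Spark 1.x, whose streaming module ships an Akka actor receiver (moved out of the core project in Spark 2.0): a receiver actor stores whatever it is sent, and an Akka Streams pipeline can feed it, for example through `Sink.actorRef`. The names and the batch interval below are placeholders:

```scala
import akka.actor.{Actor, Props}
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.receiver.ActorHelper

// A receiver actor: every message it gets becomes a record in the mini-batches.
class EventForwarder extends Actor with ActorHelper {
  def receive = {
    case event: String => store(event)
  }
}

val ssc = new StreamingContext(sc, Seconds(2))
val events = ssc.actorStream[String](Props[EventForwarder], "event-forwarder")
events.count().print()
ssc.start()
```

For durability and back-pressure across process boundaries, a log such as Kafka is often placed between the Akka side and Spark Streaming instead of a direct actor connection.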