Apache Spark and Scala Training

Unlock the potential of Big Data with Apache Spark

  • 24 hours of Instructor-led training
  • Comprehensive and hands-on learning
  • Master the concepts of the Apache Spark framework
  • Learn about Apache Spark Core, Spark Internals, RDDs, Spark SQL, and more

What You Will Learn

Prerequisites

Knowledge of Big Data and Hadoop will be an advantage.

KnowledgeHut Experience

Instructor-led Live Classroom

Interact with instructors in real time: listen, learn, question, and apply. Our instructors are industry experts who deliver hands-on learning.

Curriculum Designed by Experts

Our courseware is always current and updated with the latest tech advancements. Stay globally relevant and empower yourself with the latest training!

Learn through Doing

Learn theory backed by practical case studies, exercises, and coding practice. Get skills and knowledge that can be effectively applied.

Mentored by Industry Leaders

Learn from the best in the field. Our mentors are all experienced professionals in the fields they teach.

Advance from the Basics

Learn concepts from scratch, and advance your learning through step-by-step guidance on tools and techniques.

Code Reviews by Professionals

Get reviews and feedback on your final projects from professional developers.

Curriculum

Learning Objectives: Understand Big Data and its components, such as HDFS. You will learn about the Hadoop cluster architecture, get an introduction to Spark, and see the difference between batch processing and real-time processing.

Topics:

  • What is Big Data?
  • Big Data Customer Scenarios
  • What is Hadoop?
  • Hadoop’s Key Characteristics
  • Hadoop Ecosystem and HDFS
  • Hadoop Core Components
  • Rack Awareness and Block Replication
  • YARN and its Advantages
  • Hadoop Cluster and its Architecture
  • Hadoop: Different Cluster Modes
  • Big Data Analytics with Batch & Real-Time Processing
  • Why is Spark Needed?
  • What is Spark?
  • How Does Spark Differ from Other Frameworks?


Learning Objectives: Learn the basics of Scala that are required for programming Spark applications, along with basic Scala constructs such as variable types, control structures, and collections like Array, ArrayBuffer, Map, and List.

Topics:

  • What is Scala?
  • Why Scala for Spark?
  • Scala in Other Frameworks
  • Introduction to the Scala REPL
  • Basic Scala Operations
  • Variable Types in Scala
  • Control Structures in Scala
  • foreach Loops, Functions, and Procedures
  • Collections in Scala: Array
  • ArrayBuffer, Map, Tuples, Lists, and more

Hands-on:

Scala REPL Detailed Demo
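
To give a feel for these constructs before class, here is a minimal sketch you can paste into the Scala REPL or run as an application; all names and values are illustrative, not part of the courseware.

```scala
object ScalaBasics {
  def main(args: Array[String]): Unit = {
    // Variable types: val is immutable, var is mutable
    val greeting: String = "Hello, Spark"
    var counter: Int = 0

    // Control structures: if/else is an expression in Scala
    val parity = if (counter % 2 == 0) "even" else "odd"
    println(s"$greeting: counter is $parity")

    // Collections: Array, ArrayBuffer, Map, Tuple, List
    val numbers = Array(1, 2, 3)
    val buffer  = scala.collection.mutable.ArrayBuffer(1, 2, 3)
    buffer += 4
    val ages  = Map("alice" -> 30, "bob" -> 25)
    val pair  = ("spark", 2014)          // a Tuple2
    val langs = List("Scala", "Java", "Python")

    // foreach loop over a collection
    langs.foreach(lang => println(s"Spark supports $lang"))

    // Functions (return a value) vs procedures (return Unit)
    def square(x: Int): Int = x * x
    def logIt(msg: String): Unit = println(msg)
    logIt(s"square(5) = ${square(5)}, ages = $ages, pair = $pair")
  }
}
```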

Learning Objectives: Learn about object-oriented programming and functional programming techniques in Scala.

Topics

  • Variables in Scala
  • Methods, classes, and objects in Scala               
  • Packages and package objects               
  • Traits and trait linearization                     
  • Java Interoperability                   
  • Introduction to functional programming                            
  • Functional Scala for data scientists
  • Why functional programming and Scala are important for learning Spark
  • Pure functions and higher-order functions                       
  • Using higher-order functions                  
  • Error handling in functional Scala                           
  • Functional programming and data mutability   

Hands-on:

OOP Concepts and Functional Programming
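
The following short sketch illustrates the OOP and functional ideas listed above: traits, case classes, pure and higher-order functions, and Option-based error handling. All names here are made up for the example.

```scala
object OopFpDemo {
  trait Greeter { def greet(name: String): String }

  // A case class mixing in a trait; trait linearization decides which
  // implementation wins when several traits are mixed together.
  case class FriendlyGreeter(prefix: String) extends Greeter {
    def greet(name: String): String = s"$prefix, $name!"
  }

  // A pure function: output depends only on input, no side effects
  def double(x: Int): Int = x * 2

  // A higher-order function: takes another function as a parameter
  def applyTwice(f: Int => Int, x: Int): Int = f(f(x))

  // Functional error handling: Option instead of null or exceptions
  def safeDivide(a: Int, b: Int): Option[Int] =
    if (b == 0) None else Some(a / b)

  def main(args: Array[String]): Unit = {
    println(FriendlyGreeter("Hello").greet("Spark"))
    println(applyTwice(double, 3))            // 12
    println(safeDivide(10, 2).getOrElse(-1))  // 5
    println(safeDivide(10, 0).getOrElse(-1))  // -1 (no exception thrown)
  }
}
```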

Learning Objectives: Learn about the Scala collection APIs, types and hierarchies. Also, learn about performance characteristics.

Topics

  • Scala collection APIs
  • Types and hierarchies                
  • Performance characteristics                    
  • Java interoperability                   
  • Using Scala implicits
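
As a quick illustration of the collection APIs and implicits covered here, the sketch below adds a method to Int with an implicit class and contrasts the performance profiles of List and Vector; it assumes the Scala 2.x syntax commonly used with Spark.

```scala
object CollectionsDemo {
  // An implicit class adds methods to an existing type
  implicit class EvenOps(private val n: Int) extends AnyVal {
    def isEven: Boolean = n % 2 == 0
  }

  def main(args: Array[String]): Unit = {
    val xs = List(1, 2, 3, 4, 5)

    // One shared API across the collection hierarchy (Seq, Set, Map)
    println(xs.map(_ * 2))            // List(2, 4, 6, 8, 10)
    println(xs.filter(_.isEven))      // uses the implicit class above
    println(xs.foldLeft(0)(_ + _))    // 15

    // Performance characteristics differ by concrete type:
    // List has O(1) prepend; Vector has effectively constant indexing
    val v = Vector(1, 2, 3)
    println(0 +: xs)                  // cheap prepend on a List
    println(v(2))                     // fast random access on a Vector
  }
}
```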

Learning Objectives: Understand Apache Spark and learn how to develop Spark applications.

Topics:

  • Introduction to data analytics
  • Introduction to big data                            
  • Distributed computing using Apache Hadoop                  
  • Introducing Apache Spark                        
  • Apache Spark installation                         
  • Spark Applications                       
  • The Backbone of Spark: RDD
  • Loading Data                  
  • What is a Lambda?
  • Using the Spark shell                  
  • Actions and Transformations                  
  • Associative Property                  
  • Implant on Data                            
  • Persistence                    
  • Caching                            
  • Loading and Saving data               

Hands-on:

  • Building and Running Spark Applications
  • Spark Application Web UI
  • Configuring Spark Properties
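
A minimal Spark application of the kind built in these exercises might look like the sketch below; the app name, master URL, and file path are placeholders, not courseware values.

```scala
import org.apache.spark.sql.SparkSession

object FirstSparkApp {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("FirstSparkApp")
      .master("local[*]")            // run locally using all cores
      .getOrCreate()
    val sc = spark.sparkContext

    // Load data into an RDD, apply a transformation, then an action
    val lines   = sc.textFile("data/input.txt")  // placeholder path
    val lengths = lines.map(_.length)            // lazy transformation
    lengths.persist()                            // cache for reuse
    println(s"Total characters: ${lengths.sum()}") // action triggers work

    spark.stop()
  }
}
```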

Learning Objectives: Get insight into Spark RDDs and the RDD manipulations used to implement business logic (transformations, actions, and functions performed on RDDs).

Topics

  • Challenges in Existing Computing Methods
  • Probable Solution & How RDD Solves the Problem                       
  • What is an RDD? Its Operations, Transformations & Actions
  • Data Loading and Saving Through RDDs              
  • Key-Value Pair RDDs                   
  • Other Pair RDDs, Two Pair RDDs                            
  • RDD Lineage                   
  • RDD Persistence                           
  • WordCount Program Using RDD Concepts                        
  • RDD Partitioning & How It Helps Achieve Parallelization              
  • Passing Functions to Spark           

Hands-on:

  • Loading data in RDD
  • Saving data through RDDs
  • RDD Transformations
  • RDD Actions and Functions
  • RDD Partitions
  • WordCount through RDDs
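
The WordCount program referenced above can be sketched with RDD transformations and actions as follows; the input and output paths are placeholders.

```scala
import org.apache.spark.{SparkConf, SparkContext}

object WordCount {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("WordCount").setMaster("local[*]")
    val sc   = new SparkContext(conf)

    val counts = sc.textFile("data/input.txt")   // load data into an RDD
      .flatMap(_.split("\\s+"))                  // transformation: lines to words
      .filter(_.nonEmpty)
      .map(word => (word, 1))                    // key-value pair RDD
      .reduceByKey(_ + _)                        // aggregate counts per key

    counts.take(10).foreach(println)             // action: materialize results
    counts.saveAsTextFile("data/wordcounts")     // saving data through RDDs
    sc.stop()
  }
}
```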

Learning Objectives: Learn about Spark SQL, which is used to process structured data with SQL queries, and about DataFrames and Datasets in Spark SQL along with the different kinds of SQL operations performed on DataFrames. Also, learn about Spark-Hive integration.

Topics

  • Need for Spark SQL
  • What is Spark SQL?                      
  • Spark SQL Architecture              
  • SQL Context in Spark SQL                         
  • User Defined Functions                            
  • DataFrames & Datasets
  • Interoperating with RDDs                         
  • JSON and Parquet File Formats              
  • Loading Data through Different Sources                            
  • Spark – Hive Integration       

Hands-on:

  • Spark SQL – Creating Data Frames
  • Loading and Transforming Data through Different Sources
  • Spark-Hive Integration
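
As an illustration, the sketch below creates a DataFrame from a JSON source, queries it with SQL, registers a user-defined function, and writes Parquet; the file paths are placeholders, and Hive support is shown only as an optional builder flag.

```scala
import org.apache.spark.sql.SparkSession

object SparkSqlDemo {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("SparkSqlDemo")
      .master("local[*]")
      // .enableHiveSupport()  // uncomment for Spark-Hive integration
      .getOrCreate()

    // Load a DataFrame from a JSON source (placeholder path)
    val people = spark.read.json("data/people.json")
    people.printSchema()

    // Register a temporary view and query it with SQL
    people.createOrReplaceTempView("people")
    val adults = spark.sql("SELECT name, age FROM people WHERE age >= 18")
    adults.show()

    // A user-defined function, callable from SQL
    spark.udf.register("shout", (s: String) => s.toUpperCase)
    spark.sql("SELECT shout(name) FROM people").show()

    // Save in the Parquet file format
    adults.write.mode("overwrite").parquet("data/adults.parquet")
    spark.stop()
  }
}
```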

Learning Objectives: Learn why machine learning is needed, the different machine learning techniques and algorithms, and Spark MLlib.

Topics

  • Why Machine Learning?
  • What is Machine Learning?                      
  • Where is Machine Learning Used?
  • Different Types of Machine Learning Techniques                          
  • Introduction to MLlib                 
  • Features of MLlib and MLlib Tools                        
  • Various ML algorithms supported by MLlib
  • Optimization Techniques    

Learning Objectives: Implement various algorithms supported by MLlib, such as Linear Regression, Decision Tree, Random Forest, and so on.

Topics

  • Supervised Learning - Linear Regression, Logistic Regression, Decision Tree, Random Forest
  • Unsupervised Learning - K-Means Clustering

Hands-on:

  • Machine Learning MLlib
  • K- Means Clustering
  • Linear Regression
  • Logistic Regression
  • Decision Tree
  • Random Forest
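
As a taste of these exercises, here is a hedged sketch of Linear Regression with Spark's DataFrame-based ML API; the training rows are made-up toy data, not course material.

```scala
import org.apache.spark.ml.linalg.Vectors
import org.apache.spark.ml.regression.LinearRegression
import org.apache.spark.sql.SparkSession

object LinearRegressionDemo {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("LinearRegressionDemo").master("local[*]").getOrCreate()
    import spark.implicits._

    // Toy training data: label is roughly 2 * feature
    val training = Seq(
      (1.0, Vectors.dense(0.5)),
      (2.0, Vectors.dense(1.0)),
      (4.0, Vectors.dense(2.0)),
      (6.0, Vectors.dense(3.0))
    ).toDF("label", "features")

    // Fit the model and inspect the learned parameters
    val lr    = new LinearRegression().setMaxIter(10)
    val model = lr.fit(training)
    println(s"Coefficients: ${model.coefficients}, Intercept: ${model.intercept}")

    spark.stop()
  }
}
```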

Learning Objectives: Understand Kafka and its architecture. Learn about Kafka clusters and how to configure different types of Kafka clusters. Get introduced to Apache Flume, its architecture, and how it is integrated with Apache Kafka for event processing. Finally, learn how to ingest streaming data using Flume.

Topics

  • Need for Kafka
  • What is Kafka?              
  • Core Concepts of Kafka             
  • Kafka Architecture                      
  • Where is Kafka Used?                
  • Understanding the Components of Kafka Cluster                         
  • Configuring Kafka Cluster                         
  • Kafka Producer and Consumer Java API             
  • Need for Apache Flume
  • What is Apache Flume?             
  • Basic Flume Architecture                          
  • Flume Sources              
  • Flume Sinks                    
  • Flume Channels                            
  • Flume Configuration                   
  • Integrating Apache Flume and Apache Kafka     

Hands-on:    

  • Configuring Single Node Single Broker Cluster
  • Configuring Single Node Multi Broker Cluster
  • Producing and consuming messages
  • Flume Commands
  • Setting up Flume Agent
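
The sketch below shows the Kafka Producer Java API used from Scala against a single-node, single-broker cluster; the broker address and topic name ("test-topic") are placeholders.

```scala
import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

object KafkaProducerDemo {
  def main(args: Array[String]): Unit = {
    val props = new Properties()
    props.put("bootstrap.servers", "localhost:9092")   // single broker
    props.put("key.serializer",
      "org.apache.kafka.common.serialization.StringSerializer")
    props.put("value.serializer",
      "org.apache.kafka.common.serialization.StringSerializer")

    val producer = new KafkaProducer[String, String](props)
    try {
      // Produce a few messages to the placeholder topic
      for (i <- 1 to 5)
        producer.send(new ProducerRecord("test-topic", s"key-$i", s"message $i"))
    } finally {
      producer.close()   // flushes pending records before closing
    }
  }
}
```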

Learning Objectives: Learn about the different streaming data sources such as Kafka and Flume. Also learn to create a Spark Streaming application.

Topics

  • Apache Spark Streaming: Data Sources
  • Streaming Data Source Overview                         
  • Apache Flume and Apache Kafka Data Sources     

Hands-on:

Perform Twitter Sentiment Analysis Using Spark Streaming
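
A Spark Streaming application follows the same pattern regardless of source. The sketch below counts words over a socket stream in 5-second micro-batches; the host and port are placeholders, and Kafka and Flume sources plug in through their respective connector libraries.

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object StreamingWordCount {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("StreamingWordCount").setMaster("local[2]")
    val ssc  = new StreamingContext(conf, Seconds(5))  // 5-second micro-batches

    val lines  = ssc.socketTextStream("localhost", 9999)  // placeholder source
    val counts = lines.flatMap(_.split("\\s+"))
      .map(word => (word, 1))
      .reduceByKey(_ + _)
    counts.print()          // print each micro-batch's counts

    ssc.start()             // start receiving and processing
    ssc.awaitTermination()  // run until stopped
  }
}
```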

Learning Objectives: Learn the key concepts of Spark GraphX programming and operations, along with different GraphX algorithms and their implementations.

Topics

  • A brief introduction to graph theory
  • GraphX             
  • VertexRDD and EdgeRDD                         
  • Graph operators                          
  • Pregel API                       
  • PageRank      
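
To make these concepts concrete, the sketch below builds a tiny graph from vertex and edge RDDs and runs PageRank, which GraphX implements on top of the Pregel API; the vertices and edges are made-up toy data.

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.graphx.{Edge, Graph}

object GraphXDemo {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("GraphXDemo").setMaster("local[*]"))

    // Vertex source: (vertexId, attribute) pairs
    val vertices = sc.parallelize(Seq(
      (1L, "alice"), (2L, "bob"), (3L, "carol")))

    // Edge source: directed edges with an attribute
    val edges = sc.parallelize(Seq(
      Edge(1L, 2L, "follows"), Edge(2L, 3L, "follows"), Edge(3L, 1L, "follows")))

    val graph = Graph(vertices, edges)

    // Run PageRank until the given convergence tolerance
    val ranks = graph.pageRank(0.0001).vertices
    ranks.join(vertices).collect()
      .foreach { case (_, (rank, name)) => println(f"$name: $rank%.3f") }

    sc.stop()
  }
}
```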

Project

Adobe Analytics

Adobe Analytics processes billions of transactions a day across major web and mobile properties. In recent years they have modernised their batch processing stack by adopting new technologies such as Hadoop, MapReduce, and Spark. In this project we will see how Spark and Scala are useful in the refactoring process. Spark allows you to define arbitrarily complex processing pipelines without the need for external coordination. It also supports stateful streaming aggregations, and latency can be reduced by using micro-batches of seconds instead of minutes. With the help of Scala and Spark, a wide range of operations can be performed: batch processing, streaming, stateful aggregations and analytics, and ETL jobs, just to name a few.

Interactive Analytics

Among the most notable features of Apache Spark, alongside fog computing and IoT support, MLlib, and GraphX, is its ability to support interactive analysis. Unlike MapReduce, which is built for batch processing and backs SQL-on-Hadoop engines that are usually slow, Apache Spark processes data fast enough to run exploratory queries against live data without sampling, which makes analysis highly interactive. Spark provides an easy-to-learn API, is a strong tool for interactive data analysis, and is available in Scala. Structured Streaming is a newer feature that helps in web analytics by allowing customers to run user-friendly queries against live web-visitor data.

Personalizing news pages for Web visitors in Yahoo

Various Spark projects run at Yahoo for different applications. For personalizing news pages, Yahoo uses ML algorithms that run on Spark to figure out what individual users are interested in, and to categorize news stories as they arise and determine which types of users would be interested in reading them. To do this, Yahoo wrote a Spark ML algorithm in 120 lines of Scala. (Previously, its ML algorithm for news personalization was written in 15,000 lines of C++.) With just 30 minutes of training on a large, hundred-million-record data set, the Scala ML algorithm was ready for business.

Reviews on Our Popular Courses

The instructor was great. He was able to cater and pivot the course based on the attendee's discussion and was able to demonstrate and provide real-world examples.


Kate Johanson

Director
Attended Leading SAFe® 4.6 Certification workshop in March 2018

It was a very interactive learning session that enhanced my understanding of the subject!


Anup Kumar Dash

Project Manager
Attended Certified ScrumMaster®(CSM) workshop in November 2018

A well-organized workshop with a highly knowledgeable trainer. I highly recommend it as the best place to learn about Agile & Scrum and to prepare for the CSM certification.


Manash Hazarika

Sr. AQM
Attended Certified ScrumMaster®(CSM) workshop in September 2018

The trainer was excellent. The session was interactive with relevant materials. The training venue and other logistic arrangements were handled in a very professional way!


Samagata Das

Sr. Manager
Attended Certified ScrumMaster®(CSM) workshop in July 2018

FAQs

The Course

Apache Spark is one of the ‘trending’ courses right now. Its myriad advantages, including fast data processing, lower adoption costs, and easy compatibility with other platforms, have made it one of the fastest technologies to be adopted for Big Data analytics. And considering that the demand for data analysts is hitting the roof, pursuing a course in Apache Spark and Scala and making a career in data analytics could be a lucrative decision. We bring you a well-rounded Apache Spark and Scala online tutorial that will walk you through the fundamentals of this technology and its use in Big Data analytics. Through loads of exercises and hands-on tutorials, we’ll ensure that you are well versed in Spark and Scala.

You will:

  • Master the concepts of the Apache Spark framework, and its deployment methodologies
  • Understand Spark internals and RDDs, and use Spark’s API and Scala functions to create and transform RDDs
  • Master RDD combiners, Spark SQL, SparkContext, Spark Streaming, MLlib, and GraphX

The big data explosion has created huge avenues for data analysis and has made data analytics one of the most sought-after career options. There is a huge demand for developers and engineers who can use tools such as Scala and Spark to derive business insights. This course will prepare you with everything you need to know about Big Data while you gain practical experience with Scala and Spark. After completing our course, you will be proficient in Apache Spark development.

There are no restrictions, but participants will benefit from having basic computer knowledge.

Yes, KnowledgeHut offers this training online.

Your instructors are Apache Spark experts who have years of industry experience.

Finance Related

Any registration canceled within 48 hours of the initial registration will be refunded in full (please note that all cancellations incur a 5% deduction from the refunded amount due to transactional costs). Refunds will be processed within 30 days of receipt of the written refund request. Kindly go through our Refund Policy for more details: https://www.knowledgehut.com/refund-policy

KnowledgeHut offers a 100% money back guarantee if the candidate withdraws from the course right after the first session. To learn more about the 100% refund policy, visit our Refund Policy.

The Remote Experience

In an online classroom, students log in at the scheduled time to a live learning environment led by an instructor. You can interact, communicate, view and discuss presentations, and engage with learning resources while working in groups, all in an online setting. Our instructors use an extensive set of collaboration tools and techniques that improve your online training experience.

Minimum Requirements:

  • Operating system such as Mac OS X, Windows or Linux
  • A modern web browser such as Firefox or Chrome
  • Internet Connection

Have More Questions?