Search

Apache Spark Use Cases & Applications

Apache Spark was developed by a team at UC Berkeley in 2009. Since then, Apache Spark has seen a very high adoption rate from top-notch technology companies like Google, Facebook, Apple, Netflix etc. The demand has been ever increasing day by day. According to marketanalysis.com survey, the Apache Spark market worldwide will grow at a CAGR of 67% between 2019 and 2022. The Spark market revenue is zooming fast and may grow up $4.2 billion by 2022, with a cumulative market valued at $9.2 billion (2019 - 2022).As per Apache, “Apache Spark is a unified analytics engine for large-scale data processing”.Spark is a cluster computing framework, somewhat similar to MapReduce but has a lot more capabilities, features, speed and provides APIs for developers in many languages like Scala, Python, Java and R. It is also friendly for database developers as it provides Spark SQL which supports most of the ANSI SQL functionality. Spark also has out of the box support for Machine learning and Graph processing using components called MLlib and GraphX respectively. Spark also has support for streaming data using Spark Streaming.Spark is developed in Scala programming language. Though the majority of use cases of Spark uses HDFS as the underlying data file storage layer, it is not mandatory to use HDFS. It does work with a variety of other Data sources like Cassandra, MySQL, AWS S3 etc. Apache Spark also comes with its default resource manager which might be good enough for the development environment and small size cluster, but it also integrates very well with YARN and Mesos. Most of the production-grade and large clusters use YARN and Mesos as the resource manager.Features of SparkSpeed: According to Apache, Spark can run applications on Hadoop cluster up to 100 times faster in memory and up to 10 times faster on disk. Spark is able to achieve such a speed by overcoming the drawback of MapReduce which always writes to disk for all intermediate results. Spark does not need to write intermediate results to disk and can work in memory using DAG, lazy evaluation, RDDs and caching. Spark has a highly optimized execution engine which makes it so fast. Fault Tolerance: Spark’s optimized execution engine not only makes it fast but is also fault tolerant. It achieves this using abstraction layer called RDD (Resilient Distributed Datasets) in combination with DAG, which is built to handle failures of tasks or even node failures. Lazy Evaluation: Spark works on lazy evaluation technique. This means that the processing(transformations) on Spark RDD/Datasets are evaluated in a lazy manner, i.e. the output RDDs/datasets are not available after transformation will be available only when needed i.e. when any action is performed. The transformations are just part of the DAG which gets executed when action is called.Multiple Language Support: Spark provides support for multiple programming languages like Scala, Java, Python, R and also Spark SQL which is very similar to SQL.Reusability: Spark code once written for batch processing jobs can also be utilized for writing processed on Stream processing and it can be used to join historical batch data and stream data on the fly.Machine Learning: MLlib is a Machine Learning library of Spark. which is available out of the box for creating ML pipelines for data analysis and predictive analytics alsoGraph Processing: Apache Spark also has Graph processing logic. Using GraphX APIs which is again provided out of the box one can write graph processing and do graph-parallel computation.Stream Processing and Structured Streaming: Spark can be used for batch processing and also has the capability to cater to stream processing use case with micro batches. Spark Streaming comes with Spark and one does not need to use any other streaming tools or APIs. Spark streaming also supports Structure Streaming. Spark streaming also has in-built connectors for Apache Kafka which comes very handy while developing Streaming applications.Spark SQL: Spark has an amazing SQL support and has an in-built SQL optimizer. Spark SQL features are used heavily in warehouses to build ETL pipelines.Spark is being used in more than 1000 organizations who have built huge clusters for batch processing, stream processing, building warehouses, building data analytics engine and also predictive analytics platforms using many of the above features of Spark. Let’s look at some of the use cases in a few of these organizations.What are the different Apache Spark applications?Streaming Data: Streaming is basically unstructured data produced by different types of data sources. The data sources could be anything like log files generated while customers using mobile apps or web applications, social media contents like tweets, facebook posts, telemetry from connected devices or instrumentation in data centres. The streaming data is usually unbounded and is being processed as received from the data source.Then there is Structured streaming which works on the principle of polling data in intervals and then this interval data is processed and appended or updated to the unbounded result table.Apache Spark has a framework for both i.e. Spark Streaming to handle Streaming using micro batches and DStreams and Structured Streaming using Datasets and Data frames.Let us try to understand Spark Streaming from an example.Suppose a big retail chain company wants to get a real-time dashboard to keep a close eye on its inventory and operations. Using this dashboard the management should be able to track how many products are being purchased, shipped and delivered to customers.Spark Streaming can be an ideal fit here.The order management system pushes the order status to the queue(could be Kafka) from where Streaming process reads every minute and picks all the orders with their status. Then Spark engine processes these and emits the output status count. Spark streaming process runs like a daemon until it is killed or error is encountered.Machine learning:As defined by Arthur Samuel in 1959, “Machine Learning is the] field of study that gives computers the ability to learn without being explicitly programmed”. In 1997, Tom Mitchell gave a definition which is more specifically from an engineering perspective, “A computer program is said to learn from experience E with respect to some task T and some performance measure P, if its performance on T, as measured by P, improves with experience E.”. ML solves complex problems that could not be solved with just mathematical numerical methods or means. ML is not supposed to make perfect guesses. In ML’s domain, there is no such thing. Its goal is to make a prediction or make guesses which are good enough to be useful.MLlib is the Apache Spark’s scalable machine learning library. MLlib has multiple algorithms for Supervised and Unsupervised ML which can scale out on a cluster for classification, regression, clustering, collaborative filtering. MLlib interoperates with Python’s math/numerical analysis library NumPy and also with R’s libraries. Some of these algorithms are also applicable to streaming data. MLlib helps Spark provide sentiment analysis, customer segmentation and predictive intelligence.A very common use case of ML is text classification, say for categorising emails. An ML pipeline can be trained to classify emails by reading an Inbox. A typical ML pipeline looks like this. ML is a subject in itself so it is not possible to deep dive here.Fog computing: Fog Computing is another use case of Apache Spark. To understand Fog computing we need to understand IoT first. IoT basically connects all our devices so that they can communicate with each other and provide solutions to the users of those devices. This would mean huge amounts of data and current cloud computing may not be sufficient to cater to so much data transfer, data processing and online demand of customer’s request.Fog computing can be ideal here as it takes the work of processing to the devices on the edge of the network. This would need very low latency, parallel processing of ML and complex graph analytical algorithms, all of which are readily available in Apache spark out of the box and can be pick and choose as per the requirements of the processing. So it is expected that as IoT gains momentum Apache spark will be the leader in Fog computing.Event Detection:Apache Spark is increasingly used in event detection like credit card fraud detection, money laundering activities etc. Apache spark streaming along with MLlib and Apache Kafka forms the backbone of a fraud financial transaction detection.Credit card transactions of a cardholder can be captured over a period of time to categorize user’s spending habits. Models can be developed and trained to predict any anomaly in the card transaction and along with Spark streaming and Kafka in real time.Interactive Analysis:Spark’s one of the most popular features is its ability to provide users with interactive analytics. MapReduce does provide tools like Pig and Hive for interactive analysis, but they are too slow in most of the cases. But Spark is very fast and swift and that’s why it has gained so much ground in the interactive analysis.Spark interfaces with programming languages like R, Python, SQL and Scala which caters to a bigger set of developers and users for interactive analysis.Spark also came up with Structured Streaming in version 2.0 which can be used for interactive analysis with live data as well as join the live data with batch data output to get more insight into the data. Structured streaming in future has the potential to boost Web Analytics by allowing users to query user’s live web session. Even machine learning can be applied to live session data for more insights.Data Warehousing: Data warehousing is another function where Apache Spark has is getting tremendous traction. Due to an increasing volume of data day by day, the tradition ETL tools like Informatica along with RDBMS are not able to meet the SLAs as they are not able to scale horizontally. Spark along with Spark SQL is being used by many companies to migrate to Big Data based Warehouse which can scale horizontally as the load increases.With Spark, even the processing can be scaled horizontally by adding machines to the Spark engine cluster.These migrated applications embed the Spark engine and offer a web UI to allow users to create, run, test and deploy jobs interactively. Jobs are primarily written in native Spark SQL or other flavours of SQL. These Spark clusters have been able to scale to process many terabytes of data every day and the clusters can be hundreds to thousands of nodes.Companies using Apache SparkApache Spark at Alibaba:Alibaba is the world’s one of the biggest e-commerce players. Alibaba’s online shopping platform generates Petabytes of data as it has millions of users every day doing searches, shopping and placing orders. These user interactions are represented as complex graphs. The processing of these data points is done using Spark’s Machine learning component MLlib and then used to provide better user shopping experience by suggesting products based on choice, trending products, reviews etc.Apache Spark at MyFitnessPal:MyFitnessPal is one of the largest health and fitness lifestyle portals. It has over 80 million active users. The portal helps its users follow and achieve a healthy lifestyle by following a proper diet and fitness regime. The portal uses the data added by users about their food, exercise and lifestyles to identify the best quality food and effective exercise. Using Spark the portal is able to scan through the huge amount of structured and unstructured data and pull out best suggestions for its users.Apache Spark at TripAdvisor:TripAdvisor has a huge user base and generates a mammoth amount of data every day. It is one of the biggest names in the Travel and Tourism industry. It helps users plan their personal and official trips around the world. It uses Apache Spark to process petabytes of data from user interactions and destination details and gives recommendations on planning a perfect trip based on users choice and preferences. They help users identify best airlines, best prices on hotels and airlines, best places to eat, basically everything needed to plan any trip. It also ranks these places, hotels, airlines, restaurants based on user feedback and reviews. All this processing is done using Apache SparkApache Spark at Yahoo:Yahoo is known to have one of the biggest Hadoop Cluster and everyone is aware of Yahoo’s contribution to the development of Big Data system. Yahoo is also heavily using Apache Spark Machine learning capabilities to identify topics and news which users are interested in. This is similar to trending tweets or hashtags on Twitter or Facebook. Earlier these Machine Learning algo were developed in C/C++ with thousands of lines of code. While today with Spark and Scala/Pythons these algorithms can be implemented in few hundreds of lines of code. This is a big leap in turnover time as well as code understanding and maintenance. This has been made possible due to Spark to a great extent.Apache Spark Use casesFinance: Spark is used in Finance industry across different functional and technology domains.A typical use case is building a Data Warehouse for batch processing and daily reporting. The Spark data frames abstraction has been used as a generic ingestion platform capable of ingesting data from multiple sources of different formats.Financial services companies also use Apache Spark MLlib to create and train models for fraud detection. Some of the banks have started using Spark as a tool for classifying text in money transfers.Some of the companies use Apache spark as log collection, an analysis engine and detection engine.Let’s look at Spain's 2nd biggest bank BBVA use case where every money transfer a customer makes goes through an engine that infers a category from its textual description. This engine has been developed in Spark, mixes MLLib and own implementations, and is currently into production serving more than 5M customers daily.The challenges that the BBVA technology team faced while building this ML were many:They did not know the data source in advanceThey did not have a labelled setA fraction of texts is useless (detection rather than classification)Distribution of categories is imbalancedPrefer false negatives over false positivesVery short text, language not even syntactically correctThe engineers solved these problems using the Spark MLlib pipeline using some other NLP tools like word2vec.TF-IDF features + linear classifier (98% precision, 21% recallFurther tests with word2vec + Vector of Locally Aggregated Descriptors (VLAD)Implemented in Spark/Scala, using MLlib classesOwn classes implemented for Multi-class Logistic Regression, VLADScala dependency injection useful to quickly setup variants of the above stepsHealthCare:Healthcare industry is the newest in adopting advanced technologies like big data and machine learning to provide hi-tech facilities to their patients. Apache Spark is penetrating fast and is becoming the heartbeat in the latest Healthcare applications. Hospitals use these Spark enabled healthcare applications to analyze patients medical history to identify possible health issues based on history and learning.Also, healthcare produces massive amounts of data and to process so much of the data in quick time and provide insights based on that itself was a challenge which Spark solves with ease.Another very interesting problem in hospitals is when working with Operating Room(OR) scheduling within a hospital setting is that it is difficult to schedule and predict available OR block times. This leads to empty and unused operating rooms leading to longer waiting times for patients for their procedures.Let’s see a use case. For a basic surgical procedure, it costs around $15-20 per minute. So, OR is a scarce and valuable resource and it needs to be utilized carefully and optimally. OR efficiency differs depending on the OR staffing and allocation, not the workload. So the loss of efficiency means a loss for the patient. So time and management are the utmost importance here.Spark and MLlib solve the problem by developing a predictive model that would identify available OR time 2 weeks in advance, allows hospitals to confirm waitlist cases two weeks in advance instead of when blocks normally release 4 days out. This OR Scheduling can be done by getting the historical data and running then linear regression model with multiple variables.This model works because:Can coordinate waitlist scheduling logistics with physicians and patients within 2 weeks of surgery.Plan staff scheduling and resources so there are less last-minutes staffing issues for nursing and anaesthesiaUtilization metrics show where elective surgical schedule and level demand can be maximized.Retail: Big retail chains have this usual problem of optimising their supply chain to minimize cost and wastage, improve customer service and gain insights into customer’s shopping behaviour to serve better and in the process optimize their profit.To achieve these goals these retail companies have a lot of challenges like to keep the inventory up to date based on sales and also to predict sales and inventory during some promotional events and sale seasons. Also, they need to keep a track on customer’s orders transit and delivery. All these pose huge technical challenges. Apache Spark and MLlib is being used by a lot of these companies to capture real-time sales and invoice data, ingest it and then figure out the inventory. The technology can also be used to identify in real-time the order’s transit and delivery status. Spark MLlib analytics and predictive models are being used to predict sales during promotions and sale seasons to match the inventory and be ready for the event. The historical data on customer’s buying behaviour is also used to provide the customer with personalized suggestions and improve customer satisfaction. A lot of stores have started using sensors to get data on customer’s location within the store, their preferences, shopping behaviour, etc to provide on-the-spot suggestions and help to find, buy a product by sending messages, using displays etc.Travel: Airline customer segmentation is a challenging field to understand due to customer’s complex behaviour. Amadeus is one of the main IT solution providers in the airline industry. It has the resources and infrastructure to manage all the ticketing and booking data as well as understanding the Airline needs and market particularities. By combining different data sources produced by different airline systems, they have applied unsupervised machine learning techniques to improve our understanding of customer behaviour.Challenges in the airline industry are to understand the health of the business:Are any segments growing or shrinkingHow is the yield developingTune marketing to specific interests within segmentsOptimize product offers using fare structures and media offersTraditional approaches for segmentation were based on business intuition and manually crafter rules set. But these approaches have limitations and prejudices which can sometimes be negative for the business. On the contrary, the data-driven approach is resilient against turn-over, prejudices and market change.With a data-driven approach and using Spark and MLlib, the model is able to extract actionable insights on typical customer behaviour and intentions. Supervised and supervised learning using Spark MLlib techniques at scale are used to train models for prediction. These are then used to assist the customer in deploying the newfound insights into day-to-day operations.Media: Media companies Netflix, Hotstar etc are using Apache Spark at the heart of their technology engine to drive their business. When a user turns on Netflix, he is able to see his favourite content playing automatically. This is achieved through recommendation engines built on Machine learning algorithms and Spark MLlib. Netflix uses historical data from users content selection, trains its ML algorithms, tests it offline and then deploys it live and checks if it works in Production as well.Netflix has built an engine something called Time Travel using Apache Spark and other big data technologies to: Snapshot online services and use the snapshot data offline to generate features and share facts and features between experiments without calling live systems.If someone is interested in exploring the details of the use case, one can look at the below link:Energy: Apache Spark is spreading its roots everywhere. A common man not related to software industry may not realise it but there are applications running or extracting data from his home environment and processed in Spark to make his life better and easier. An example we will discuss below is the British Gas.British Gas is a 200-year-old company. Connected Homes is BG’s IoT “startup”. It is a leader in the UK’s connected home market. Connected Homes is trying to predict the usage consumption patterns of the electricity, gas at the homes and provide consumers with insights so they can smartly use their devices and reduce energy consumption and save energy and money. Connected homes use Apache Spark at the core of its Data Engineering and ML engine.The challenges are there are millions of electric and gas meters and the meters are read every 30 minutes.There are:Gas and electricity meter readingsThermostat temperature dataConnected boiler dataReal-time energy consumption dataIntroducing motion sensors, window and door sensors, etcApache Spark MLlib is used to apply machine learning to these data for disaggregation, similar home comparison and smart meters used in indirect algorithms for non-smart customers.The analytics engine is used to show customers how they have spent energy, what are their top 3 spends, how can they reduce their energy consumption by showing patterns from smart consumers and smart meters etc. This gives customers a lot of insight and educates customers on optimally using energy at their homes.Gaming:Online Gaming industry is another beneficiary of the Apache Spark technology.Riot Games uses Spark for Combating abusive language in chat in the team games. The challenges in online gaming are:1% of all players are consistently unsportsmanlike2% of all games infected by serious toxicityIn-Game Toxicity 95% of all serious toxicity comes from players who are otherwise sportsmanlikeTo solve this the game developers tried to predict the words used by the gamers in the context of the game or the scenario. They used the “Word2Vec” a neural model which has 256 dimensions embedding months of chat logs. Each word in the chat is document split in spaces and lowercase. The model was trained on NLP for acronyms, short forms, colloquial words etc and the deviations could be huge. The team built a model trained to predict bad/toxic language. The gaming company has 100+ million users every month and so the data is huge. They used Spark MLlib to train their models using different algorithms, one of them Logistic Regression Random Forest Gradient Boosted Trees. The results were impressive for them as they tuned their models for better precision.Benefits of having Apache Spark for Individual companiesMany of the companies across industries have been benefiting from Apache Spark. The reasons could be different:Speed of executionMulti-language supportMachine learning libraryGraph processing libraryBatch processing as well as Stream & Structured stream processingApache Spark is beneficial for small as well as large enterprise. Spark offers a complete solution to many of the common problems like ETL and warehousing, Stream data processing, common use case of supervised and unsupervised learning for data analytics and predictive modelling. So with Apache Spark, the technology team does not require to look out for different technology stack and multiple vendors for a solution. This reduces the learning curve for additional development and maintenance. Also, since Spark has support for multiple languages Scala, Java, Python & R, it is easy to find developers.Limitations:Though there are so many benefits of Apache Spark as we have seen above, there are few limitations which Apache Spark has. We should be aware of these limitations before we decide to adopt any technology.Apache Spark does not come with an inbuilt file system and it has to depend on HDFS in most of the use cases. If not, it has to be used with some cloud-based data platform.Even though Spark has Stream processing feature it is not exactly real-time processing. It processes in batches which are called micro-batches.Apache Spark is expensive as it catches a lot of data and memory is not cheap.Spark faces issues while working with HDFS which has a very large number of small files.Though Spark MLlib provides machine learning capabilities, it does not come with a very exhaustive list of algorithms. It can solve a lot of ML problems but not all.Apache Spark does not have automatic code optimization process in place and so the code needs to be optimized manually.Reasons why you should learn Apache SparkWe have seen the wide impact and use cases if Apache Spark. So we know that Spark has become a buzzword these days. We should now also understand why we should learn Spark.Spark offers a complete package for developers and can act as a unified analytics engine. Hence it increases the productivity for the developers. So it ROI for any firm is high and most of the companies dependent on technology are aware of the fact and also willing to put their money in Spark.Learning Spark can help explore the world of Big data and data science. Both these technology fields are the future and bringing transformational changes in almost all industries. So getting exposed to Spark is becoming a necessity for all firms.With fast-paced Spark adoption by organizations, it is opening up new prospects in business and many of the applications have proved that instead of business driving technology it is becoming vice versa now, that technology is driving business.According to a survey, there is a huge demand for Spark engineers. Today, there are well over 1,000 contributors to the Apache Spark project across 250+ companies worldwide. Recently, Indeed.com listed over 2,400 full-time open positions for Apache Spark professionals across various industries including enterprise technology, e-commerce/retail, healthcare, and life sciences, oil and gas, manufacturing, and more.Apache Spark developers earn the highest average salary among all other programmers. So this is another and one of the major incentives one can get to learn and expertise Spark.One can look at an old survey by Databricks to understand the importance and impact of Apache Spark by 2016. By 2019 these number would have grown much bigger.ConclusionApache Spark has capabilities to process huge amount of data in a very efficient manner with high throughput. It can solve problems related to batch processing, near real-time processing, can be used to apply lambda architecture, can be used for Structured streaming. Also, it can solve many of the complex data analytics and predictive analytics problems with the help of the MLlib component which comes out of the box. Apache Spark has been making a big impact on the whole data engineering and data science gamut at scale.
Rated 4.5/5 based on 1 customer reviews

Apache Spark Use Cases & Applications

7K
  • by Nitin Kumar
  • 04th Jun, 2019
  • Last updated on 18th Jun, 2019
  • 8 mins read
Apache Spark Use Cases & Applications

Apache Spark was developed by a team at UC Berkeley in 2009. Since then, Apache Spark has seen a very high adoption rate from top-notch technology companies like Google, Facebook, Apple, Netflix etc. The demand has been ever increasing day by day. According to marketanalysis.com survey, the Apache Spark market worldwide will grow at a CAGR of 67% between 2019 and 2022. The Spark market revenue is zooming fast and may grow up $4.2 billion by 2022, with a cumulative market valued at $9.2 billion (2019 - 2022).

As per Apache, “Apache Spark is a unified analytics engine for large-scale data processing”.

Spark is a cluster computing framework, somewhat similar to MapReduce but has a lot more capabilities, features, speed and provides APIs for developers in many languages like Scala, Python, Java and R. It is also friendly for database developers as it provides Spark SQL which supports most of the ANSI SQL functionality. Spark also has out of the box support for Machine learning and Graph processing using components called MLlib and GraphX respectively. Spark also has support for streaming data using Spark Streaming.

Spark is developed in Scala programming language. Though the majority of use cases of Spark uses HDFS as the underlying data file storage layer, it is not mandatory to use HDFS. It does work with a variety of other Data sources like Cassandra, MySQL, AWS S3 etc. Apache Spark also comes with its default resource manager which might be good enough for the development environment and small size cluster, but it also integrates very well with YARN and Mesos. Most of the production-grade and large clusters use YARN and Mesos as the resource manager.

Features of SparkFeatures of Spark

  1. SpeedAccording to Apache, Spark can run applications on Hadoop cluster up to 100 times faster in memory and up to 10 times faster on disk. Spark is able to achieve such a speed by overcoming the drawback of MapReduce which always writes to disk for all intermediate results. Spark does not need to write intermediate results to disk and can work in memory using DAG, lazy evaluation, RDDs and caching. Spark has a highly optimized execution engine which makes it so fast.
  2.  Fault Tolerance: Spark’s optimized execution engine not only makes it fast but is also fault tolerant. It achieves this using abstraction layer called RDD (Resilient Distributed Datasets) in combination with DAG, which is built to handle failures of tasks or even node failures.
  3.  Lazy Evaluation: Spark works on lazy evaluation technique. This means that the processing(transformations) on Spark RDD/Datasets are evaluated in a lazy manner, i.e. the output RDDs/datasets are not available after transformation will be available only when needed i.e. when any action is performed. The transformations are just part of the DAG which gets executed when action is called.
  4. Multiple Language Support: Spark provides support for multiple programming languages like Scala, Java, Python, R and also Spark SQL which is very similar to SQL.
  5. Reusability: Spark code once written for batch processing jobs can also be utilized for writing processed on Stream processing and it can be used to join historical batch data and stream data on the fly.
  6. Machine Learning: MLlib is a Machine Learning library of Spark. which is available out of the box for creating ML pipelines for data analysis and predictive analytics also
  7. Graph Processing: Apache Spark also has Graph processing logic. Using GraphX APIs which is again provided out of the box one can write graph processing and do graph-parallel computation.
  8. Stream Processing and Structured Streaming: Spark can be used for batch processing and also has the capability to cater to stream processing use case with micro batches. Spark Streaming comes with Spark and one does not need to use any other streaming tools or APIs. Spark streaming also supports Structure Streaming. Spark streaming also has in-built connectors for Apache Kafka which comes very handy while developing Streaming applications.
  9. Spark SQL: Spark has an amazing SQL support and has an in-built SQL optimizer. Spark SQL features are used heavily in warehouses to build ETL pipelines.

Spark is being used in more than 1000 organizations who have built huge clusters for batch processing, stream processing, building warehouses, building data analytics engine and also predictive analytics platforms using many of the above features of Spark. Let’s look at some of the use cases in a few of these organizations.

What are the different Apache Spark applications?

Streaming Data: 

Streaming is basically unstructured data produced by different types of data sources. The data sources could be anything like log files generated while customers using mobile apps or web applications, social media contents like tweets, facebook posts, telemetry from connected devices or instrumentation in data centres. The streaming data is usually unbounded and is being processed as received from the data source.

Then there is Structured streaming which works on the principle of polling data in intervals and then this interval data is processed and appended or updated to the unbounded result table.

Apache Spark has a framework for both i.e. Spark Streaming to handle Streaming using micro batches and DStreams and Structured Streaming using Datasets and Data frames.

Let us try to understand Spark Streaming from an example.

Suppose a big retail chain company wants to get a real-time dashboard to keep a close eye on its inventory and operations. Using this dashboard the management should be able to track how many products are being purchased, shipped and delivered to customers.

Spark Streaming can be an ideal fit here.

Streaming Data for Apache

The order management system pushes the order status to the queue(could be Kafka) from where Streaming process reads every minute and picks all the orders with their status. Then Spark engine processes these and emits the output status count. Spark streaming process runs like a daemon until it is killed or error is encountered.

Machine learning:

As defined by Arthur Samuel in 1959, “Machine Learning is the] field of study that gives computers the ability to learn without being explicitly programmed”. In 1997, Tom Mitchell gave a definition which is more specifically from an engineering perspective, “A computer program is said to learn from experience E with respect to some task T and some performance measure P, if its performance on T, as measured by P, improves with experience E.”. ML solves complex problems that could not be solved with just mathematical numerical methods or means. ML is not supposed to make perfect guesses. In ML’s domain, there is no such thing. Its goal is to make a prediction or make guesses which are good enough to be useful.

MLlib is the Apache Spark’s scalable machine learning library. MLlib has multiple algorithms for Supervised and Unsupervised ML which can scale out on a cluster for classification, regression, clustering, collaborative filtering. MLlib interoperates with Python’s math/numerical analysis library NumPy and also with R’s libraries. Some of these algorithms are also applicable to streaming data. MLlib helps Spark provide sentiment analysis, customer segmentation and predictive intelligence.

A very common use case of ML is text classification, say for categorising emails. An ML pipeline can be trained to classify emails by reading an Inbox. A typical ML pipeline looks like this. ML is a subject in itself so it is not possible to deep dive here.

Machine learning application for Apache

Fog computing: 

Fog Computing is another use case of Apache Spark. To understand Fog computing we need to understand IoT first. IoT basically connects all our devices so that they can communicate with each other and provide solutions to the users of those devices. This would mean huge amounts of data and current cloud computing may not be sufficient to cater to so much data transfer, data processing and online demand of customer’s request.

Fog computing can be ideal here as it takes the work of processing to the devices on the edge of the network. This would need very low latency, parallel processing of ML and complex graph analytical algorithms, all of which are readily available in Apache spark out of the box and can be pick and choose as per the requirements of the processing. So it is expected that as IoT gains momentum Apache spark will be the leader in Fog computing.

  • Event Detection:Apache Spark is increasingly used in event detection like credit card fraud detection, money laundering activities etc. Apache spark streaming along with MLlib and Apache Kafka forms the backbone of a fraud financial transaction detection.
    Credit card transactions of a cardholder can be captured over a period of time to categorize user’s spending habits. Models can be developed and trained to predict any anomaly in the card transaction and along with Spark streaming and Kafka in real time.
  • Interactive Analysis:Spark’s one of the most popular features is its ability to provide users with interactive analytics. MapReduce does provide tools like Pig and Hive for interactive analysis, but they are too slow in most of the cases. But Spark is very fast and swift and that’s why it has gained so much ground in the interactive analysis.
    Spark interfaces with programming languages like R, Python, SQL and Scala which caters to a bigger set of developers and users for interactive analysis.Spark also came up with Structured Streaming in version 2.0 which can be used for interactive analysis with live data as well as join the live data with batch data output to get more insight into the data. Structured streaming in future has the potential to boost Web Analytics by allowing users to query user’s live web session. Even machine learning can be applied to live session data for more insights.
  • Data Warehousing: Data warehousing is another function where Apache Spark has is getting tremendous traction. Due to an increasing volume of data day by day, the tradition ETL tools like Informatica along with RDBMS are not able to meet the SLAs as they are not able to scale horizontally. Spark along with Spark SQL is being used by many companies to migrate to Big Data based Warehouse which can scale horizontally as the load increases.
    With Spark, even the processing can be scaled horizontally by adding machines to the Spark engine cluster.These migrated applications embed the Spark engine and offer a web UI to allow users to create, run, test and deploy jobs interactively. Jobs are primarily written in native Spark SQL or other flavours of SQL. These Spark clusters have been able to scale to process many terabytes of data every day and the clusters can be hundreds to thousands of nodes.

Companies using Apache Spark

Apache Spark at Alibaba:

Alibaba is the world’s one of the biggest e-commerce players. Alibaba’s online shopping platform generates Petabytes of data as it has millions of users every day doing searches, shopping and placing orders. These user interactions are represented as complex graphs. The processing of these data points is done using Spark’s Machine learning component MLlib and then used to provide better user shopping experience by suggesting products based on choice, trending products, reviews etc.

Apache Spark at MyFitnessPal:

MyFitnessPal is one of the largest health and fitness lifestyle portals. It has over 80 million active users. The portal helps its users follow and achieve a healthy lifestyle by following a proper diet and fitness regime. The portal uses the data added by users about their food, exercise and lifestyles to identify the best quality food and effective exercise. Using Spark the portal is able to scan through the huge amount of structured and unstructured data and pull out best suggestions for its users.

Apache Spark at TripAdvisor:

TripAdvisor has a huge user base and generates a mammoth amount of data every day. It is one of the biggest names in the Travel and Tourism industry. It helps users plan their personal and official trips around the world. It uses Apache Spark to process petabytes of data from user interactions and destination details and gives recommendations on planning a perfect trip based on users choice and preferences. They help users identify best airlines, best prices on hotels and airlines, best places to eat, basically everything needed to plan any trip. It also ranks these places, hotels, airlines, restaurants based on user feedback and reviews. All this processing is done using Apache Spark

Apache Spark at Yahoo:

Yahoo is known to have one of the biggest Hadoop Cluster and everyone is aware of Yahoo’s contribution to the development of Big Data system. Yahoo is also heavily using Apache Spark Machine learning capabilities to identify topics and news which users are interested in. This is similar to trending tweets or hashtags on Twitter or Facebook. Earlier these Machine Learning algo were developed in C/C++ with thousands of lines of code. While today with Spark and Scala/Pythons these algorithms can be implemented in few hundreds of lines of code. This is a big leap in turnover time as well as code understanding and maintenance. This has been made possible due to Spark to a great extent.

Apache Spark Use cases

Finance

Spark is used in Finance industry across different functional and technology domains.

A typical use case is building a Data Warehouse for batch processing and daily reporting. The Spark data frames abstraction has been used as a generic ingestion platform capable of ingesting data from multiple sources of different formats.

Financial services companies also use Apache Spark MLlib to create and train models for fraud detection. Some of the banks have started using Spark as a tool for classifying text in money transfers.

Some of the companies use Apache spark as log collection, an analysis engine and detection engine.

Let’s look at Spain's 2nd biggest bank BBVA use case where every money transfer a customer makes goes through an engine that infers a category from its textual description. This engine has been developed in Spark, mixes MLLib and own implementations, and is currently into production serving more than 5M customers daily.

Finance for Apache Spark

The challenges that the BBVA technology team faced while building this ML were many:

  • They did not know the data source in advance
  • They did not have a labelled set
  • A fraction of texts is useless (detection rather than classification)
  • Distribution of categories is imbalanced
  • Prefer false negatives over false positives
  • Very short text, language not even syntactically correct

The engineers solved these problems using the Spark MLlib pipeline using some other NLP tools like word2vec.

  • TF-IDF features + linear classifier (98% precision, 21% recall
  • Further tests with word2vec + Vector of Locally Aggregated Descriptors (VLAD)
  • Implemented in Spark/Scala, using MLlib classes
  • Own classes implemented for Multi-class Logistic Regression, VLAD
  • Scala dependency injection useful to quickly setup variants of the above steps

NLP tools for Apache Spark

HealthCare:

Healthcare industry is the newest in adopting advanced technologies like big data and machine learning to provide hi-tech facilities to their patients. Apache Spark is penetrating fast and is becoming the heartbeat in the latest Healthcare applications. Hospitals use these Spark enabled healthcare applications to analyze patients medical history to identify possible health issues based on history and learning.

Also, healthcare produces massive amounts of data and to process so much of the data in quick time and provide insights based on that itself was a challenge which Spark solves with ease.

Another very interesting problem in hospitals is when working with Operating Room(OR) scheduling within a hospital setting is that it is difficult to schedule and predict available OR block times. This leads to empty and unused operating rooms leading to longer waiting times for patients for their procedures.

Let’s see a use case. For a basic surgical procedure, it costs around $15-20 per minute. So, OR is a scarce and valuable resource and it needs to be utilized carefully and optimally. OR efficiency differs depending on the OR staffing and allocation, not the workload. So the loss of efficiency means a loss for the patient. So time and management are the utmost importance here.

HealthCare of Apache Spark

Spark and MLlib solve the problem by developing a predictive model that would identify available OR time 2 weeks in advance, allows hospitals to confirm waitlist cases two weeks in advance instead of when blocks normally release 4 days out. This OR Scheduling can be done by getting the historical data and running then linear regression model with multiple variables.

This model works because:

  • Can coordinate waitlist scheduling logistics with physicians and patients within 2 weeks of surgery.
  • Plan staff scheduling and resources so there are less last-minutes staffing issues for nursing and anaesthesia
  • Utilization metrics show where elective surgical schedule and level demand can be maximized.

Retail: 

Big retail chains have this usual problem of optimising their supply chain to minimize cost and wastage, improve customer service and gain insights into customer’s shopping behaviour to serve better and in the process optimize their profit.

To achieve these goals these retail companies have a lot of challenges like to keep the inventory up to date based on sales and also to predict sales and inventory during some promotional events and sale seasons. Also, they need to keep a track on customer’s orders transit and delivery. All these pose huge technical challenges. Apache Spark and MLlib is being used by a lot of these companies to capture real-time sales and invoice data, ingest it and then figure out the inventory. The technology can also be used to identify in real-time the order’s transit and delivery status. Spark MLlib analytics and predictive models are being used to predict sales during promotions and sale seasons to match the inventory and be ready for the event. The historical data on customer’s buying behaviour is also used to provide the customer with personalized suggestions and improve customer satisfaction. A lot of stores have started using sensors to get data on customer’s location within the store, their preferences, shopping behaviour, etc to provide on-the-spot suggestions and help to find, buy a product by sending messages, using displays etc.

Apache Spark Retail

Travel: 

Airline customer segmentation is a challenging field to understand due to customer’s complex behaviour. Amadeus is one of the main IT solution providers in the airline industry. It has the resources and infrastructure to manage all the ticketing and booking data as well as understanding the Airline needs and market particularities. By combining different data sources produced by different airline systems, they have applied unsupervised machine learning techniques to improve our understanding of customer behaviour.

Challenges in the airline industry are to understand the health of the business:

  • Are any segments growing or shrinking
  • How is the yield developing
  • Tune marketing to specific interests within segments
  • Optimize product offers using fare structures and media offers

Traditional approaches for segmentation were based on business intuition and manually crafter rules set. But these approaches have limitations and prejudices which can sometimes be negative for the business. On the contrary, the data-driven approach is resilient against turn-over, prejudices and market change.

Data Driven Methodology for Apache Spark

With a data-driven approach and using Spark and MLlib, the model is able to extract actionable insights on typical customer behaviour and intentions. Supervised and supervised learning using Spark MLlib techniques at scale are used to train models for prediction. These are then used to assist the customer in deploying the newfound insights into day-to-day operations.

Media: 

Media companies Netflix, Hotstar etc are using Apache Spark at the heart of their technology engine to drive their business. When a user turns on Netflix, he is able to see his favourite content playing automatically. This is achieved through recommendation engines built on Machine learning algorithms and Spark MLlib. Netflix uses historical data from users content selection, trains its ML algorithms, tests it offline and then deploys it live and checks if it works in Production as well.

Netflix has built an engine something called Time Travel using Apache Spark and other big data technologies to: Snapshot online services and use the snapshot data offline to generate features and share facts and features between experiments without calling live systems.

If someone is interested in exploring the details of the use case, one can look at the below link:

Energy: 

Apache Spark is spreading its roots everywhere. A common man not related to software industry may not realise it but there are applications running or extracting data from his home environment and processed in Spark to make his life better and easier. An example we will discuss below is the British Gas.

British Gas is a 200-year-old company. Connected Homes is BG’s IoT “startup”. It is a leader in the UK’s connected home market. Connected Homes is trying to predict the usage consumption patterns of the electricity, gas at the homes and provide consumers with insights so they can smartly use their devices and reduce energy consumption and save energy and money. Connected homes use Apache Spark at the core of its Data Engineering and ML engine.

The challenges are there are millions of electric and gas meters and the meters are read every 30 minutes.

There are:

  • Gas and electricity meter readings
  • Thermostat temperature data
  • Connected boiler data
  • Real-time energy consumption data
  • Introducing motion sensors, window and door sensors, etc

Apache Spark MLlib is used to apply machine learning to these data for disaggregation, similar home comparison and smart meters used in indirect algorithms for non-smart customers.

The analytics engine is used to show customers how they have spent energy, what are their top 3 spends, how can they reduce their energy consumption by showing patterns from smart consumers and smart meters etc. This gives customers a lot of insight and educates customers on optimally using energy at their homes.

Gaming:

Online Gaming industry is another beneficiary of the Apache Spark technology.

Riot Games uses Spark for Combating abusive language in chat in the team games. The challenges in online gaming are:

  • 1% of all players are consistently unsportsmanlike
  • 2% of all games infected by serious toxicity
  • In-Game Toxicity 95% of all serious toxicity comes from players who are otherwise sportsmanlike

To solve this the game developers tried to predict the words used by the gamers in the context of the game or the scenario. They used the “Word2Vec” a neural model which has 256 dimensions embedding months of chat logs. Each word in the chat is document split in spaces and lowercase. The model was trained on NLP for acronyms, short forms, colloquial words etc and the deviations could be huge. The team built a model trained to predict bad/toxic language. The gaming company has 100+ million users every month and so the data is huge. They used Spark MLlib to train their models using different algorithms, one of them Logistic Regression Random Forest Gradient Boosted Trees. The results were impressive for them as they tuned their models for better precision.

Benefits of having Apache Spark for Individual companies

Benefits of having Apache Spark for Individual companies

Many of the companies across industries have been benefiting from Apache Spark. The reasons could be different:

  • Speed of execution
  • Multi-language support
  • Machine learning library
  • Graph processing library
  • Batch processing as well as Stream & Structured stream processing

Apache Spark is beneficial for small as well as large enterprise. Spark offers a complete solution to many of the common problems like ETL and warehousing, Stream data processing, common use case of supervised and unsupervised learning for data analytics and predictive modelling. So with Apache Spark, the technology team does not require to look out for different technology stack and multiple vendors for a solution. This reduces the learning curve for additional development and maintenance. Also, since Spark has support for multiple languages Scala, Java, Python & R, it is easy to find developers.

Limitations:

Though there are so many benefits of Apache Spark as we have seen above, there are few limitations which Apache Spark has. We should be aware of these limitations before we decide to adopt any technology.

  • Apache Spark does not come with an inbuilt file system and it has to depend on HDFS in most of the use cases. If not, it has to be used with some cloud-based data platform.
  • Even though Spark has Stream processing feature it is not exactly real-time processing. It processes in batches which are called micro-batches.
  • Apache Spark is expensive as it catches a lot of data and memory is not cheap.
  • Spark faces issues while working with HDFS which has a very large number of small files.
  • Though Spark MLlib provides machine learning capabilities, it does not come with a very exhaustive list of algorithms. It can solve a lot of ML problems but not all.
  • Apache Spark does not have automatic code optimization process in place and so the code needs to be optimized manually.

Reasons why you should learn Apache Spark

We have seen the wide impact and use cases if Apache Spark. So we know that Spark has become a buzzword these days. We should now also understand why we should learn Spark.

  • Spark offers a complete package for developers and can act as a unified analytics engine. Hence it increases the productivity for the developers. So it ROI for any firm is high and most of the companies dependent on technology are aware of the fact and also willing to put their money in Spark.
  • Learning Spark can help explore the world of Big data and data science. Both these technology fields are the future and bringing transformational changes in almost all industries. So getting exposed to Spark is becoming a necessity for all firms.
  • With fast-paced Spark adoption by organizations, it is opening up new prospects in business and many of the applications have proved that instead of business driving technology it is becoming vice versa now, that technology is driving business.
  • According to a survey, there is a huge demand for Spark engineers. Today, there are well over 1,000 contributors to the Apache Spark project across 250+ companies worldwide. Recently, Indeed.com listed over 2,400 full-time open positions for Apache Spark professionals across various industries including enterprise technology, e-commerce/retail, healthcare, and life sciences, oil and gas, manufacturing, and more.
  • Apache Spark developers earn the highest average salary among all other programmers. So this is another and one of the major incentives one can get to learn and expertise Spark.

One can look at an old survey by Databricks to understand the importance and impact of Apache Spark by 2016. By 2019 these number would have grown much bigger.

Conclusion

Apache Spark has capabilities to process huge amount of data in a very efficient manner with high throughput. It can solve problems related to batch processing, near real-time processing, can be used to apply lambda architecture, can be used for Structured streaming. Also, it can solve many of the complex data analytics and predictive analytics problems with the help of the MLlib component which comes out of the box. Apache Spark has been making a big impact on the whole data engineering and data science gamut at scale.

Nitin

Nitin Kumar

Blog Author

I am an Alumni of IIT(ISM) Dhanbad. I have 15+ years of experience in Software industry working with Investment Banking and Financial Services domain. I have worked for Wall Street banks like Morgan Stanley and JP Morgan Chase. I have been working on Big Data technologies like hadoop,spark, cloudera for 3+ years.

Join the Discussion

Your email address will not be published. Required fields are marked *

1 comments

Navya venkey 18 Jun 2019

It’s really really great information for becoming a better Blogger. Keep sharing, Thanks

Suggested Blogs

Apache Spark Pros and Cons

Apache Spark:  The New ‘King’ of Big DataApache Spark is a lightning-fast unified analytics engine for big data and machine learning. It is the largest open-source project in data processing. Since its release, it has met the enterprise’s expectations in a better way in regards to querying, data processing and moreover generating analytics reports in a better and faster way. Internet substations like Yahoo, Netflix, and eBay, etc have used Spark at large scale. Apache Spark is considered as the future of Big Data Platform.Pros and Cons of Apache SparkApache SparkAdvantagesDisadvantagesSpeedNo automatic optimization processEase of UseFile Management SystemAdvanced AnalyticsFewer AlgorithmsDynamic in NatureSmall Files IssueMultilingualWindow CriteriaApache Spark is powerfulDoesn’t suit for a multi-user environmentIncreased access to Big data-Demand for Spark Developers-Apache Spark has transformed the world of Big Data. It is the most active big data tool reshaping the big data market. This open-source distributed computing platform offers more powerful advantages than any other proprietary solutions. The diverse advantages of Apache Spark make it a very attractive big data framework. Apache Spark has huge potential to contribute to the big data-related business in the industry. Let’s now have a look at some of the common benefits of Apache Spark:Benefits of Apache Spark:SpeedEase of UseAdvanced AnalyticsDynamic in NatureMultilingualApache Spark is powerfulIncreased access to Big dataDemand for Spark DevelopersOpen-source community1. Speed:When comes to Big Data, processing speed always matters. Apache Spark is wildly popular with data scientists because of its speed. Spark is 100x faster than Hadoop for large scale data processing. Apache Spark uses in-memory(RAM) computing system whereas Hadoop uses local memory space to store data. Spark can handle multiple petabytes of clustered data of more than 8000 nodes at a time. 2. Ease of Use:Apache Spark carries easy-to-use APIs for operating on large datasets. It offers over 80 high-level operators that make it easy to build parallel apps.The below pictorial representation will help you understand the importance of Apache Spark.3. Advanced Analytics:Spark not only supports ‘MAP’ and ‘reduce’. It also supports Machine learning (ML), Graph algorithms, Streaming data, SQL queries, etc.4. Dynamic in Nature:With Apache Spark, you can easily develop parallel applications. Spark offers you over 80 high-level operators.5. Multilingual:Apache Spark supports many languages for code writing such as Python, Java, Scala, etc.6. Apache Spark is powerful:Apache Spark can handle many analytics challenges because of its low-latency in-memory data processing capability. It has well-built libraries for graph analytics algorithms and machine learning.7. Increased access to Big data:Apache Spark is opening up various opportunities for big data and making As per the recent survey conducted by IBM’s announced that it will educate more than 1 million data engineers and data scientists on Apache Spark. 8. Demand for Spark Developers:Apache Spark not only benefits your organization but you as well. Spark developers are so in-demand that companies offering attractive benefits and providing flexible work timings just to hire experts skilled in Apache Spark. As per PayScale the average salary for  Data Engineer with Apache Spark skills is $100,362. For people who want to make a career in the big data, technology can learn Apache Spark. You will find various ways to bridge the skills gap for getting data-related jobs, but the best way is to take formal training which will provide you hands-on work experience and also learn through hands-on projects.9. Open-source community:The best thing about Apache Spark is, it has a massive Open-source community behind it. Apache Spark is Great, but it’s not perfect - How?Apache Spark is a lightning-fast cluster computer computing technology designed for fast computation and also being widely used by industries. But on the other side, it also has some ugly aspects. Here are some challenges related to Apache Spark that developers face when working on Big data with Apache Spark.Let’s read out the following limitations of Apache Spark in detail so that you can make an informed decision whether this platform will be the right choice for your upcoming big data project.No automatic optimization processFile Management SystemFewer AlgorithmsSmall Files IssueWindow CriteriaDoesn’t suit for a multi-user environment1. No automatic optimization process:In the case of Apache Spark, you need to optimize the code manually since it doesn’t have any automatic code optimization process. This will turn into a disadvantage when all the other technologies and platforms are moving towards automation.2. File Management System:Apache Spark doesn’t come with its own file management system. It depends on some other platforms like Hadoop or other cloud-based platforms.3. Fewer Algorithms:There are fewer algorithms present in the case of Apache Spark Machine Learning Spark MLlib. It lags behind in terms of a number of available algorithms.4. Small Files Issue:One more reason to blame Apache Spark is the issue with small files. Developers come across issues of small files when using Apache Spark along with Hadoop. Hadoop Distributed File System (HDFS) provides a limited number of large files instead of a large number of small files.5. Window Criteria:Data in Apache Spark divides into small batches of a predefined time interval. So Apache won't support record-based window criteria. Rather, it offers time-based window criteria.6. Doesn’t suit for a multi-user environment:Yes, Apache Spark doesn’t fit for a multi-user environment. It is not capable of handling more users concurrency.Conclusion:To sum up, in light of the good, the bad and the ugly, Spark is a conquering tool when we view it from outside. We have seen a drastic change in the performance and decrease in the failures across various projects executed in Spark. Many applications are being moved to Spark for the efficiency it offers to developers. Using Apache Spark can give any business a boost and help foster its growth. It is sure that you will also have a bright future!
Rated 4.5/5 based on 19 customer reviews
8596
Apache Spark Pros and Cons

Apache Spark:  The New ‘King’ of Big DataApac... Read More

Fundamentals of Apache Spark

IntroductionBefore getting into the fundamentals of Apache Spark, let’s understand What really is ‘Apache Spark’ is? Following is the authentic one-liner definition.Apache Spark is a fast and general-purpose, cluster computing system.One would find multiple definitions when you search the term Apache Spark. All of those give similar gist, just different words. Let’s understand these special keywords which describe Apache Spark. Fast: As spark uses in-memory computing it’s fast. It can run queries 100x faster. We will get to details of architecture later to understand this aspect better little later in the article. One would find the keywords ‘Fast’ and/or ‘In-memory’ in all the definitions. General Purpose: Apache spark is a unified framework. It provides one execution model for all tasks and hence very easy for developers to learn and they can work with multiple APIs easily. Spark offers over 80 high-level operators that make it easy to build parallel apps and one can use it interactively from the Scala, Python, R, and SQL shells.Spark powers a stack of libraries including SQL and DataFrames, MLlib for machine learning, GraphX, and Spark Streaming. You can combine these libraries seamlessly in the same application.Cluster Computing: Efficient processing of data on Set of computers (Refer commodity hardware here) or distributed systems. It’s also called a Parallel Data processing Engine in a few definitions. Spark is utilized for Big data analytics and related processing. One more important keyword associated with Spark is Open Source. It was open-sourced in 2010 under a   BSD license.Spark (and its RDD) was developed(earliest version as it’s seen today), in 2012, in response to limitations in the   MapReduce cluster computing paradigm. Spark is commonly seen as an in-memory replacement of MapReduce.Since its release, Apache Spark has seen rapid adoption due to its characteristics briefly discussed above.Who should go for Apache SparkBefore trying to find out whether Apache spark is for me? Or whether I have the right skill set, It's important to focus on the generality characteristic in further depth.Apache Spark consists of Spark Core and a set of libraries. The core is the distributed execution engine and the Java, Scala, and Python APIs offer a platform for distributed ETL application development. Additional libraries, built atop the core, allow diverse workloads for streaming, SQL, and machine learning.As Spark provides these multiple components, it’s evident that Spark is developed and widely utilized for big data and analytics.  Professionals who should learn Apache SparkIf one is aspiring to be landed into the following professions or anyone who has an interest in data and insights, Knowledge of spark will prove useful:Data ScientistsData EngineersPrerequisites of learning Apache SparkMost of the students looking for big data training, Apache spark is number one framework in big data. So most of the knowledge seekers looking for spark training, it is important to note that there are few prerequisites to learn apache spark.Before getting into Big data, you must have minimum knowledge on:Anyone of the programming languages >> Core   Python or Scala.Spark installations can be done on any platform but its framework is similar to Hadoop and hence having knowledge of HDFS and YARN is highly recommended. Having knowledge of Hive is an added advantage but is not mandatory.Basic knowledge of SQL. In SQL mainly select * from, joins and group by these three commands highly recommended.Optionally, knowing any cloud technology like AWS. Recommended for those who want to work with production-like environments.System requirements of Apache SparkOfficial site for  Apache Spark gives following recommendation (Traverse link for further details)Storage System: There are few ways to set this up as follows: Spark can run on the same node as HDFS. Spark standalone node cluster can be installed on the same nodes and configure Spark and Hadoop memory and CPU usage accordingly to avoid any interference.Or,1. Hadoop and Spark can execute on common Resource Manager ( Ex. Yarn etc)Or,2. Spark will be executing in same Local Area Network as HDFS but on separate nodes.Or3. If a requirement is a quick response and low latency from data stores then execute compute jobs on separate nodes than that of storage nodes.Local Disks: Typically 4-8 disks per node, configured without RAID.If underline OS is Linux then mount the disk with noatime option and in Spark environment configure spark.local.dir variable to be a comma-separated list of local disks.Note: For HDFS, it can be the same disk as HDFS.Memory: Minimum 8GB - 100s of GBs of memory per machine.A recommendation is the allocation of 75% of the memory to Spark.Network: 10GB or faster speed network.CPU cores: 8-16 Cores per machineHowever, for Training and Learning purpose and just to taste Spark, following are two available options: Run it locally Use AWS EMR (Or any cloud computing service)For learning purposes, minimum 4gb ram system with minimum 30gb disk may prove enough.History of Apache SparkSpark was primarily developed to Overcome the Limitations of MapReduce.Versioning: Spark initial version was version 0, version 1.6 is assumed to be a stable version and is being used in multiple commercial corporate projects. Version 2.3 is the latest available version. MapReduce is cluster computing  paradigm, which forces a particular linear  data flow structure on distributed programs: MapReduce programs read input data from disk,  map a function across the data,  reduce the results of the map, and store reduction results on disk. Due to multiple copies of data and multiple I/O as described, MapReduce takes lots of time to process the volume of data. MapReduce can do only batch time processing and is unsuitable for real-time data processingIt is unsuitable for trivial join like transformations. It’s unfit for large data on a network and also with OLTP data.Also, not suitable for graphics and interactive data.Spark overcomes all these limitations and able to do faster processing too on the local disk as well.Why Apache Spark?Numerous advantages of Spark have made its a market favorite.Let’s discuss one by one.Speed: Extends MapReduce Model to support computations like stream processing and interactive queries.Single Combination for processes and multiple tools:  Covers multiple workloads ( in a traditional system, it used to require different distributed systems), which makes combining different processing types and ease of tool management.Unification: Developers have to learn only one platform unlike multiple languages and tools in a traditional system.Support to different Resource Managers: Spark supports Hadoop HDFS system, and YARN for resource management but it’s not the only resource manager it supports. It works on MESOS and on any standalone scheduler like spark resource manager.Support for cutting-edge Innovation: Spark provides capabilities and support for an array of new-age technologies ranging from built-in machine learning libraries,   visualization tools, support for near processing (which was in a way the biggest challenge pre- spark era) and supports seamless integration with other deep learning frameworks like TensorFlow. This enables Spark to provide an innovative solution for new age use-cases.Spark can access diverse data sources and make sense of them all and hence it’s trending in the market over any other cluster computing software available. Who uses Apache SparkListing a few use cases of Apache spark below :1. Analytics - Spark can be very useful when building real-time analytics from a stream of incoming data.2. E-commerce - Information about the real-time transaction can be passed to streaming clustering algorithms like alternating least squares or K-means clustering algorithm. The results can be combined with data from other sources like social media profiles, product reviews on forums, customer comments, etc. to enhance the recommendations to customers based on new trends.Shopify: At Shopify, we underwrite credit card transactions, exposing us to the risk of losing money. We need to respond to risky events as they happen, and a traditional ETL pipeline just isn’t fast enough. Spark Streaming is an incredibly powerful real-time data processing framework based on Apache Spark. It allows you to process real-time streams like Apache Kafka using Python with incredible simplicity.Alibaba: Alibaba Taobao operates one of the world’s largest e-commerce platforms. We collect hundreds of petabytes of data on this platform and use Apache Spark to analyze these enormous amounts of data.3. Healthcare Industry –Healthcare has multiple use-cases of unstructured data to be processed in real-time. It has data ranging from image formats like scans etc to specific medical industry standards and wearable tracking devices. Many healthcare providers are keen on using spark for patient’s records to build 360 degrees view of the patient to do accurate diagnosis.MyFitnessPal: MyFitnessPal needed to deliver a new feature called “Verified Foods.” The feature demanded a faster pipeline to execute a number of highly sophisticated algorithms. Their legacy non-distributed Java-based data pipeline was slow, did not scale, and lacked flexibility.Here are a few other examples from industry leaders:Regeneron: Future of Drug Discovery with Genomics at Scale powered by SparkZeiss: Using Spark Structured Streaming for Predictive MaintenanceDevon Energy: Scaling Geographic Analytics with Spark GraphXYou can also learn more about use cases of Apache Spark  here.Career Benefits:Career Benefits of Spark for you as an individual:Apache Spark developers earn the highest average salary among all other programmers. According to its  2015 Data Science Salary Survey, O’Reilly found strong correlations between those who used Apache Spark and those who were paid more money. In one of its models, using Spark added more than $11,000 to the median salary.If you’re considering switching to this extremely in-demand career then taking up the  Apache Spark training will be an added advantage. Learning Spark will give you a steep competitive edge and can land you up in market best-paying jobs with top companies. Spark has gained enough adherents over the years to place it high on the list of fastest-growing skills; data scientists and sysadmins have evaluated the technology and clearly seen what they liked.  April’s Dice Report explored the fastest-growing technology skills, based on an analysis of job postings and data from Dice’s annual salary survey. The results are below; percentages are based on year-over-year growth in job postings:Benefits of Spark implementing Spark in your organization:Apache spark is now a decade older but still going strong. Due to lightning-fast processing and numerous other advantages discussed so far, Spark is still the first choice of many organizations.Spark is considered to be the most popular open-source project on the planet, with more than 1,000 contributors from 250-plus organizations, according to Databricks.ConclusionTo sum up, Spark helps to simplify the computationally intensive task of processing high volumes of real-time or batch data. It can seamlessly integrate with complex capabilities such as machine learning and graph algorithms. In short, Spark brings exclusive Big Data processing (which earlier was only for giant companies like Google) to the masses.Do let us know how your learning experience was, through comments below.Happy Learning!!!
Rated 4.5/5 based on 13 customer reviews
9785
Fundamentals of Apache Spark

IntroductionBefore getting into the fundamentals o... Read More

How Big Data Can Help You Understand Your Customers and Grow Your Business

What’s the main purpose of a marketing campaign for any business? You’re trying to convince the customers you offer exactly what they need. What do you do to get there? You find out what they need. This is where big data gets into the picture. Big data is a general term for all information that allows you to understand the purchasing decisions of your target consumers That’s not all. Big data also helps you create a sustainable budget, find the best way to manage your business, beat the competition, and create higher revenue. In essence, big data is all information that helps you grow your brand. The process of analyzing and successfully using that data is called big data analytics. Now that we got the definition out of the way, let’s get practical. We’ll help you realize how you can use big data to understand the behavior of your customers and grow your brand. Where Can You Find Big Data? This is the big question about big data: where do you find it? When you’re looking for data that you could immediately turn into useful information, you should start with the historical data of your business. This includes all information for your business you collected since it was formed. The earnings, revenues, stock price action… everything you have. That data is already available to you. You can use it to understand how your business worked under different circumstances. The US Census Bureau holds an enormous amount of data regarding US citizens. You can use the information about the population economy, and products to understand the behavior of your target consumers. gov is another great website to explore. It gives you data related to consumers, ecosystems, education, finance, energy, public safety, health, agriculture, manufacturing, and few other categories. Explore the field relevant to your business and you’ll find data you can use. This information is for US citizens. If you need a similar tool for the EU, you can explore the European Union Open Data Portal. Facebook’s Graph API gives you a huge amount of information about the users of the platform. How to Use Big Data to Your Brand’s Advantage Collecting big data is not that hard. Information is everywhere. However, the huge volume of information you collect might confuse you. For now, you might want to focus on the historical data for your business. That should be enough for you to understand the behavior of your customers. When you understand how the analytics work, you can start comparing your historical data with the information you get from governmental and social media sources. These are the main questions to ask when analyzing big data: The average amount your customers spend on a typical purchase. This information helps you understand their budget and spending habits. Did they spend more money on an average purchase when they used promotions? What’s the situation with conversion? How many of the social media followers follow a link and become actual customers? These rates help you determine the effect of your marketing campaign. When you understand it, you’ll be able to improve it. How many new customers did you attract through promotions? Did those activities help you increase the awareness for your brand? How much have you spent on marketing and sales to attract a single customer? Divide the total amount of expenses for promotional activities with the number of customers you attracted while the campaign lasted. You’ll get the acquisition cost of a single customer. If it’s too large, you’ll need to restructure your promotional activities. Compare historical data to identify the campaigns that were most and least successful in this aspect. What do your customers require in order to stay loyal to your brand? Do they ask for more support or communication? How satisfied are your customers with the products or services you offer? What’s the difference between the categories of happy and unhappy customers? When you determine the factors that make your customers happy, you’ll be able to expand on them. When you identify the things that lead to dissatisfaction, you’ll work on them. Every Business Benefits from Big Data You feel like you have to own a huge business to get interested about big data? That’s a misconception. It doesn’t matter how big your company is. You still have tons of data to analyze, and you can definitely benefit from it & bigdata solve problems easily Collect all data described above and compare it with the way your customers behaved in the past. Are you growing? If yes, why? If not, why? The key to understanding the behavior of your customers is to give this information a human face. Connect the numbers with the habits and spending behavior of your real customers. When you relate the data to actual human experience, you’ll be able to develop customers personas. You’ll increase the level of satisfaction your consumers get. When you do that, the growth of your business will be inevitable.
Rated 4.0/5 based on 20 customer reviews
How Big Data Can Help You Understand Your Customer...

What’s the main purpose of a marketing campaign ... Read More

20% Discount