Apache Spark Vs MapReduce

  • by Nitin Kumar
  • 09th May, 2019
  • Last updated on 18th Jul, 2019
  • 20 mins read

Why we need Big Data frameworks

Big data is primarily defined by the volume of a data set. Big data sets are generally huge – measuring tens of terabytes – and sometimes crossing the threshold of petabytes. It is surprising to know how much data is generated every minute. As estimated by DOMO:

Over 2.5 quintillion bytes of data are created every single day, and it’s only going to grow from there. By 2020, it’s estimated that 1.7MB of data will be created every second for every person on earth.

You can read DOMO's full report, including industry-specific breakdowns, here.

To store and process even a fraction of this amount of data, we need Big Data frameworks: traditional databases cannot store this much data, and traditional processing systems cannot process it quickly enough. Frameworks like Apache Spark and MapReduce come to the rescue, helping us draw deep insights from huge volumes of structured, unstructured and semi-structured data and make more sense of it.

Market Demands for Spark and MapReduce

Apache Spark was originally developed in 2009 at UC Berkeley by the team that later founded Databricks. Since its launch, Spark has seen rapid adoption and growth. Cutting-edge technology organizations like Netflix, Apple, Facebook and Uber run massive Spark clusters for data processing and analytics, and the demand for Spark is increasing at a very fast pace. According to a marketanalysis.com forecast, the global Apache Spark market will grow at a CAGR of 67% between 2019 and 2022. The global Spark market revenue is rapidly expanding and may reach $4.2 billion by 2022, with a cumulative market valued at $9.2 billion (2019 – 2022).

MapReduce has been around a little longer, having been developed in 2006, and it gained industry acceptance in its initial years. But over the last 5 years or so, with Apache Spark gaining more ground, demand for MapReduce as the processing engine has reduced. Still, it cannot be said in black and white that MapReduce will be completely replaced by Apache Spark in the coming years. Both technologies have their own pros and cons, as we will see below. One solution cannot fit every situation, so MapReduce will keep its own takers depending on the problem to be solved.

Also, Spark and MapReduce do complement each other on many occasions.

Both these technologies have made inroads into all walks of everyday life. Name the industry and they are there: telecommunication, e-commerce, banking, insurance, healthcare, medicine, agriculture, biotechnology, and so on.

What is Spark?

As per Apache, “Apache Spark is a unified analytics engine for large-scale data processing”. Apache Spark is a fast and general-purpose cluster computing system. It provides high-level APIs in Java, Scala, Python and R, and an optimized engine that supports general execution graphs. It also supports a rich set of higher-level tools including Spark SQL for SQL and structured data processing, MLlib for machine learning, GraphX for graph processing, and Spark Streaming.

Instead of just “map” and “reduce” functions, Spark defines a large set of operations called transformations and actions. Developers can combine these operations arbitrarily, and the Spark execution engine ultimately translates them into map/reduce-style stages that it optimizes for performance.


Spark is developed in Scala. It can run in standalone mode using its own default resource manager, or in cluster mode using the YARN or Mesos resource managers. Hadoop is not mandatory for Spark; it can also be used with S3 or Cassandra. In the majority of cases, though, Hadoop (HDFS) is the best fit as Spark’s data storage layer.
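
For reference, here is a minimal sketch of how a standalone Spark application creates its own SparkContext (outside the spark-shell, where sc is pre-created). The application name and file path are placeholders, and "local[*]" would be replaced by a YARN, Mesos, or standalone master URL on a real cluster:

import org.apache.spark.{SparkConf, SparkContext}

object SparkApp {
  def main(args: Array[String]): Unit = {
    // "local[*]" uses all local cores; on a cluster this would be "yarn",
    // "mesos://host:port" or "spark://host:port" (standalone master).
    val conf = new SparkConf().setAppName("SparkApp").setMaster("local[*]")
    val sc = new SparkContext(conf)

    val textFile = sc.textFile("data.txt")   // placeholder input path
    println(textFile.count())                // count() is an action and triggers execution

    sc.stop()
  }
}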

Features of Spark

Speed: Spark enables applications running on Hadoop to run up to 100x faster in memory and up to 10x faster on disk. It achieves this by keeping intermediate results in memory, minimising disk read/write operations and touching disk only when essential, and by using a DAG scheduler, a query optimizer and a highly optimized physical execution engine.

Fault Tolerance: Apache Spark achieves fault tolerance through its core abstraction, the RDD (Resilient Distributed Dataset), which is designed to handle worker node failures.

Lazy Evaluation: All processing (transformations) on Spark RDDs/Datasets is lazily evaluated, i.e. the output RDD/Dataset is not computed right away after a transformation, but only when an action is performed.

Dynamic nature: Spark offers over 80 high-level operators that make it easy to build parallel apps.

Multiple Language Support: Spark provides multiple programming language support and you can use it interactively from the Scala, Python, R, and SQL shells.

Reusability: Spark code can be reused for batch processing, for joining streaming data against historical data, and for running ad-hoc queries on streaming state.

Machine Learning: Apache Spark comes with out-of-the-box support for machine learning, called MLlib, which can be used for complex, predictive data analytics.

Graph Processing: GraphX is Apache Spark's API for graphs and graph-parallel computation. You can view the same data as both graphs and collections, transform and join graphs with RDDs efficiently, and write custom iterative graph algorithms using the Pregel API.

Real-Time Stream Processing: Spark Streaming brings Apache Spark's language-integrated API to stream processing, letting you write streaming jobs the same way you write batch jobs.

Where is Spark usually used?

Spark is used by 1000+ organizations in production. Many of these organizations are known to run Spark clusters of 1000+ nodes. In terms of data size, Spark has been shown to work well up to petabytes. It has been used to sort 100 TB of data 3x faster than Hadoop MapReduce (which sorted 100 TB in 23 min using 2100 machines) on 10x fewer machines, winning the 2014 Daytona GraySort Benchmark, as well as to sort 1 PB. Several production workloads use Spark to do ETL and data analysis on PBs of data. Below are some examples where Spark is used across industries:


AsiaInfo: Uses Spark Core, Streaming, MLlib and GraphX together with Hadoop to build cost-effective data centre solutions for customers in the telecom industry as well as other industrial sectors.

Atp: Predictive models and learning algorithms to improve the relevance of programmatic marketing.

Credit Karma: Creates personalized experiences using Spark

eBay Inc: Using Spark core for log transaction aggregation and analytics

Kelkoo: Using Spark Core, SQL, and Streaming. Product recommendations, BI and analytics, real-time malicious activity filtering, and data mining.

More examples can be found on Apache’s Powered By page.

Spark Example in Scala (Spark shell can be used for this)

// “sc” is a “Spark context” – this transforms the file into an RDD
val textFile = sc.textFile("data.txt")
// Return number of items (lines) in this RDD; count() is an action
textFile.count()
// Demo filtering.  Filter is a transform.  By itself this does no real work
val linesWithSpark = textFile.filter(line => line.contains("Spark"))
// Demo chaining – how many lines contain “Spark”?  count() is an action.
textFile.filter(line => line.contains("Spark")).count()
// Length of line with most words.  Reduce is an action.
textFile.map(line => line.split(" ").size).reduce((a, b) => if (a > b) a else b)
// Word count – traditional map-reduce.  collect() is an action
val wordCounts = textFile.flatMap(line => line.split(" ")).map(word => (word, 1)).reduceByKey((a, b) => a + b)
wordCounts.collect()

Sample Spark Transformations

map(func): Return a new distributed dataset formed by passing each element of the source through a function func.

filter(func): Return a new dataset formed by selecting those elements of the source on which func returns true.

union(otherDataset): Return a new dataset that contains the union of the elements in the source dataset and the argument.

Sample Spark Actions

reduce(func): Aggregate the elements of the dataset using a function func (which takes two arguments and returns one). The function should be commutative and associative so that it can be computed correctly in parallel.

collect(): Return all the elements of the dataset as an array at the driver program. This is usually useful after a filter or other operation that returns a sufficiently small subset of the data.

count(): Return the number of elements in the dataset.

These descriptions are taken from the Spark RDD Programming Guide.
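
A short spark-shell sketch tying these transformations and actions together (the two small RDDs below are made up for illustration):

// Two small RDDs built from local collections
val nums = sc.parallelize(Seq(1, 2, 3, 4, 5))
val more = sc.parallelize(Seq(6, 7, 8))

// Transformations: map, filter, union (lazy; nothing runs yet)
val squares = nums.map(x => x * x)
val evens   = squares.filter(x => x % 2 == 0)
val all     = squares.union(more)

// Actions: reduce, collect, count (these trigger execution)
val total = all.reduce((a, b) => a + b)
evens.collect()   // Array(4, 16)
all.count()       // 8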

What is MapReduce?

MapReduce is a programming model for processing and generating large data sets with a parallel, distributed algorithm on a cluster.

Programmers have been writing parallel programs for a long time in different languages like C++, Java, C# and Python. But such programs have their own nuances, and maintaining them is the programmer's responsibility: there is room for application crashes, performance hits and incorrect results, and such systems, if they grow very large, are not very fault tolerant and are difficult to maintain.

MapReduce has simplified all of this. Fault tolerance, parallel execution and resource management are the responsibility of the resource manager and the framework. Programmers only have to concentrate on business logic by writing the map and reduce functions.

Brief Description of MapReduce Architecture

A MapReduce application has broadly two functions called map and reduce.

Map: The Mapper process takes input as key/value pairs, processes them (i.e. performs some computation) and then produces intermediate results as key/value pairs,

i.e. map(k1,v1) ---> list(k2,v2)

Reduce: The Reducer process receives an intermediate key and a set of values in sorted order. It processes these and generates output key/value pairs by grouping the values for each key,

i.e. reduce(k2, list(v2)) ---> list(v3)

An optional function, the “Combiner”, can also be defined (to optimize bandwidth):

If defined, it runs after the Mapper and before the Reducer on every node that has run a map task.

The Combiner receives as input all data emitted by the Mapper instances on a given node.

The Combiner output is sent to the Reducers, instead of the output from the Mappers.

It is a "mini-reduce" process which operates only on data generated by one machine.
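
A minimal sketch of the combiner idea, written in plain Scala for consistency with the earlier examples (a real Hadoop combiner would be implemented as a Reducer class; this only illustrates the per-node pre-aggregation, with made-up data):

// (word, 1) pairs emitted by the map tasks running on one node
val mapperOutputOnNode = Seq(("spark", 1), ("hadoop", 1), ("spark", 1), ("spark", 1))

// Combiner: a "mini-reduce" over only this node's mapper output
val combined = mapperOutputOnNode
  .groupBy { case (word, _) => word }
  .map { case (word, pairs) => (word, pairs.map(_._2).sum) }

// Only ("spark", 3) and ("hadoop", 1) cross the network to the reducers,
// instead of four separate pairs, saving shuffle bandwidth.
println(combined)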

How does MapReduce work?

MapReduce is usually applied to huge datasets. A MapReduce job splits the input data into smaller independent chunks (input splits) and then processes them in parallel using map tasks and reduce tasks. Below is an example.

MapReduce Word Count (Pseudocode)

map(String input_key, String input_value):
  // input_key: document name
  // input_value: document contents
  for each word w in input_value:
    EmitIntermediate(w, "1");

reduce(String output_key, Iterator intermediate_values):
  // output_key: a word
  // output_values: a list of counts
  int result = 0;
  for each v in intermediate_values:
    result += ParseInt(v);
  Emit(AsString(result));

MapReduce Framework

Map: Apply a function to all the elements of a list

list1 = [1,2,3,4,5]
square x = x * x
list2 = Map square(list1)
print list2
-> [1,4,9,16,25]

Reduce: Combine all the elements of a list into a summary

list1 = [1,2,3,4,5]
A = reduce (+) list1
print A
-> 15
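
The same two operations expressed in Scala (plain Scala collections here, matching the style of the earlier Spark examples):

// Map: apply a function to every element of the list
val list1 = List(1, 2, 3, 4, 5)
val list2 = list1.map(x => x * x)
println(list2)          // List(1, 4, 9, 16, 25)

// Reduce: combine all elements into a single summary value
val a = list1.reduce((x, y) => x + y)
println(a)              // 15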

Apache Spark vs MapReduce

Now that we have seen how Apache Spark and MapReduce work, we need to understand how the two technologies compare with each other and what their pros and cons are, so as to get a clear understanding of which technology fits our use case.


A typical MapReduce pipeline involves at least 4 disk operations, whereas Spark involves only 2. This is one reason Spark is much faster than MapReduce. Spark also caches intermediate data that can be reused in further iterations, improving its performance further: the more iterative the process, the better Spark performs, thanks to in-memory processing and caching. This is where MapReduce falls behind, because it performs disk read/write operations for every iteration.
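
A small spark-shell sketch of how caching helps repeated work over the same data (the file path and filter terms are placeholders):

val logs = sc.textFile("logs.txt")                        // placeholder input path
val errors = logs.filter(line => line.contains("ERROR"))
errors.cache()                                            // keep the filtered RDD in memory

// The first action reads from disk and populates the cache;
// subsequent actions over `errors` are served from memory.
println(errors.count())
println(errors.filter(line => line.contains("timeout")).count())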

Let’s compare Spark and MapReduce on various other parameters to understand where to use Spark and where to use MapReduce.

Speed/Performance

MapReduce: MapReduce is designed for batch processing and is not as fast as Spark. It is used to gather data from multiple sources, process it once, and store it in a distributed data store like HDFS. It is best suited where memory is limited and the data to be processed is so big that it would not fit in the available memory.

Apache Spark: Spark is 10-100 times faster because of in-memory processing and its caching mechanism. It can deliver near real-time analytics. It is used in credit card processing, fraud detection, machine learning and data analytics, IoT sensors, etc.

Cost

MapReduce: As part of Apache open source, there is no software cost. Hardware cost is lower than for Spark, as MapReduce works with less memory (RAM); even commodity hardware is sufficient.

Apache Spark: Spark is also Apache open source, so there is no license cost. Hardware cost is higher than for MapReduce: even though Spark can run on commodity hardware, it needs a lot more memory (RAM), since it should be able to fit all the data in memory for optimal performance. The cluster needs relatively high-end commodity hardware with plenty of RAM, else performance suffers.

Ease of Use

MapReduce: MapReduce is a bit complex to write. It is written in Java and the APIs are somewhat complex for new programmers, so there is a steep learning curve. Pig offers a SQL-like syntax, which makes it easier for SQL developers to get on board. Also, there is no interactive mode available in MapReduce.

Apache Spark: Spark has APIs in Scala, Java, Python and R for all basic transformations and actions. It also has rich Spark SQL APIs for SQL-savvy developers; these cover most SQL functions, with more added in each new release. Spark also supports user-defined functions and user-defined aggregate functions (UDF/UDAF) for anyone who needs custom functions.

Compatibility

MapReduce: MapReduce is compatible with all data sources and file formats Hadoop supports. But MapReduce needs another scheduler like YARN or Mesos to run; it has no built-in scheduler like Spark’s default/standalone scheduler.

Apache Spark: Apache Spark can run in standalone mode using its default scheduler, or on YARN or Mesos. It can run on-premise or in the cloud. Spark supports most of the common data formats like Parquet, Avro, ORC and JSON. It also supports multiple languages, with APIs for Java, Scala, Python and R.

Data Processing

MapReduce: MapReduce can only be used for batch processing, where throughput is more important and latency can be compromised.

Apache Spark: Spark supports batch as well as stream processing, so it fits both use cases and can be used for Lambda architectures where applications need both a speed layer and a slower batch/data processing layer.

Security

MapReduce: MapReduce has more security features. It can enjoy all the Hadoop security benefits and integrate with Hadoop security projects like Knox Gateway and Sentry.

Apache Spark: Spark is a bit bare at the moment; it currently supports authentication via a shared secret. Spark can integrate with HDFS and use HDFS ACLs and file-level permissions, and it can run on YARN to leverage Kerberos.

Fault Tolerance

MapReduce: MapReduce uses replication for fault tolerance. If any slave daemon fails, the master daemons reschedule all pending and in-progress operations to another slave. This method is effective, but a single failure can significantly increase completion times.

Apache Spark: In Spark, RDDs are the building blocks, and Spark uses RDD lineage and the DAG for fault tolerance. If an RDD is lost, it will automatically be recomputed by replaying the original transformations.

Latency

MapReduce: MapReduce has high latency.

Apache Spark: Spark provides low-latency performance.

Interactive Mode

MapReduce: MapReduce does not have an interactive mode of operation.

Apache Spark: Spark can also be used interactively for data processing. It has out-of-the-box support for a Spark shell in Scala, Python and R.

Machine Learning / Graph Processing

MapReduce: No built-in support; Apache Mahout has to be used for ML.

Apache Spark: Spark has dedicated modules for ML (MLlib) and graph processing (GraphX).

Both these technologies, MapReduce and Spark, have their own pros and cons:

MapReduce is best suited for analysis of archived data where the data size is huge and will not fit in memory, and where instant results and intermediate solutions are not required. MapReduce also scales very well, and the cluster can be horizontally scaled with ease using commodity machines.

Offline analytics is a good fit for MapReduce, for example computing top products per month or unique clicks per banner.

MapReduce is also suited for web crawling, crawling tweets at scale, and NLP tasks like sentiment analysis.

Another use case for MapReduce is de-duplicating data from social networking sites, job sites and other similar sites.

MapReduce is also heavily used in data mining, for generating a model and then classifying data against it.

Spark is fast and so can be used for near real-time data analysis.

A lot of organizations are moving to Spark as their ETL processing layer from legacy ETL systems like Informatica. Spark has a very good, optimized SQL processing module which fits ETL requirements, as it can read from multiple sources and write to many kinds of data sinks.
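
A minimal sketch of that kind of Spark SQL ETL step, assuming Spark 2.x; the input/output paths and column names are hypothetical:

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("EtlSketch").getOrCreate()

// Read from one source format, aggregate with Spark SQL, write to another
val orders = spark.read.json("/data/raw/orders.json")     // hypothetical input path
orders.createOrReplaceTempView("orders")

val daily = spark.sql(
  "SELECT order_date, SUM(amount) AS total FROM orders GROUP BY order_date")

daily.write.mode("overwrite").parquet("/data/curated/daily_totals")  // hypothetical output path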

Spark can also handle streaming data, so it is well suited for Lambda architectures.
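
A minimal Spark Streaming sketch (DStream API) that counts words arriving in 10-second micro-batches; the socket host and port are assumptions for illustration, and sc is the existing SparkContext from the earlier examples:

import org.apache.spark.streaming.{Seconds, StreamingContext}

// Reuse the existing SparkContext with a 10-second batch interval
val ssc = new StreamingContext(sc, Seconds(10))
val lines = ssc.socketTextStream("localhost", 9999)        // hypothetical stream source

val counts = lines.flatMap(line => line.split(" "))
                  .map(word => (word, 1))
                  .reduceByKey((a, b) => a + b)
counts.print()

ssc.start()
ssc.awaitTermination()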

Most graph processing algorithms, like PageRank, perform multiple iterations over the same data and require a message-passing mechanism. Spark has great support for graph processing through its GraphX module.

Almost all machine learning algorithms work iteratively. Spark has a built-in scalable machine learning library called MLlib, which contains high-quality algorithms that leverage iteration and can yield better results than the one-pass approximations sometimes used on MapReduce.
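
A minimal MLlib sketch using the DataFrame-based spark.ml API, assuming Spark 2.x; the toy data points are made up:

import org.apache.spark.ml.clustering.KMeans
import org.apache.spark.ml.linalg.Vectors
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("KMeansSketch").getOrCreate()
import spark.implicits._

// Tiny in-memory dataset with the "features" vector column MLlib expects
val points = Seq(
  Vectors.dense(0.0, 0.0), Vectors.dense(0.1, 0.1),
  Vectors.dense(9.0, 9.0), Vectors.dense(9.1, 9.1)
).map(Tuple1.apply).toDF("features")

// K-Means is iterative; each iteration revisits the same dataset
val model = new KMeans().setK(2).setSeed(1L).fit(points)
model.clusterCenters.foreach(println)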


Conclusion:

Hadoop MapReduce is more mature, having been around longer, and its support in the open source community is also better. It can be beneficial for really big data use cases where memory is limited and the data will not fit in RAM. Most of the time, a Spark use case will involve Hadoop and other tools like Hive, Pig and Impala, and when these technologies complement each other it is a win for both Spark and MapReduce.


Nitin Kumar

Blog Author

I am an Alumni of IIT(ISM) Dhanbad. I have 15+ years of experience in Software industry working with Investment Banking and Financial Services domain. I have worked for Wall Street banks like Morgan Stanley and JP Morgan Chase. I have been working on Big Data technologies like hadoop,spark, cloudera for 3+ years.

