
How Big Data Can Help You Understand Your Customers and Grow Your Business

What’s the main purpose of a marketing campaign for any business? You’re trying to convince customers that you offer exactly what they need. What do you do to get there? You find out what they need. This is where big data enters the picture.

Big data is a general term for all information that allows you to understand the purchasing decisions of your target consumers. That’s not all. Big data also helps you create a sustainable budget, find the best way to manage your business, beat the competition, and generate higher revenue. In essence, big data is all information that helps you grow your brand. The process of analyzing and successfully using that data is called big data analytics.

Now that we’ve got the definition out of the way, let’s get practical. We’ll show you how you can use big data to understand the behavior of your customers and grow your brand.

Where Can You Find Big Data?

This is the big question about big data: where do you find it?

  1. When you’re looking for data that you can immediately turn into useful information, start with the historical data of your business. This includes all the information your business has collected since it was founded: earnings, revenue, stock price movements… everything you have. That data is already available to you, and you can use it to understand how your business performed under different circumstances.
  2. The US Census Bureau holds an enormous amount of data about US citizens. You can use the information about the population, economy, and products to understand the behavior of your target consumers.
  3. Data.gov is another great website to explore. It gives you data related to consumers, ecosystems, education, finance, energy, public safety, health, agriculture, manufacturing, and a few other categories. Explore the field relevant to your business and you’ll find data you can use. This information covers the US; if you need a similar tool for the EU, you can explore the European Union Open Data Portal.
  4. Facebook’s Graph API gives you a huge amount of information about the users of the platform.

How to Use Big Data to Your Brand’s Advantage

Collecting big data is not that hard. Information is everywhere. However, the huge volume of information you collect might confuse you. For now, you might want to focus on the historical data for your business. That should be enough for you to understand the behavior of your customers. When you understand how the analytics work, you can start comparing your historical data with the information you get from governmental and social media sources.

These are the main questions to ask when analyzing big data:

  • How much do your customers spend on a typical purchase? This information helps you understand their budget and spending habits. Did they spend more on an average purchase when they used promotions?
  • What are your conversion rates? How many of your social media followers follow a link and become actual customers? These rates help you determine the effect of your marketing campaign; once you understand it, you’ll be able to improve it.
  • How many new customers did you attract through promotions? Did those activities help you increase awareness of your brand?
  • How much have you spent on marketing and sales to attract a single customer? Divide the total expenses for promotional activities by the number of customers you attracted while the campaign lasted, and you’ll get the acquisition cost of a single customer (see the sketch after this list). If it’s too high, you’ll need to restructure your promotional activities. Compare historical data to identify the campaigns that were most and least successful in this respect.
  • What do your customers require in order to stay loyal to your brand? Do they ask for more support or communication?
  • How satisfied are your customers with the products or services you offer? What separates your happy customers from your unhappy ones? When you determine the factors that make your customers happy, you’ll be able to expand on them; when you identify the things that lead to dissatisfaction, you’ll work on them.
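To make the acquisition-cost and conversion questions concrete, here is a minimal sketch in Scala (it runs in the plain Scala REPL). The campaign figures and variable names are made-up assumptions, so substitute your own numbers:

    // Hypothetical campaign figures -- replace them with your own data.
    val promotionSpend = 12000.0   // total marketing and sales spend for the campaign
    val newCustomers   = 300       // customers acquired while the campaign ran
    val linkClicks     = 4500      // followers who clicked a campaign link
    val purchases      = 270       // clicks that turned into actual purchases

    // Acquisition cost: total promotional spend divided by customers acquired.
    val acquisitionCost = promotionSpend / newCustomers          // 40.0

    // Conversion rate: share of clicks that became paying customers, in percent.
    val conversionRate = purchases.toDouble / linkClicks * 100   // 6.0

    println(s"Acquisition cost per customer: $acquisitionCost")
    println(s"Conversion rate: $conversionRate%")

The same arithmetic works in a spreadsheet; the point is simply to track both numbers per campaign so you can compare them over time.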

Every Business Benefits from Big Data

Do you feel like you have to own a huge business to get interested in big data? That’s a misconception. It doesn’t matter how big your company is; you still have tons of data to analyze, and you can definitely benefit from it, because big data helps you solve problems more easily.

Collect all the data described above and compare it with the way your customers behaved in the past. Are you growing? If yes, why? If not, why not?

The key to understanding the behavior of your customers is to give this information a human face. Connect the numbers with the habits and spending behavior of your real customers. When you relate the data to actual human experience, you’ll be able to develop customer personas and increase the level of satisfaction your consumers get. When you do that, the growth of your business will be inevitable.

Robert Morris

Blog Author

Robert Morris is a freelance writer. He writes articles on business, career, marketing, and technology, and currently works as a blog editor at the educational blog askpetersen.com.


Suggested Blogs

Types Of Big Data

Big Data is creating a revolution in the IT field, and the use of analytics is increasing drastically every year. We create 2.5 quintillion bytes of data every day, so the field keeps expanding, including in B2C apps. Big Data has entered almost every industry today and is a dominant driving force behind the success of enterprises and organizations across the globe.

Let us first discuss: “What is Big Data?” As a quick Google search will show, “data” is defined as ‘the quantities, characters, or symbols on which operations are performed by a computer, which may be stored and transmitted in the form of electrical signals and recorded on magnetic, optical, or mechanical recording media’. The concept of Big Data is nothing complex; as the name suggests, “Big Data” refers to copious amounts of data which are too large to be processed and analyzed by traditional tools, and which are not stored or managed efficiently. Since the amount of Big Data increases exponentially (more than 500 terabytes of data are uploaded to Facebook alone in a single day), it represents a real problem in terms of analysis.

Types of Big Data

Classification is essential for the study of any subject, so Big Data is widely classified into three main types:

  1. Structured
  2. Unstructured
  3. Semi-structured

1. Structured data

Structured data refers to data which is already stored in databases in an ordered manner. It accounts for about 20% of the total existing data and is used the most in programming and computer-related activities. There are two sources of structured data: machines and humans. All the data received from sensors, weblogs, and financial systems is classified as machine-generated data. This includes medical devices, GPS data, usage statistics captured by servers and applications, and the huge amounts of data that move through trading platforms, to name a few. Human-generated structured data mainly includes all the data a human inputs into a computer, such as a name and other personal details. When a person clicks a link on the internet, or even makes a move in a game, data is created; companies can use this to figure out customer behavior and make the appropriate decisions and modifications.

Let’s understand structured data with an example. The top 3 players who have scored the most runs in international T20 matches are as follows:

    Player             Country        Runs    Matches played
    Brendon McCullum   New Zealand    2140    71
    Rohit Sharma       India          2237    90
    Virat Kohli        India          2167    65

2. Unstructured data

While structured data resides in traditional row-column databases, unstructured data is the opposite: it has no clear format in storage. The rest of the data created, about 80% of the total, accounts for unstructured big data. Most of the data a person encounters belongs to this category, and until recently there was not much to do with it except store it or analyze it manually.

Unstructured data is also classified based on its source into machine-generated and human-generated. Machine-generated data accounts for satellite images, scientific data from various experiments, and radar data captured by various facets of technology. Human-generated unstructured data is found in abundance across the internet, since it includes social media data, mobile data, and website content. This means that the pictures we upload to Facebook or Instagram, the videos we watch on YouTube, and even the text messages we send all contribute to the gigantic heap that is unstructured data. Examples of unstructured data include text, video, audio, mobile activity, social media activity, satellite imagery, surveillance imagery – the list goes on and on.

Unstructured data is further divided into:

  a. Captured data: data based on the user’s behavior. The best example is GPS on a smartphone, which tracks the user at every moment and provides real-time output.
  b. User-generated data: unstructured data that users themselves put on the internet at every moment, for example tweets and retweets, likes, shares, and comments on YouTube, Facebook, etc.

3. Semi-structured data

The line between unstructured data and semi-structured data has always been unclear, since most semi-structured data appears to be unstructured at a glance. Information that is not in the traditional database format of structured data, but contains some organizational properties which make it easier to process, is included in semi-structured data. For example, NoSQL documents are considered to be semi-structured, since they contain keywords that can be used to process the document easily.

Big Data analysis has been found to have definite business value, as its analysis and processing can help a company achieve cost reductions and dramatic growth. So it is imperative that you do not wait too long to exploit the potential of this excellent business opportunity.

Difference between Structured, Semi-structured and Unstructured data

  • Flexibility: Structured data is schema-dependent and less flexible. Semi-structured data is more flexible than structured data but less flexible than unstructured data. Unstructured data is flexible in nature, with no schema at all.
  • Transaction management: Structured data has mature transactions and various concurrency techniques. For semi-structured data, transactions are adapted from the DBMS and are not mature. Unstructured data has no transaction management and no concurrency.
  • Query performance: Structured queries allow complex joins. For semi-structured data, queries over anonymous nodes are possible. For unstructured data, only textual queries are possible.
  • Technology: Structured data is based on relational database tables. Semi-structured data is based on RDF and XML. Unstructured data is based on character and binary data.

Big data is indeed a revolution in the field of IT, and the use of data analytics is increasing every year. In spite of the demand, organizations are currently short of experts. To minimize this talent gap, many training institutes offer courses on Big Data analytics which help you upgrade the skill set needed to manage and analyze big data. If you are keen to take up data analytics as a career, then taking up Big Data training will be an added advantage.
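To make the semi-structured category concrete, here is a minimal Scala sketch (not from the original article) that uses Spark to read a hypothetical people.json file, where each line is a self-describing JSON object. Spark infers a schema from the keys, which is exactly the kind of organizational property that makes semi-structured data easier to process. The file name and fields are assumptions for illustration only.

    import org.apache.spark.sql.SparkSession

    object SemiStructuredExample {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("SemiStructuredExample")
          .master("local[*]")          // run locally for this sketch
          .getOrCreate()

        // people.json is a hypothetical file; each line looks like
        // {"name": "Rohit", "country": "India", "runs": 2237}
        val people = spark.read.json("people.json")

        people.printSchema()                         // schema inferred from the JSON keys
        people.filter(people("runs") > 2000).show()  // query it like structured data

        spark.stop()
      }
    }

Once loaded this way, the semi-structured JSON behaves much like the structured table shown earlier, which is why the boundary between the two categories is often blurry in practice.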

Installing Apache Spark on Windows

Apache Spark is a fast and general-purpose cluster computing system. It provides high-level APIs in Java, Scala, Python and R, and an optimized engine that supports general execution graphs. It also supports a rich set of higher-level tools including Spark SQL for SQL and structured data processing, MLlib for machine learning, GraphX for graph processing, and Spark Streaming.

In this document, we will cover the installation procedure of Apache Spark on the Windows 10 operating system.

Prerequisites

This guide assumes that you are using Windows 10 and that the user has admin permissions.

Audience

This document can be referred to by anyone who wants to install the latest version of Apache Spark on Windows 10.

System requirements

  • Windows 10 OS
  • 4 GB RAM
  • 20 GB free space

Installation Procedure

Step 1: Go to the official Apache Spark download page (spark.apache.org/downloads.html) and choose the latest release. For the package type, choose ‘Pre-built for Apache Hadoop’.

Step 2: Once the download is completed, unzip the file using WinZip, WinRAR, or 7-Zip.

Step 3: Create a folder called Spark under your user directory and copy-paste the content of the unzipped file into it:

    C:\Users\<username>\Spark

Step 4: Go to the conf folder and open the log file called log4j.properties.template. Change INFO to WARN (it can be ERROR to reduce the logging). This step and the next are optional. Remove the .template extension so that Spark can read the file.

Step 5: Now we need to configure the path. Go to Control Panel -> System and Security -> System -> Advanced Settings -> Environment Variables. Add a new user variable (or system variable) named SPARK_HOME pointing to your Spark folder (to add a new user variable, click the New button under “User variables for <username>”) and click OK. Then add %SPARK_HOME%\bin to the Path variable and click OK.

Step 6: Spark needs a piece of Hadoop to run. For Hadoop 2.7, you need winutils.exe; find it online and download it.

Step 7: Create a folder called winutils in the C drive and create a folder called bin inside it. Then move the downloaded winutils file into the bin folder:

    C:\winutils\bin

Add the user (or system) variable HADOOP_HOME, pointing to C:\winutils, just like you did for SPARK_HOME, and click OK.

Step 8: To run Apache Spark, Java should be installed on your computer. If you don’t have Java installed on your system, follow the process below.

Java Installation Steps:

  • Go to the official Oracle Java download page.
  • Accept the Licence Agreement for Java SE Development Kit 8u201.
  • Download the jdk-8u201-windows-x64.exe file.
  • Double-click the downloaded .exe file, click Next through the installer windows, and click Close when the installation finishes.

Test Java Installation: Open the command line and type java -version; it should display the installed version of Java. You should also check that JAVA_HOME is set and that %JAVA_HOME%\bin is included in the user variables (or system variables).

In the end, the environment variables contain three new paths (or only SPARK_HOME and HADOOP_HOME if you did not need to add the Java path).

Finally, create the c:\tmp\hive directory. This step is not necessary for later versions of Spark, because when you first start Spark it creates the folder by itself, but it is best practice to create it anyway:

    C:\tmp\hive

Test Installation: Open the command line and type spark-shell; the Spark shell should start up. We have completed the Spark installation on the Windows system.
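Once spark-shell starts, a quick way to confirm that the installation works end to end is to run a few lines inside the shell. This is only a minimal sketch; the numbers are arbitrary, and sc and spark are the context objects that spark-shell creates for you:

    // Paste into spark-shell to verify the installation.
    val nums = sc.parallelize(1 to 100)   // distribute a small range as an RDD
    println(nums.sum())                   // should print 5050.0
    println(spark.version)                // prints the installed Spark version

If both lines print without errors, Spark, winutils, and Java are wired together correctly.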
Let’s Create an RDD and a DataFrame

We will create one RDD and one DataFrame, and then wrap up.

1. We can create an RDD in three ways; we will use one of them here. Define any list and then parallelize it to create an RDD. Copy and paste the code below, one line at a time, into the spark-shell command line:

    val list = Array(1, 2, 3, 4, 5)
    val rdd = sc.parallelize(list)

The code above creates an RDD.

2. Now we will create a DataFrame from the RDD:

    import spark.implicits._
    val df = rdd.toDF("id")

The code above creates a DataFrame with id as its only column. To display the data in the DataFrame, use:

    df.show()

How to Uninstall Spark from a Windows 10 System

Follow the steps below to uninstall Spark on Windows 10. Remove the following system/user variables from the system:

  • SPARK_HOME
  • HADOOP_HOME

To remove the system/user variables, go to Control Panel -> System and Security -> System -> Advanced Settings -> Environment Variables, then find SPARK_HOME and HADOOP_HOME, select them, and press the Delete button.

Then find the Path variable, click Edit, select %SPARK_HOME%\bin and press Delete, select %HADOOP_HOME%\bin and press Delete, and click OK.

Open a command prompt and type spark-shell; you should now get an error, which confirms that Spark has been successfully uninstalled from the system.
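As a small extension of the example above (not part of the original post), a DataFrame can also be created directly from a local collection without building an RDD first. This sketch assumes the same spark-shell session, with import spark.implicits._ already executed; the product data is made up:

    // Hypothetical data: build a DataFrame straight from a Seq of tuples.
    val sales = Seq(("laptop", 1200.0), ("phone", 800.0), ("tablet", 450.0)).toDF("product", "price")

    // Transformations such as filter are lazy; show() is the action that triggers the work.
    sales.filter($"price" > 500).show()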

Apache Spark Vs Hadoop MapReduce

Why We Need Big Data Frameworks

Big data is primarily defined by the volume of a data set. Big data sets are generally huge, measuring tens of terabytes and sometimes crossing the threshold of petabytes. It is surprising to know how much data is generated every minute. As estimated by DOMO: over 2.5 quintillion bytes of data are created every single day, and it’s only going to grow from there. By 2020, it’s estimated that 1.7 MB of data will be created every second for every person on earth. You can read DOMO’s full report, including industry-specific breakdowns.

To store and process even a fraction of this amount of data, we need Big Data frameworks: traditional databases would not be able to store so much data, nor would traditional processing systems be able to process it quickly. This is where frameworks like Apache Spark and MapReduce come to our rescue and help us get deep insights into this huge amount of structured, unstructured, and semi-structured data and make more sense out of it.

Market Demands for Spark and MapReduce

Apache Spark was originally developed in 2009 at UC Berkeley by the team who later founded Databricks. Since its launch, Spark has seen rapid adoption and growth. Most cutting-edge technology organizations like Netflix, Apple, Facebook, and Uber run massive Spark clusters for data processing and analytics. The demand for Spark is increasing at a very fast pace. According to a marketanalysis.com report forecast, the global Apache Spark market will grow at a CAGR of 67% between 2019 and 2022. The global Spark market revenue is rapidly expanding and may grow to $4.2 billion by 2022, with a cumulative market valued at $9.2 billion (2019–2022).

MapReduce has been around a little longer, having been developed in 2006, and it gained industry acceptance during its initial years. But over the last 5 years or so, with Apache Spark gaining more ground, demand for MapReduce as the processing engine has declined. Still, it cannot be said in black and white that MapReduce will be completely replaced by Apache Spark in the coming years. Both technologies have their own pros and cons, as we will see below. One solution cannot fit every situation, so MapReduce will have its own takers depending on the problem to be solved. Also, Spark and MapReduce complement each other on many occasions.

Both these technologies have made inroads into all walks of everyday life. You name the industry and it’s there: telecommunications, e-commerce, banking, insurance, healthcare, medicine, agriculture, biotechnology, etc.

What is Spark?

As per Apache, “Apache Spark is a unified analytics engine for large-scale data processing”. Apache Spark is a fast and general-purpose cluster computing system. It provides high-level APIs in Java, Scala, Python and R, and an optimized engine that supports general execution graphs. It also supports a rich set of higher-level tools including Spark SQL for SQL and structured data processing, MLlib for machine learning, GraphX for graph processing, and Spark Streaming.

Instead of just “map” and “reduce” functions, Spark defines a large set of operations called transformations and actions for developers. These are ultimately translated to map/reduce by the Spark execution engine, and the operations can be arbitrarily combined for highly optimized performance.

Spark is developed in the Scala language and can run on Hadoop in standalone mode using its own default resource manager, as well as in cluster mode using the YARN or Mesos resource manager.
It is not mandatory to use Hadoop with Spark; Spark can be used with S3 or Cassandra as well. But in the majority of cases, Hadoop is the best fit as Spark’s data storage layer.

Features of Spark

  • Speed: Spark enables applications running on Hadoop to run up to 100x faster in memory and up to 10x faster on disk. Spark achieves this by minimising disk read/write operations for intermediate results, storing them in memory and performing disk operations only when essential, using a DAG, a query optimizer and a highly optimized physical execution engine.
  • Fault Tolerance: Apache Spark achieves fault tolerance using its abstraction layer called RDD (Resilient Distributed Datasets), which is designed to handle worker node failure.
  • Lazy Evaluation: All processing (transformations) on Spark RDDs/Datasets is lazily evaluated, i.e. the output RDDs/Datasets are not available right away after a transformation but only when an action is performed.
  • Dynamic nature: Spark offers over 80 high-level operators that make it easy to build parallel apps.
  • Multiple Language Support: Spark provides multiple programming language support and you can use it interactively from the Scala, Python, R, and SQL shells.
  • Reusability: Spark code can be used for batch processing, joining streaming data against historical data, as well as running ad-hoc queries on streaming state.
  • Machine Learning: Apache Spark comes with out-of-the-box support for machine learning, called MLlib, which can be used for complex, predictive data analytics.
  • Graph Processing: GraphX is Apache Spark's API for graphs and graph-parallel computation. You can view the same data as both graphs and collections, transform and join graphs with RDDs efficiently, and write custom iterative graph algorithms using the Pregel API.
  • Real-Time Stream Processing: Spark Streaming brings Apache Spark's language-integrated API to stream processing, letting you write streaming jobs the same way you write batch jobs.

Where is Spark Usually Used?

Spark is used by 1000+ organizations in production. Many of these organizations are known to run Spark clusters of 1000+ nodes. In terms of data size, Spark has been shown to work well up to petabytes. It has been used to sort 100 TB of data 3X faster than Hadoop MapReduce (which sorted 100 TB of data in 23 min using 2100 machines) with 10X fewer machines, winning the 2014 Daytona GraySort Benchmark, as well as to sort 1 PB. Several production workloads use Spark to do ETL and data analysis on PBs of data. Below are some examples where Spark is used across industries:

  • AsiaInfo: Uses Spark Core, Streaming, MLlib and GraphX together with Hadoop to build cost-effective data centre solutions for customers in the telecom industry as well as other industrial sectors.
  • Atp: Predictive models and learning algorithms to improve the relevance of programmatic marketing.
  • Credit Karma: Creates personalized experiences using Spark.
  • eBay Inc: Uses Spark Core for log transaction aggregation and analytics.
  • Kelkoo: Uses Spark Core, SQL, and Streaming for product recommendations, BI and analytics, real-time malicious activity filtering, and data mining.

More examples can be found on Apache’s Powered By page.

Spark Example in Scala (the Spark shell can be used for this):

    // "sc" is a "Spark context" – this transforms the file into an RDD
    val textFile = sc.textFile("data.txt")

    // Return the number of items (lines) in this RDD; count() is an action
    textFile.count()

    // Demo filtering. Filter is a transformation. By itself this does no real work
    val linesWithSpark = textFile.filter(line => line.contains("Spark"))

    // Demo chaining – how many lines contain "Spark"? count() is an action.
    textFile.filter(line => line.contains("Spark")).count()

    // Length of the line with the most words. Reduce is an action.
    textFile.map(line => line.split(" ").size).reduce((a, b) => if (a > b) a else b)

    // Word count – traditional map-reduce. collect() is an action
    val wordCounts = textFile.flatMap(line => line.split(" ")).map(word => (word, 1)).reduceByKey((a, b) => a + b)
    wordCounts.collect()

Sample Spark Transformations

  • map(func): Return a new distributed dataset formed by passing each element of the source through a function func.
  • filter(func): Return a new dataset formed by selecting those elements of the source on which func returns true.
  • union(otherDataset): Return a new dataset that contains the union of the elements in the source dataset and the argument.

Sample Spark Actions

  • reduce(func): Aggregate the elements of the dataset using a function func (which takes two arguments and returns one). The function should be commutative and associative so that it can be computed correctly in parallel.
  • collect(): Return all the elements of the dataset as an array at the driver program. This is usually useful after a filter or other operation that returns a sufficiently small subset of the data.
  • count(): Return the number of elements in the dataset.

These examples are referred from the RDD Programming Guide.

What is MapReduce?

MapReduce is a programming model for processing and generating large data sets with a parallel, distributed algorithm on a cluster.

Programmers have been writing parallel programs for a long time in different languages like C++, Java, C#, and Python. But such programs have their own nuances, and maintaining them is the programmer's responsibility. There are chances of the application crashing, performance hits, and incorrect results, and if such systems grow very large they are not very fault tolerant and are difficult to maintain.

MapReduce has simplified all of this. Fault tolerance, parallel execution, and resource management are all the responsibility of the resource manager and the framework. Programmers have to concentrate only on business logic by writing only map and reduce functions.

Brief Description of the MapReduce Architecture

A MapReduce application has, broadly, two functions called map and reduce.

  • Map: The mapper process takes input as key/value pairs, processes them (i.e. performs some computation) and then produces intermediate results as key/value pairs, i.e. map(k1, v1) ---> list(k2, v2)
  • Reduce: The reducer process receives an intermediate key and a set of values in sorted order. It processes these and generates output key/value pairs by grouping the values for each key, i.e. reduce(k2, list(v2)) ---> list(v3)

You can also define an optional function, the “Combiner” (to optimize bandwidth):

  • If defined, it runs after the Mapper and before the Reducer on every node that has run a map task.
  • The Combiner receives as input all data emitted by the Mapper instances on a given node.
  • The Combiner output is sent to the Reducers, instead of the output from the Mappers.
  • It is a "mini-reduce" process which operates only on data generated by one machine.

How Does MapReduce Work?

MapReduce is usually applied to huge datasets. A MapReduce job splits the input data into smaller independent chunks called partitions and then processes them independently using map tasks and reduce tasks.
Below is an example.

MapReduce Word Count (Pseudocode)

    map(String input_key, String input_value):
      // input_key: document name
      // input_value: document contents
      for each word w in input_value:
        EmitIntermediate(w, "1");

    reduce(String output_key, Iterator intermediate_values):
      // output_key: a word
      // output_values: a list of counts
      int result = 0;
      for each v in intermediate_values:
        result += ParseInt(v);
      Emit(AsString(result));

Map: apply a function to all the elements of a list

    list1 = [1,2,3,4,5];
    square x = x * x
    list2 = Map square(list1)
    print list2 -> [1,4,9,16,25]

Reduce: combine all the elements of a list into a summary

    list1 = [1,2,3,4,5];
    A = reduce (+) list1
    print A -> 15

Apache Spark vs MapReduce

Having seen how Apache Spark and MapReduce work, we need to understand how these two technologies compare with each other and what their pros and cons are, so as to get a clear understanding of which technology fits our use case.

MapReduce involves at least 4 disk operations per job while Spark involves only 2. This is one reason Spark is much faster than MapReduce. Spark also caches intermediate data which can be reused in further iterations, improving its performance further. The more iterative the process, the better Spark performs, thanks to in-memory processing and caching; this is where MapReduce performance lags behind, because of disk read/write operations in every iteration.

Let’s compare Spark and MapReduce on other parameters to understand where to use Spark and where to use MapReduce.

  • Speed/Performance. MapReduce: designed for batch processing and not as fast as Spark. It is used for gathering data from multiple sources, processing it once, and storing it in a distributed data store like HDFS. It is best suited where memory is limited and the data being processed is so big that it would not fit in the available memory. Apache Spark: 10–100 times faster because of in-memory processing and its caching mechanism; it can deliver near real-time analytics. It is used in credit card processing, fraud detection, machine learning and data analytics, IoT sensors, etc.
  • Cost. MapReduce: as part of Apache open source there is no software cost, and hardware cost is lower than Spark’s because it works with less memory (RAM); even commodity hardware is sufficient. Apache Spark: also Apache open source, so no license cost, but hardware cost is higher than MapReduce’s. Even though Spark can run on commodity hardware, it needs a lot more memory (RAM), since it should be able to fit all the data in memory for optimal performance; the cluster needs fairly high-end commodity hardware with lots of RAM, otherwise performance suffers.
  • Ease of Use. MapReduce: a bit complex to write; it is written in Java and the APIs are complex for new programmers, so there is a steep learning curve. Pig has SQL-like syntax and makes it easier for SQL developers to get on board. Also, there is no interactive mode available in MapReduce. Apache Spark: has APIs in Scala, Java, Python, and R for all basic transformations and actions, plus rich Spark SQL APIs for SQL-savvy developers covering most SQL functions, with more added in each new release. Spark also allows user-defined analytical functions and functions (UDF/UDAF) for anyone who needs custom functions.
  • Compatibility. MapReduce: compatible with all the data sources and file formats Hadoop supports, but it needs another scheduler like YARN or Mesos to run; it has no inbuilt scheduler like Spark’s default/standalone scheduler. Apache Spark: can run in standalone mode using the default scheduler, and can also run on YARN or Mesos, on-premise or in the cloud. Spark supports most data formats like Parquet, Avro, ORC, JSON, etc., and has APIs for Java, Scala, Python, and R.
  • Data Processing. MapReduce: can only be used for batch processing, where throughput is more important and latency can be compromised. Apache Spark: supports batch as well as stream processing, so it fits both use cases and can be used for Lambda designs where applications need both a speed layer and a slower batch layer.
  • Security. MapReduce: has more security features; it can enjoy all the Hadoop security benefits and integrate with Hadoop security projects like Knox Gateway and Sentry. Apache Spark: a bit bare at the moment; it currently supports authentication via a shared secret. Spark can integrate with HDFS and use HDFS ACLs and file-level permissions, and it can run on YARN to leverage Kerberos.
  • Fault Tolerance. MapReduce: uses replication for fault tolerance. If any slave daemon fails, master daemons reschedule all pending and in-progress operations to another slave. This method is effective, but it can significantly increase completion times for operations even with a single failure. Apache Spark: RDDs are the building blocks, and Spark uses RDDs and the DAG for fault tolerance; if an RDD is lost, it is automatically recomputed using the original transformations.
  • Latency. MapReduce: high latency. Apache Spark: low-latency performance.
  • Interactive Mode. MapReduce: no interactive mode of operation. Apache Spark: can be used interactively for data processing, with out-of-the-box support for the Spark shell in Scala, Python, and R.
  • Machine Learning / Graph Processing. MapReduce: no built-in support; Mahout has to be used for ML. Apache Spark: has dedicated modules for ML and graph processing.

Conclusion

Both MapReduce and Spark have pros and cons:

  • MapReduce is best suited for analysis of archived data where the data size is huge and will not fit in memory, and where instant results and intermediate solutions are not required. MapReduce also scales very well, and the cluster can be horizontally scaled with ease using commodity machines.
  • Offline analytics is a good fit for MapReduce, such as top products per month or unique clicks per banner.
  • MapReduce is also suited for web crawling, crawling tweets at scale, and NLP tasks like sentiment analysis.
  • Another use case for MapReduce is de-duplicating data from social networking sites, job sites, and other similar sites.
  • MapReduce is also heavily used in data mining for generating a model and then classifying.
  • Spark is fast and so can be used in near real-time data analysis.
  • A lot of organizations are moving to Spark as their ETL processing layer from legacy ETL systems like Informatica. Spark has a very good, optimized SQL processing module which fits ETL requirements, as it can read from multiple sources and write to many kinds of data sources.
  • Spark can also handle streaming data, so it is well suited for Lambda designs.
  • Most graph processing algorithms, like PageRank, perform multiple iterations over the same data, and this requires a message-passing mechanism. Spark has great support for graph processing using the GraphX module.
  • Almost all machine learning algorithms work iteratively.
Spark has a built-in, scalable machine learning library called MLlib, which contains high-quality algorithms that leverage iteration and can yield better results than the one-pass approximations sometimes used on MapReduce.

Hadoop MapReduce is more mature, as it has been around longer, and its support is also better in the open-source community. It can be beneficial for really big data use cases where memory is limited and the data will not fit in RAM. Most of the time, a Spark use case will also involve Hadoop and other tools like Hive, Pig, and Impala, so when these technologies complement each other, it is a win for both Spark and MapReduce.
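To illustrate why iterative workloads favor Spark, here is a minimal, hypothetical Scala sketch (not from the original article) that caches a dataset once and then reuses it across several passes, so each pass reads from memory rather than from disk. The points.txt file and the update rule are made up for illustration:

    // Assumes a spark-shell session, where sc (SparkContext) already exists.
    val data = sc.textFile("points.txt")   // hypothetical file, one number per line
      .map(_.toDouble)
      .cache()                             // keep the parsed RDD in memory for reuse

    var estimate = 0.0
    for (i <- 1 to 10) {                   // ten passes over the same cached data
      val mean = data.sum() / data.count()
      estimate = (estimate + mean) / 2.0   // toy update rule, purely illustrative
    }
    println(s"Final estimate: $estimate")

In MapReduce, each of those ten passes would typically be a separate job that re-reads its input from disk, which is exactly the overhead described in the comparison above.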