
Analysis Of Big Data Using Spark And Scala

Processing Big Data over a network cluster has become a major application in multiple industries. The wide adoption of MapReduce and Hadoop is proof of this evolving technology, as is the recent rise of Apache Spark, a data processing engine written in the Scala programming language.

Introduction to Scala

Scala is a general-purpose, object-oriented programming language, similar to Java. The name is short for “scalable language,” meaning its capabilities can grow with your requirements; many other technologies are also built on Scala.

Scala can serve as anything from a simple scripting language to the preferred language for mission-critical applications.

Scala has the following capabilities:

  • Support for functional programming, with features including currying, type inference, immutability, lazy evaluation, and pattern matching (see the sketch after this list).
  • An advanced type system, including algebraic data types and anonymous types.
  • Features not available in Java, such as operator overloading, named parameters, raw strings, and the absence of checked exceptions.
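
The short sketch below is a minimal, self-contained illustration of several of these features together; the names and values are made up for the example.

    object ScalaFeatures {
      // Immutability and type inference: the compiler infers List[Int]
      val numbers = List(1, 2, 3, 4, 5)

      // Currying: a function with multiple parameter lists,
      // partially applied here to get a one-argument function
      def multiply(x: Int)(y: Int): Int = x * y
      val double = multiply(2) _

      // Lazy evaluation: the body runs only on first access
      lazy val expensive = { println("computed"); 42 }

      // Pattern matching over an algebraic data type
      sealed trait Shape
      case class Circle(radius: Double) extends Shape
      case class Square(side: Double) extends Shape

      def area(s: Shape): Double = s match {
        case Circle(radius) => math.Pi * radius * radius
        case Square(side)   => side * side
      }

      def main(args: Array[String]): Unit = {
        println(numbers.map(double)) // List(2, 4, 6, 8, 10)
        println(area(Circle(1.0)))   // 3.141592653589793
      }
    }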

Scala runs seamlessly on the Java Virtual Machine (JVM), and Scala and Java classes can be freely mixed and can refer to each other.
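
For example, a Scala program can instantiate and use a Java standard-library collection directly (the class alias and values here are illustrative):

    import java.util.{ArrayList => JArrayList}

    object InteropExample {
      def main(args: Array[String]): Unit = {
        // A Java collection created and used directly from Scala
        val names = new JArrayList[String]()
        names.add("Spark")
        names.add("Scala")
        println(names.size()) // prints 2
      }
    }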

Scala also supports cluster computing; the most popular framework in this space, Spark, was itself written in Scala.

Introduction to Apache Spark

Apache Spark is an open-source Big Data processing framework that provides an interface for programming entire clusters with implicit data parallelism and fault tolerance. Apache Spark is widely used for fast processing of large datasets.
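
As a rough sketch of what this looks like in practice, the minimal driver program below distributes a local collection across worker cores and reduces it in parallel. The application name and the local master URL are illustrative choices, not anything prescribed by Spark.

    import org.apache.spark.{SparkConf, SparkContext}

    object ParallelSum {
      def main(args: Array[String]): Unit = {
        // "local[*]" runs Spark in-process using all cores; on a real
        // cluster this would be the cluster manager's master URL.
        val conf = new SparkConf().setAppName("ParallelSum").setMaster("local[*]")
        val sc = new SparkContext(conf)

        // parallelize() splits the collection into partitions processed
        // in parallel; lost partitions are recomputed from lineage,
        // which is the basis of Spark's fault tolerance.
        val sum = sc.parallelize(1 to 1000000).reduce(_ + _)
        println(s"Sum: $sum")

        sc.stop()
      }
    }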

Apache Spark is an open-source platform, built by a wide set of software developers from over 200 companies. Since 2009, more than 1000 developers have contributed to Apache Spark.

Apache Spark provides better capabilities for Big Data applications than earlier technologies such as Hadoop MapReduce. Listed below are some features of Apache Spark:

  • Comprehensive framework

Spark provides a comprehensive and unified framework to manage Big Data processing, and supports a diverse range of data sets including text data, graph data, batch data, and real-time streaming data.

  • Speed

Spark can run programs up to 100 times faster than Hadoop MapReduce when running in memory, and 10 times faster when running on disk. Spark has an advanced DAG (directed acyclic graph) execution engine that supports cyclic data flow and in-memory data sharing, so different jobs can execute against the same data.
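
The in-memory sharing described above is what RDD caching exploits. Below is a small sketch, assuming a SparkContext named sc (as created in the earlier example, or as provided automatically by the spark-shell) and a hypothetical log file path:

    // Hypothetical input path
    val events = sc.textFile("/data/events.log")

    // cache() keeps the filtered RDD in memory after its first
    // computation, so both actions below reuse the same in-memory
    // data instead of re-reading and re-filtering from disk.
    val errors = events.filter(_.contains("ERROR")).cache()

    val total  = errors.count()                             // first job: computes and caches
    val logins = errors.filter(_.contains("login")).count() // second job: served from memory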

  • Easy to use

With a built-in set of over 80 high-level operators, Spark allows programmers to write Java, Scala, or Python applications quickly.
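
The classic word count shows how far a few of these operators go. Again assuming a SparkContext sc; the input path is a placeholder:

    // Word count as a chain of high-level operators
    val counts = sc.textFile("/data/input.txt") // placeholder path
      .flatMap(_.split("\\s+"))                 // split each line into words
      .map(word => (word, 1))                   // pair each word with a count of 1
      .reduceByKey(_ + _)                       // sum the counts per word

    counts.take(10).foreach(println)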

  • Enhanced support

In addition to Map and Reduce operations, Spark provides support for SQL queries, streaming data, machine learning, and graph processing.

  • Runs on diverse platforms

Apache Spark applications can be run in standalone cluster mode or in the cloud. Spark provides access to diverse data sources including HDFS, Cassandra, HBase, Hive, Tachyon, and any Hadoop data source. Spark can be deployed as a standalone server or on a distributed framework such as Mesos or YARN.
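
In code, the choice of cluster manager usually comes down to the master URL passed to SparkConf. The URLs below follow Spark's documented schemes (the bare "yarn" master is the Spark 2.x form), and the host names are placeholders:

    import org.apache.spark.SparkConf

    // Standalone cluster (placeholder host)
    val standalone = new SparkConf().setMaster("spark://master-host:7077")

    // Mesos cluster (placeholder host)
    val mesos = new SparkConf().setMaster("mesos://mesos-host:5050")

    // YARN: the master is simply "yarn"; cluster details come from the
    // Hadoop configuration on the classpath
    val yarn = new SparkConf().setMaster("yarn")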

  • Flexibility

In addition to Scala, programmers can use Java, Python, Clojure, and R to build applications with Spark.

  • Comprehensive library support

As a Spark programmer, you can combine additional libraries within the same application to provide Big Data analytics and machine learning capabilities.

The supported libraries include:

  • Spark Streaming, used for processing real-time streaming data.
  • Spark SQL, used for exposing Spark datasets over JDBC APIs and for executing SQL-like queries on Spark datasets (see the Spark SQL sketch after this list).
  • Spark MLlib, the machine learning library, consisting of common learning algorithms and utilities.
  • Spark GraphX, the Spark API for graphs and graph-parallel computation.
  • BlinkDB, a query engine library used for running interactive SQL queries on large data volumes.
  • Tachyon, a memory-centric distributed file system that enables file sharing across cluster frameworks.
  • Spark Cassandra Connector and SparkR, which are integration adapters. With the Cassandra Connector, Spark can access data from the Cassandra database and perform data analytics.
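
As a taste of Spark SQL, the sketch below registers a tiny in-memory dataset as a view and queries it with SQL. It uses the SparkSession entry point introduced in Spark 2.x, and the data and column names are made up for illustration:

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder()
      .appName("SqlExample")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    // Illustrative in-memory data registered as a SQL view
    val people = Seq(("Alice", 34), ("Bob", 45)).toDF("name", "age")
    people.createOrReplaceTempView("people")

    // An SQL-like query over a Spark dataset
    spark.sql("SELECT name FROM people WHERE age > 40").show()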

Compatibility with Hadoop and MapReduce

Apache Spark can be much faster than other Big Data technologies.

Apache Spark can run on an existing Hadoop Distributed File System (HDFS) to provide compatibility along with enhanced functionality. It is easy to deploy Spark applications on existing Hadoop v1 and v2 clusters. Spark can use HDFS for data storage, and can work with Hadoop-compatible data sources including HBase and Cassandra.
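
From the application's point of view, reading from HDFS looks like any other textFile call; only the URI scheme changes. The namenode host, port, and path below are placeholders:

    // Placeholder namenode host, port, and path
    val hdfsData = sc.textFile("hdfs://namenode:9000/user/data/input.txt")
    println(hdfsData.count())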

Apache Spark is compatible with MapReduce and enhances its capabilities with features such as in-memory data storage and real-time processing.

Conclusion

The standard API set of the Apache Spark framework makes it the right choice for Big Data processing and data analytics. For installations that already run a MapReduce implementation on Hadoop, Spark and MapReduce can be used together for better results.

Apache Spark is the right alternative to MapReduce for installations that involve large amounts of data requiring low-latency processing.
