
Fundamentals of Apache Spark

3rd May, 2024


    Before getting into the fundamentals of Apache Spark, let’s first understand what Apache Spark really is. The following is a widely used one-liner definition.

    Apache Spark is a fast and general-purpose, cluster computing system.

    You will find multiple definitions when you search for the term Apache Spark. All of them give a similar gist, just in different words. Let’s understand the special keywords that describe Apache Spark.

    Fast: Spark is fast because it uses in-memory computing; it can run some queries up to 100x faster than MapReduce. We will get into the details of the architecture a little later in the article to understand this aspect better. You will find the keywords ‘Fast’ and/or ‘In-memory’ in almost all definitions.

    General Purpose: Apache Spark is a unified framework. It provides one execution model for all tasks, which makes it easy for developers to learn and to work with multiple APIs. Spark offers over 80 high-level operators that make it easy to build parallel apps, and you can use it interactively from the Scala, Python, R, and SQL shells.

    Spark powers a stack of libraries including SQL and DataFrames, MLlib for machine learning, GraphX, and Spark Streaming. You can combine these libraries seamlessly in the same application.
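    As a small illustration of how these libraries combine in one application, here is a minimal sketch (assuming PySpark is installed, e.g. via `pip install pyspark`; the data and names are illustrative) that builds a DataFrame and then queries it with plain SQL:

    ```python
    from pyspark.sql import SparkSession

    # Local session for demonstration (local[*] uses all cores on this machine).
    spark = SparkSession.builder.master("local[*]").appName("combined-demo").getOrCreate()

    # Build a DataFrame from in-memory data (names and ages are made up).
    df = spark.createDataFrame([("alice", 34), ("bob", 29)], ["name", "age"])

    # Register it as a temporary view and query it with SQL --
    # the DataFrame and SQL libraries mix freely in the same application.
    df.createOrReplaceTempView("people")
    adults = spark.sql("SELECT name FROM people WHERE age > 30")

    result = [row.name for row in adults.collect()]
    print(result)  # ['alice']
    spark.stop()
    ```

    The same pattern extends to MLlib and Spark Streaming: every library works on the same underlying engine and data abstractions.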

    Cluster Computing: Spark processes data efficiently on a set of computers (commodity hardware) or distributed systems. A few definitions also call it a parallel data processing engine. Spark is widely utilized for big data analytics and related processing.

    One more important keyword associated with Spark is open source. It was open-sourced in 2010 under a BSD license.

    Spark (and its RDD abstraction) was developed in its present form in 2012, in response to limitations of the MapReduce cluster computing paradigm. Spark is commonly seen as an in-memory replacement for MapReduce.

    Since its release, Apache Spark has seen rapid adoption due to its characteristics briefly discussed above.

    Who should go for Apache Spark

    Before asking “Is Apache Spark for me?” or “Do I have the right skill set?”, it is important to look at the generality characteristic in further depth.

    Apache Spark consists of Spark Core and a set of libraries. The core is the distributed execution engine, and the Java, Scala, and Python APIs offer a platform for distributed ETL application development. Additional libraries, built atop the core, support diverse workloads such as streaming, SQL, and machine learning.


    As Spark provides these multiple components, it’s evident that Spark is developed and widely utilized for big data and analytics.

    Professionals who should learn Apache Spark

    If you aspire to land in one of the following professions, or simply have an interest in data and insights, knowledge of Spark will prove useful:

    • Data Scientists
    • Data Engineers

    Prerequisites of learning Apache Spark

    Apache Spark is the number one framework in big data, and most students looking for big data training start with it. For knowledge seekers looking for Spark training, it is important to note that there are a few prerequisites to learning Apache Spark.

    Before getting into big data, you should have at least basic knowledge of:

    • Any one of the programming languages: core Python or Scala.
    • Spark can be installed on any platform, but its framework is similar to Hadoop, so knowledge of HDFS and YARN is highly recommended. Knowledge of Hive is an added advantage but is not mandatory.
    • Basic knowledge of SQL. In particular, SELECT, JOIN, and GROUP BY are highly recommended.
    • Optionally, knowledge of a cloud platform like AWS. Recommended for those who want to work with production-like environments.
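    The three SQL constructs named above carry over directly to Spark SQL. A minimal sketch (assuming PySpark is installed; the tables and figures are invented for illustration):

    ```python
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.master("local[*]").appName("sql-basics").getOrCreate()

    # Two tiny illustrative tables registered as SQL views.
    orders = spark.createDataFrame(
        [(1, "books", 20.0), (2, "books", 35.0), (3, "toys", 10.0)],
        ["order_id", "category", "amount"])
    categories = spark.createDataFrame(
        [("books", "media"), ("toys", "kids")],
        ["category", "department"])
    orders.createOrReplaceTempView("orders")
    categories.createOrReplaceTempView("categories")

    # SELECT ... JOIN ... GROUP BY: the three prerequisite constructs.
    totals = spark.sql("""
        SELECT c.department, SUM(o.amount) AS total
        FROM orders o
        JOIN categories c ON o.category = c.category
        GROUP BY c.department
    """)
    result = {row.department: row.total for row in totals.collect()}
    spark.stop()
    ```

    If you can read that query comfortably, your SQL background is sufficient to get started.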

    System requirements of Apache Spark

    The official Apache Spark site gives the following recommendations (follow the link on the site for further details):

    Storage System: There are a few ways to set this up:

    1. Spark can run on the same nodes as HDFS. A Spark standalone cluster can be installed on the same nodes, with Spark and Hadoop memory and CPU usage configured accordingly to avoid interference.
    2. Hadoop and Spark can run under a common resource manager (e.g., YARN).
    3. Spark can run in the same local area network as HDFS but on separate nodes.
    4. If the requirement is quick response and low latency from data stores, run compute jobs on nodes separate from the storage nodes.

    Local Disks: Typically 4-8 disks per node, configured without RAID.
    If the underlying OS is Linux, mount the disks with the noatime option, and in the Spark environment configure the spark.local.dir variable to be a comma-separated list of local disks.
    Note: For HDFS, these can be the same disks HDFS uses.
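    Such a disk layout can be expressed in Spark's configuration. A minimal sketch of a spark-defaults.conf entry, assuming three hypothetical local disks mounted at /mnt/disk1 through /mnt/disk3:

    ```properties
    # spark-defaults.conf -- the mount paths below are hypothetical examples.
    # spark.local.dir takes a comma-separated list of local scratch directories
    # used for shuffle files and on-disk spills.
    spark.local.dir  /mnt/disk1/spark,/mnt/disk2/spark,/mnt/disk3/spark
    ```

    Each directory should sit on a separate physical disk so that shuffle and spill I/O is spread across spindles rather than bottlenecked on one device.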

    Memory: From a minimum of 8 GB up to hundreds of GBs of memory per machine.
    A common recommendation is to allocate 75% of the memory to Spark.

    Network: 10 Gigabit or faster network.

    CPU cores: 8-16 cores per machine.

    However, for training and learning purposes, or just to get a taste of Spark, the following two options are available:

    1. Run it locally
    2. Use AWS EMR (Or any cloud computing service)

    For learning purposes, a system with a minimum of 4 GB RAM and 30 GB of disk may prove enough.

    History of Apache Spark


    Spark was primarily developed to overcome the limitations of MapReduce.

    Versioning: Spark’s initial releases were 0.x versions; version 1.6 is considered a stable version and is used in multiple commercial corporate projects. At the time of writing, version 2.3 is the latest available version.

    MapReduce is a cluster computing paradigm which forces a particular linear data flow on distributed programs: MapReduce programs read input data from disk, map a function across the data, reduce the results of the map, and store the reduction results back on disk.

    1. Due to the multiple copies of data and the multiple disk I/Os described above, MapReduce takes a lot of time to process large volumes of data.
    2. MapReduce supports only batch processing and is unsuitable for real-time data processing.
    3. It is unsuitable for even trivial join-like transformations.
    4. It is unfit for large data on a network and also for OLTP data.
    5. It is also not suitable for graph processing and interactive workloads.
    Spark overcomes all these limitations and is able to process data faster, even when some of it spills to local disk.
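    The key difference is that Spark can keep intermediate results in memory and reuse them across computations, instead of writing them back to disk after every step as MapReduce does. A minimal sketch (assuming PySpark is installed; the numbers are arbitrary illustration):

    ```python
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.master("local[*]").appName("cache-demo").getOrCreate()
    sc = spark.sparkContext

    # An RDD pipeline: unlike MapReduce, the intermediate result can stay
    # in memory and be reused by several downstream computations.
    nums = sc.parallelize(range(1, 101))
    squares = nums.map(lambda n: n * n).cache()  # materialized once, kept in memory

    total = squares.reduce(lambda a, b: a + b)                  # first action: computes and caches
    count_even = squares.filter(lambda n: n % 2 == 0).count()   # second action: reuses the cache

    spark.stop()
    ```

    In MapReduce, each of those two computations would re-read the input from disk; in Spark, the cached RDD serves both.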

    Why Apache Spark?

    Numerous advantages of Spark have made it a market favorite.

    Let’s discuss one by one.

    1. Speed: Spark extends the MapReduce model to support computations like stream processing and interactive queries.
    2. A single engine for multiple processes and tools: Spark covers multiple workloads that, in a traditional system, used to require different distributed systems. This makes it easy to combine different processing types and simplifies tool management.
    3. Unification: Developers have to learn only one platform, unlike the multiple languages and tools of a traditional system.
    4. Support for different resource managers: Spark supports the Hadoop HDFS file system and YARN for resource management, but YARN is not the only resource manager it supports; it also works on Mesos and with its own standalone scheduler.
    5. Support for cutting-edge innovation: Spark provides capabilities and support for an array of new-age technologies, ranging from built-in machine learning libraries and visualization tools to near-real-time processing (in a way the biggest challenge of the pre-Spark era), and it integrates seamlessly with deep learning frameworks like TensorFlow. This enables Spark to provide innovative solutions for new-age use cases.

    Spark can access diverse data sources and make sense of them all, which is why it is trending in the market over other available cluster computing software.

    Who uses Apache Spark


    Listed below are a few use cases of Apache Spark:

    1. Analytics - Spark can be very useful when building real-time analytics from a stream of incoming data.

    2. E-commerce - Information about real-time transactions can be passed to streaming algorithms such as k-means clustering or alternating least squares (for collaborative filtering). The results can be combined with data from other sources, like social media profiles, product reviews on forums, and customer comments, to enhance recommendations to customers based on new trends.

    Shopify: At Shopify, we underwrite credit card transactions, exposing us to the risk of losing money. We need to respond to risky events as they happen, and a traditional ETL pipeline just isn’t fast enough. Spark Streaming is an incredibly powerful real-time data processing framework based on Apache Spark. It allows you to process real-time streams like Apache Kafka using Python with incredible simplicity.

    Alibaba: Alibaba Taobao operates one of the world’s largest e-commerce platforms. We collect hundreds of petabytes of data on this platform and use Apache Spark to analyze these enormous amounts of data.

    3. Healthcare Industry –
    Healthcare has multiple use cases involving unstructured data to be processed in real time, ranging from image formats like scans to specific medical industry standards and wearable tracking devices. Many healthcare providers are keen on using Spark on patient records to build a 360-degree view of the patient and make accurate diagnoses.

    MyFitnessPal: MyFitnessPal needed to deliver a new feature called “Verified Foods.” The feature demanded a faster pipeline to execute a number of highly sophisticated algorithms. Their legacy non-distributed Java-based data pipeline was slow, did not scale, and lacked flexibility.


    You can also learn more about use cases of Apache Spark here.

    Career Benefits

    Career benefits of Spark for you as an individual:

    Apache Spark developers earn among the highest average salaries of all programmers. In its 2015 Data Science Salary Survey, O’Reilly found strong correlations between those who used Apache Spark and those who were paid more money. In one of its models, using Spark added more than $11,000 to the median salary.

    If you’re considering switching to this extremely in-demand career, then taking up Apache Spark training will be an added advantage. Learning Spark will give you a steep competitive edge and can land you among the market’s best-paying jobs with top companies. Spark has gained enough adherents over the years to place it high on the list of fastest-growing skills; data scientists and sysadmins have evaluated the technology and clearly liked what they saw. April’s Dice Report explored the fastest-growing technology skills, based on an analysis of job postings and data from Dice’s annual salary survey, with growth measured year over year in job postings.


    Benefits of implementing Spark in your organization:

    Apache Spark is now more than a decade old but still going strong. Due to its lightning-fast processing and the numerous other advantages discussed so far, Spark is still the first choice of many organizations.
    Spark is considered to be the most popular open-source project on the planet, with more than 1,000 contributors from 250-plus organizations, according to Databricks.



    To sum up, Spark simplifies the computationally intensive task of processing high volumes of real-time or batch data. It integrates seamlessly with complex capabilities such as machine learning and graph algorithms. In short, Spark brings big data processing (which was earlier reserved for giant companies like Google) to the masses.

    Do let us know how your learning experience was, through comments below.
    Happy Learning!!!


    Dr. Manish Kumar Jain

    International Corporate Trainer

    Dr. Manish Kumar Jain is an accomplished author, international corporate trainer, and technical consultant with 20+ years of industry experience. He specializes in cutting-edge technologies such as ChatGPT, OpenAI, generative AI, prompt engineering, Industry 4.0, web 3.0, blockchain, RPA, IoT, ML, data science, big data, AI, cloud computing, Hadoop, and deep learning. With expertise in fintech, IIoT, and blockchain, he possesses in-depth knowledge of diverse sectors including finance, aerospace, retail, logistics, energy, banking, telecom, healthcare, manufacturing, education, and oil and gas. Holding a PhD in deep learning and image processing, Dr. Jain's extensive certifications and professional achievements demonstrate his commitment to delivering exceptional training and consultancy services globally while staying at the forefront of technology.
