
Top In-demand Jobs During Coronavirus Pandemic

With global COVID-19 cases crossing 20 million and over 281,000 jobs lost in the US alone, the impact of the coronavirus pandemic has already been catastrophic for workers worldwide. While tourism and the supply chain industries are the hardest hit, the healthcare and transportation sectors have felt less severe heat. According to a Goldman Sachs report, the number of unemployed individuals in the US could climb to 2.25 million. However, despite these alarming figures, NBC News states that this is merely 20% of the total unemployment in the US. Job portals like LinkedIn, Shine, and Monster are also witnessing continued hiring for specific roles. So, what are the roles defining the pandemic job sector?

Healthcare specialists
For obvious reasons, the demand for healthcare specialists has spiked globally. This includes doctors, nurses, surgical technologists, virologists, diagnostic technicians, pharmacists, and medical equipment providers.

Logistics personnel
This largely involves shipping and delivery companies, covering a broad range of employees, from warehouse managers and transportation-oriented roles to packaging and fulfillment jobs. Presently, Amazon is hiring over 100,000 workers for its operations while adjusting salaries and shift timings to accommodate the situation.

Online learning companies
Teaching and learning are at the forefront of the current global scenario. With most individuals either working from home or anticipating the loss of a job, many are resorting to upskilling or acquiring new skills to embrace broader job roles. The demand for teachers, trainers, and academic counselors for these courses has also shot up. Remote learning facilities and online upskilling have made these courses much more accessible to individuals as well.

Remote meeting and communication companies
Remote working depends heavily on communication and meeting tools such as Zoom, Slack, and Microsoft Teams. The efficiency of these tools, and the effectiveness of managing projects through remote communication, have enabled several industries to sustain operations through the global pandemic. Even project management is taking an all-new shape thanks to these modern tools. Moreover, several schools are also relying on these tools to continue education through online classes.

Psychologists and mental health businesses
Many companies and individuals are seeking help to cope with the crisis, which has created a surge in the demand for psychologists. Businesses like PwC and Starbucks have introduced or enhanced their mental health coaching. Mental health and wellness apps like Headspace have seen a 400% increase in demand from top companies like Adobe and GE.

Data analysts
Job portals like Shine have seen a surge in the hiring of data analysts. The simple reason is the constant demand for information about the coronavirus: its status, its impact on the global economy, different markets, and many other industries. Companies are also hiring data analysts rapidly to study current customer behavior and gauge public sentiment.

How to find a job during the coronavirus pandemic
Whether you are looking for a job change, have already faced the heat of the coronavirus, or are at risk of losing your job, here are some ways to stay afloat despite the trying times.
- Be proactive on job portals, especially professional networking sites like LinkedIn, to expand your network
- Practice phone and video job interviews
- Expand your work portfolio by taking on more freelance projects
- Pick up new skills by leveraging the online courses available
- Stay focused on your current job, even in uncertain times

Job security is of paramount importance during a global crisis like this. Andrew Seaman, an editor at LinkedIn, notes that recruiters are going by the 'business as usual' approach despite concerns about COVID-19. The only change, he remarks, is that interviews may be conducted over a video call rather than in person. If the outbreak is not contained soon enough, though, hiring may eventually take a hit.

5 Big Data Challenges in 2020

The year 2019 saw some enthralling changes in the volume and variety of data across businesses worldwide, and the surge in data generation is only going to continue. Foresighted enterprises are the ones that will be able to leverage this data for maximum profitability through sound data processing and handling techniques. With the rise in opportunities related to Big Data, the challenges are also bound to increase.

Below are the 5 major Big Data challenges that enterprises face in 2020:

1. The Need for More Trained Professionals
Research shows that since 2018, 2.5 quintillion bytes (2.5 exabytes) of data have been generated every day. The previous two years have seen significantly greater increases in the number of streams, posts, searches and writings, which have cumulatively produced an enormous amount of data, and this number is only growing by the day. One study has predicted that by 2025, each person will be generating a bewildering 463 exabytes of data every day.

A report by Indeed showed a 29 percent year-over-year surge in the demand for data scientists, and a 344 percent increase since 2013. However, searches by job seekers skilled in data science continue to grow at a snail's pace, at 14 percent. In August 2018, LinkedIn reported that the US alone needed 151,717 professionals with data science skills. This, along with a 15 percent discrepancy between job postings and job searches on Indeed, makes it quite evident that the demand for data scientists outstrips supply. The greatest data processing challenge of 2020 is the lack of qualified data scientists with the skill set and expertise to handle this gigantic volume of data.

2. Inability to Process Large Volumes of Data
Of the 2.5 quintillion bytes produced daily, a major portion of the raw data is usually irrelevant, and around 60 percent of workers spend days making sense of it. About 43 percent of companies still struggle with, or aren't fully satisfied by, the filtered data.

3. Syncing Across Data Sources
Once you import data into Big Data platforms, you may realize that data copies migrated from a wide range of sources on different rates and schedules can rapidly get out of synchronization with the originating systems. This implies two things: one, the data coming from one source is out of date when compared with another source; two, it undermines the commonality of data definitions, concepts, metadata and the like. Traditional data management and data warehouses, and the sequence of data transformation, extraction and migration, all give rise to situations in which data risks becoming unsynchronized.

4. Lack of Adequate Data Governance
Data collected from multiple sources should have some correlation so that it can be considered usable by enterprises. In a recent Big Data Maturity Survey, the lack of stringent data governance was recognized as the fastest-growing area of concern. Organizations often have to set up the right personnel, policies and technology to ensure that data governance is achieved, and this in itself can be a challenge for a lot of enterprises.

5. Threat of Compromised Data Security
While Big Data opens plenty of opportunities for organizations to grow their businesses, there is an inherent risk of compromised data security. Some of the biggest cyberattacks on big players like Panera Bread, Facebook, Equifax and Marriott have brought to light the fact that literally no one is immune to cyberattacks.
As far as Big Data is concerned, data security should be high on enterprises' priority lists, as most modern businesses are vulnerable to fake data generation, especially if cybercriminals gain access to a business's database. However, regulating access is one of the primary challenges for companies that frequently work with large sets of data. Even the way Big Data systems are designed makes it harder for enterprises to ensure data security: working with data distributed across multiple systems is both cumbersome and risky.

Overcoming Big Data challenges in 2020
Whether it's ensuring data governance and security or hiring skilled professionals, enterprises should leave no stone unturned when it comes to overcoming the above Big Data challenges. Several courses and online certifications are available to specialize in tackling each of these challenges. Training existing personnel in the analytical tools of Big Data will help businesses unearth insightful data about customers, and frameworks related to Big Data can help in the qualitative analysis of raw information.

How Big is ‘Big Data’, Anyway?

When I was introduced to the data world in my first corporate induction training, about 10 years ago, I was still processing the difference between data and information. The following helped me understand it:

- Data: raw information (unprocessed facts and figures) without any context, e.g. the number 20.
- Information: structured data grouped together so that it can be interpreted, e.g. $20 for a toy.
- Knowledge: the combination of information, experience and insight that may benefit the individual or the organisation, e.g. $20 for a toy in a Black Friday sale at a mall.
- Wisdom: knowledge becomes wisdom when one can assimilate and apply it to make the right decisions, e.g. someone who wants to buy a toy will wait for the Black Friday sale to get it at a cheaper price.

By the time I started understanding the differences above, the term 'Big Data' was already making it big, and the obvious question in my mind was: when does 'data' become 'big data'? I then made an attempt to understand how big data has to be to be called big data, and here I have a big revelation to make for all of you reading this article: 'Big Data' is actually a misleading term. It has little to do with the absolute bigness of data; it is a term that needs to be understood in perspective, in relative terms.

The simplest relevant description I could find is this: big data is data that cannot be stored with traditional storage and cannot be processed with traditional methods within a short period of time (and these references remain valid as time advances). But this is not a textbook definition, or the only definition, of big data. Interestingly, one person's big data can be another's traditional data, so it truly cannot be bounded in words but can loosely be described through numerous examples. I am sure that by the end of the article you will be able to answer the question for yourself. Let's start.

Do you know? NASA researchers Michael Cox and David Ellsworth used the term 'big data' for the first time, to describe a familiar challenge in the 1990s: supercomputers generating massive amounts of information (in Cox and Ellsworth's case, simulations of airflow around aircraft) that could not be processed and visualized. If you go through a brief history of big data, you would find that data which did not fit into memory or disk was called a 'big data problem' back in 1997.

As the years passed, innovations kept coming and disruptions were made, so the data universe keeps growing all the time. Let's look at a few widely available statistics for big data (collected around 2017 or before):

- On average, people send about 500 million tweets per day.
- Snapchat users share 527,760 photos in a minute.
- Instagram users post 46,740 photos in a minute.
- More than 120 professionals join LinkedIn in a minute.
- Users watch 4,146,600 YouTube videos in a minute.
- The average U.S. customer uses 1.8 gigabytes of data per month on his or her cell phone plan.
- Amazon sells 600 items per second.
- On average, each person who uses email receives 88 emails per day and sends 34. That adds up to more than 200 billion emails each day.
- MasterCard processes 74 billion transactions per year.
- Commercial airlines make about 5,800 flights per day.

You might be interested in reading Domo's Data Never Sleeps 5.0 report for the numbers generated every minute of the day. Understanding that the above stats are probably 1.5-2 years old, and that data is ever-growing, helps establish the fact that 'big data' is a moving target, and…
In short: today's big data is tomorrow's small data.

Now that we have some knowledge about transactions, tweets and snaps in a day, let's also understand how much data all these one-minute quickies are generating; let's talk about some volumes too. After all, volume is one of the characteristics of big data, though mind you, not the only one. It is believed that in a single day the world produces 2.5 quintillion bytes (about 2.5 billion gigabytes) of data; in layman's terms, this is the equivalent of everyone in the world downloading 60 episodes of Breaking Bad, in HD, 20 times! [Source: VCloud 2012] According to estimates, the volume of data worldwide doubles every 1.2 years.

IDC predicts that the collective sum of the world's data will grow from 33 zettabytes this year to 175 ZB by 2025, a compound annual growth rate of 61 percent. The 175 ZB figure represents a 9 percent increase over last year's prediction of data growth by 2025 (as per the report published in December 2018).

But do you know how much 1 zettabyte of data is? One zettabyte is equal to one sextillion bytes, or 10^21 (1,000,000,000,000,000,000,000) bytes; in other words, one zettabyte is roughly equal to a trillion gigabytes.

Fun fact: there is a legitimate term coined for today's era: the Zettabyte Era. The Zettabyte Era can be understood as an age of growth of all forms of digital data that exist in the world, which includes the public Internet, but also all other forms of digital data, such as stored footage from security cameras or voice data from cell-phone calls. You should check out the infographic by economywatch (taken from SearchEngineJournal) to understand how much data a zettabyte holds, putting it into context with current data storage capabilities and usage.

Today's big data is generated from three main sources:

- People-generated: social media uploads, mails, etc.
- Machine-generated: M2M (machine-to-machine) interactions, IoT devices, etc.
- Business-generated: data generated and stored in today's OLTP systems, OLAP systems, data warehouses, data marts, reports, and operational data throughout the enterprise or organization.

Various analytics tools available in the market today help in solving big data challenges by providing ways to store this data, process it, and extract valuable insights from it.

As we discussed, big data is a moving target as time advances. It is also interesting to note that even today, data which is not of huge size but is difficult to process, or of relatively smaller volume, would still be categorized as big data: for example, unstructured data in emails or from social media platforms, or data which must be processed in real time or near-real time. All the examples we have seen so far are big data.

But it would be a mistake to think of big data only as data that is analyzed using Hadoop, Spark or another complex analytics platform. As big data is a moving target and ever-growing, and as disruptive new sources of data are introduced every day, newer tools will keep being invented to process this data; hence big data cannot remain merely a function of the tools being used to analyze it. To conclude, as discussed at the start of the article, it is still appropriate and reasonable to say that this moving target called big data will always challenge storage, processing methods, and the ability to process it within a short period. So big data is a function of volume and/or time and/or storage and/or variety.
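To keep the units above straight, here is the back-of-the-envelope arithmetic behind the figures quoted; this restatement is mine, not from the original sources:

```latex
\[
\begin{aligned}
1\,\mathrm{ZB} &= 10^{21}\,\text{bytes} = 10^{12}\,\mathrm{GB} \quad \text{(a trillion gigabytes)}\\
2.5\ \text{quintillion bytes} &= 2.5\times 10^{18}\,\text{bytes} = 2.5\,\mathrm{EB} = 0.0025\,\mathrm{ZB}\ \text{per day}
\end{aligned}
\]
```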
It was fun and exciting to discover the different aspects hidden in the term 'BIG DATA', and I thoroughly enjoyed solving this mystery. Did you enjoy solving it too? Do let us know how your experience was through the comments below. Happy Learning!!!

Apache Spark Pros and Cons

Apache Spark: The New 'King' of Big Data

Apache Spark is a lightning-fast unified analytics engine for big data and machine learning, and the largest open-source project in data processing. Since its release, it has met enterprises' expectations for querying and data processing, and above all for generating analytics reports, in a better and faster way. Internet giants like Yahoo, Netflix, and eBay have used Spark at large scale, and Apache Spark is widely considered the future of the Big Data platform.

Pros and Cons of Apache Spark

- Advantages: speed; ease of use; advanced analytics; dynamic in nature; multilingual; powerful; increased access to Big Data; demand for Spark developers.
- Disadvantages: no automatic optimization process; no file management system of its own; fewer algorithms; small files issue; time-based window criteria only; doesn't suit a multi-user environment.

Apache Spark has transformed the world of Big Data. It is the most active big data tool reshaping the big data market. This open-source distributed computing platform offers more powerful advantages than many proprietary solutions, and its diverse advantages make it a very attractive big data framework with huge potential to contribute to big data-related business in the industry. Let's now have a look at some of the common benefits of Apache Spark:

1. Speed: When it comes to Big Data, processing speed always matters, and Apache Spark is wildly popular with data scientists because of its speed: Spark can be up to 100x faster than Hadoop for large-scale data processing. Apache Spark uses an in-memory (RAM) computing model, whereas Hadoop reads from and writes to local disk storage. Spark can handle multiple petabytes of clustered data across more than 8,000 nodes at a time.

2. Ease of Use: Apache Spark carries easy-to-use APIs for operating on large datasets. It offers over 80 high-level operators that make it easy to build parallel apps (the short sketch after this list gives a feel for these APIs).

3. Advanced Analytics: Spark supports not only 'map' and 'reduce' but also machine learning (ML), graph algorithms, streaming data, SQL queries, and more.

4. Dynamic in Nature: With Apache Spark and its 80-plus high-level operators, you can easily develop parallel applications.

5. Multilingual: Apache Spark supports many languages for writing code, such as Python, Java, and Scala.

6. Apache Spark is Powerful: Apache Spark can handle many analytics challenges because of its low-latency, in-memory data processing capability. It has well-built libraries for graph analytics algorithms and machine learning.

7. Increased Access to Big Data: Apache Spark is opening up various opportunities for big data. IBM, for instance, has announced that it will educate more than 1 million data engineers and data scientists on Apache Spark.

8. Demand for Spark Developers: Apache Spark not only benefits your organization but you as well. Spark developers are so in demand that companies offer attractive benefits and flexible work timings just to hire experts skilled in Apache Spark. As per PayScale, the average salary for a data engineer with Apache Spark skills is $100,362. People who want to make a career in big data can learn Apache Spark: you will find various ways to bridge the skills gap for getting data-related jobs, but the best way is to take formal training that provides hands-on work experience and lets you learn through hands-on projects.

9. Open-Source Community: The best thing about Apache Spark is the massive open-source community behind it.
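As a taste of the ease-of-use point above, here is a minimal PySpark sketch of a parallel word count. This is an illustrative example of ours, not code from the original article, and the file path is hypothetical:

```python
from pyspark.sql import SparkSession

# Start (or reuse) a local Spark session
spark = SparkSession.builder.appName("wordcount-sketch").getOrCreate()

# Read a text file into an RDD and count words with a handful of
# high-level operators: flatMap -> map -> reduceByKey
lines = spark.sparkContext.textFile("data/sample.txt")  # hypothetical path
counts = (
    lines.flatMap(lambda line: line.split())   # split lines into words
         .map(lambda word: (word, 1))          # pair each word with a count of 1
         .reduceByKey(lambda a, b: a + b)      # sum the counts per word in parallel
)

print(counts.take(10))  # peek at ten (word, count) pairs
spark.stop()
```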
Apache Spark is great, but it's not perfect. How?

Apache Spark is a lightning-fast cluster computing technology designed for fast computation and widely used by industries, but it also has its ugly aspects. Here are some challenges that developers face when working on big data with Apache Spark. Read through the following limitations in detail so that you can make an informed decision about whether this platform will be the right choice for your upcoming big data project.

1. No Automatic Optimization Process: With Apache Spark, you need to optimize your code manually, since it has no automatic code optimization process. This turns into a disadvantage as other technologies and platforms move towards automation.

2. File Management System: Apache Spark doesn't come with its own file management system; it depends on other platforms like Hadoop or cloud-based storage.

3. Fewer Algorithms: Spark's machine learning library, MLlib, lags behind in terms of the number of available algorithms.

4. Small Files Issue: One more reason to blame Apache Spark is its trouble with small files, which developers come across when using Apache Spark along with Hadoop. The Hadoop Distributed File System (HDFS) is designed for a limited number of large files rather than a large number of small files.

5. Window Criteria: Data in Apache Spark is divided into small batches of a predefined time interval, so Spark won't support record-based window criteria; it offers time-based window criteria instead.

6. Doesn't Suit a Multi-User Environment: Apache Spark doesn't fit a multi-user environment, as it is not capable of handling high user concurrency.

Conclusion

To sum up, in light of the good, the bad and the ugly, Spark is a conquering tool when we view it from the outside. We have seen a drastic improvement in performance and a decrease in failures across various projects executed in Spark, and many applications are being moved to Spark for the efficiency it offers to developers. Using Apache Spark can give any business a boost and help foster its growth. Learn it, and a bright future awaits you too!

Fundamentals of Apache Spark

Introduction

Before getting into the fundamentals of Apache Spark, let's understand what Apache Spark really is. Following is an authentic one-liner definition:

Apache Spark is a fast, general-purpose cluster computing system.

You will find multiple definitions when you search for the term Apache Spark. All of them give a similar gist, just in different words. Let's understand the special keywords that describe Apache Spark.

Fast: As Spark uses in-memory computing, it's fast; it can run queries up to 100x faster. (We will get to the details of the architecture that makes this possible a little later in the article.) You will find the keywords 'fast' and/or 'in-memory' in all the definitions.

General-purpose: Apache Spark is a unified framework. It provides one execution model for all tasks, so it is very easy for developers to learn, and they can work with multiple APIs easily. Spark offers over 80 high-level operators that make it easy to build parallel apps, and one can use it interactively from the Scala, Python, R, and SQL shells. Spark powers a stack of libraries including SQL and DataFrames, MLlib for machine learning, GraphX, and Spark Streaming, and you can combine these libraries seamlessly in the same application.

Cluster computing: efficient processing of data on a set of computers (think commodity hardware) or distributed systems; a few definitions also call Spark a parallel data processing engine. Spark is utilized for big data analytics and related processing.

One more important keyword associated with Spark is open source: it was open-sourced in 2010 under a BSD license. Spark (and its RDD abstraction, in the earliest version of what is seen today) was developed in 2012 in response to limitations in the MapReduce cluster computing paradigm, and Spark is commonly seen as an in-memory replacement for MapReduce. Since its release, Apache Spark has seen rapid adoption due to the characteristics briefly discussed above.

Who should go for Apache Spark

Before trying to find out whether Apache Spark is for you, or whether you have the right skill set, it's important to look at Spark's generality in further depth. Apache Spark consists of Spark Core and a set of libraries. The core is the distributed execution engine, and the Java, Scala, and Python APIs offer a platform for distributed ETL application development. Additional libraries, built atop the core, allow diverse workloads for streaming, SQL, and machine learning. As Spark provides these multiple components, it's evident that Spark is developed and widely utilized for big data and analytics.

Professionals who should learn Apache Spark

If you aspire to one of the following professions, or simply have an interest in data and insights, knowledge of Spark will prove useful:

- Data Scientists
- Data Engineers

Prerequisites of learning Apache Spark

Apache Spark is the number one framework in big data, so for knowledge seekers looking for Spark training, it is important to note that there are a few prerequisites. Before getting into big data, you should have minimum knowledge of:

- Any one of the programming languages: core Python or Scala.
- Spark installations can be done on any platform, but the framework is similar to Hadoop, so having knowledge of HDFS and YARN is highly recommended. Knowledge of Hive is an added advantage, but not mandatory.
- Basic knowledge of SQL: mainly SELECT ... FROM, joins, and GROUP BY are highly recommended (a short sketch follows this list).
- Optionally, any cloud technology like AWS, recommended for those who want to work with production-like environments.
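As a hedged illustration of that SQL prerequisite, here is the flavor of query the list above has in mind, expressed through Spark's own SQL interface; the table and column names are hypothetical:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sql-prereq-sketch").getOrCreate()

# Two tiny in-memory tables (hypothetical orders/customers data)
orders = spark.createDataFrame(
    [(1, "alice", 20.0), (2, "bob", 35.0), (3, "alice", 15.0)],
    ["order_id", "customer", "amount"],
)
orders.createOrReplaceTempView("orders")

customers = spark.createDataFrame(
    [("alice", "US"), ("bob", "IN")], ["customer", "country"]
)
customers.createOrReplaceTempView("customers")

# SELECT ... FROM, a join, and GROUP BY: the three constructs named above
spark.sql("""
    SELECT c.country, SUM(o.amount) AS total_spend
    FROM orders o
    JOIN customers c ON o.customer = c.customer
    GROUP BY c.country
""").show()

spark.stop()
```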
System requirements of Apache Spark

The official Apache Spark site gives the following recommendations (visit it for further details):

Storage system: There are a few ways to set this up:
- Spark can run on the same nodes as HDFS: a Spark standalone cluster can be installed on the same nodes, with Spark and Hadoop memory and CPU usage configured to avoid interference.
- Hadoop and Spark can execute under a common resource manager (e.g. YARN).
- Spark can execute on the same local area network as HDFS but on separate nodes.
- If the requirement is quick response and low latency from data stores, execute compute jobs on nodes separate from the storage nodes.

Local disks: Typically 4-8 disks per node, configured without RAID. If the underlying OS is Linux, mount the disks with the noatime option, and in the Spark environment configure the spark.local.dir variable to be a comma-separated list of local disks (for HDFS, these can be the same disks HDFS uses). A minimal sketch of this setting appears at the end of this section.

Memory: From a minimum of 8 GB up to hundreds of GB of memory per machine; the recommendation is to allocate at most 75% of the memory to Spark.

Network: 10 Gb or faster network.

CPU cores: 8-16 cores per machine.

However, for training and learning purposes, just to taste Spark, there are two available options: run it locally, or use AWS EMR (or any other cloud computing service). For learning purposes, a system with a minimum of 4 GB RAM and at least 30 GB of disk may prove enough.

History of Apache Spark

Spark was primarily developed to overcome the limitations of MapReduce. Versioning: Spark's initial version was version 0; version 1.6 is assumed to be a stable version and is used in multiple commercial corporate projects; version 2.3 is the latest available version.

MapReduce is a cluster computing paradigm which forces a particular linear data flow structure on distributed programs: MapReduce programs read input data from disk, map a function across the data, reduce the results of the map, and store the reduction results back on disk. Due to the multiple copies of data and the multiple rounds of I/O described here, MapReduce takes a long time to process large volumes of data. MapReduce can do only batch processing and is unsuitable for real-time data processing; it is unsuitable for trivial, join-like transformations; it is unfit for large data on a network, for OLTP data, and for graphics and interactive data. Spark overcomes all these limitations and is able to process data faster, on local disk as well.

Why Apache Spark?

Numerous advantages have made Spark a market favorite. Let's discuss them one by one:

- Speed: Spark extends the MapReduce model to support computations like stream processing and interactive queries.
- A single combination of processes and tools: Spark covers multiple workloads which, in a traditional setup, would require different distributed systems; this makes it easier to combine different processing types and manage tools.
- Unification: developers have to learn only one platform, unlike the multiple languages and tools of a traditional system.
- Support for different resource managers: Spark supports Hadoop's HDFS and YARN for resource management, but YARN is not the only resource manager it supports: Spark also works on Mesos and with any standalone scheduler, such as Spark's own standalone resource manager.
- Support for cutting-edge innovation: Spark provides capabilities and support for an array of new-age technologies, ranging from built-in machine learning libraries and visualization tools to support for near-real-time processing (in a way the biggest challenge of the pre-Spark era), and it integrates seamlessly with deep learning frameworks like TensorFlow. This enables Spark to provide innovative solutions for new-age use cases.

Spark can access diverse data sources and make sense of them all, and hence it's trending in the market over other cluster computing software.
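Circling back to the system requirements above, here is a minimal sketch of how the spark.local.dir setting can be supplied when building a session. The disk paths are hypothetical, and note that cluster managers such as YARN may override this setting with their own variables:

```python
from pyspark.sql import SparkSession

# Point Spark's scratch space at a comma-separated list of local disks
# (hypothetical mount points; substitute your own machines' paths).
spark = (
    SparkSession.builder
    .appName("local-dir-sketch")
    .config("spark.local.dir", "/mnt/disk1/spark,/mnt/disk2/spark")
    .getOrCreate()
)

print(spark.conf.get("spark.local.dir"))  # confirm the configured value
spark.stop()
```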
Who uses Apache Spark

Listed below are a few use cases of Apache Spark:

1. Analytics: Spark can be very useful when building real-time analytics from a stream of incoming data.

2. E-commerce: Information about real-time transactions can be passed to streaming clustering algorithms like alternating least squares or k-means. The results can then be combined with data from other sources, such as social media profiles, product reviews on forums and customer comments, to enhance recommendations to customers based on new trends.
- Shopify: "At Shopify, we underwrite credit card transactions, exposing us to the risk of losing money. We need to respond to risky events as they happen, and a traditional ETL pipeline just isn't fast enough. Spark Streaming is an incredibly powerful real-time data processing framework based on Apache Spark. It allows you to process real-time streams like Apache Kafka using Python with incredible simplicity."
- Alibaba: "Alibaba Taobao operates one of the world's largest e-commerce platforms. We collect hundreds of petabytes of data on this platform and use Apache Spark to analyze these enormous amounts of data."

3. Healthcare: Healthcare has multiple use cases for unstructured data to be processed in real time, with data ranging from image formats (scans, etc.) to specific medical industry standards and wearable tracking devices. Many healthcare providers are keen on using Spark on patients' records to build a 360-degree view of the patient and make more accurate diagnoses.
- MyFitnessPal: MyFitnessPal needed to deliver a new feature called "Verified Foods." The feature demanded a faster pipeline to execute a number of highly sophisticated algorithms, and their legacy non-distributed Java-based data pipeline was slow, did not scale, and lacked flexibility.

Here are a few other examples from industry leaders:
- Regeneron: the future of drug discovery with genomics at scale, powered by Spark
- Zeiss: using Spark Structured Streaming for predictive maintenance
- Devon Energy: scaling geographic analytics with Spark GraphX

You can also learn more about use cases of Apache Spark here.

Career Benefits

Career benefits of Spark for you as an individual: Apache Spark developers earn among the highest average salaries of all programmers. In its 2015 Data Science Salary Survey, O'Reilly found strong correlations between those who used Apache Spark and those who were paid more money; in one of its models, using Spark added more than $11,000 to the median salary. If you're considering switching to this extremely in-demand career, taking up Apache Spark training will be an added advantage. Learning Spark will give you a steep competitive edge and can land you the market's best-paying jobs with top companies.
Spark has gained enough adherents over the years to place it high on the list of fastest-growing skills; data scientists and sysadmins have evaluated the technology and clearly liked what they saw. April's Dice Report explored the fastest-growing technology skills, based on an analysis of job postings and data from Dice's annual salary survey, with percentages based on year-over-year growth in job postings.

Benefits of implementing Spark in your organization: Apache Spark is now a decade old but still going strong. Due to its lightning-fast processing and the numerous other advantages discussed so far, Spark is still the first choice of many organizations. Spark is considered to be the most popular open-source project on the planet, with more than 1,000 contributors from 250-plus organizations, according to Databricks.

Conclusion

To sum up, Spark helps to simplify the computationally intensive task of processing high volumes of real-time or batch data. It can seamlessly integrate with complex capabilities such as machine learning and graph algorithms. In short, Spark brings exclusive Big Data processing, which was earlier only for giant companies like Google, to the masses. Do let us know how your learning experience was, through the comments below. Happy Learning!!!

Apache Kafka Vs Apache Spark: Know the Differences

"A new breed of 'Fast Data' architectures has evolved to be stream-oriented, where data is processed as it arrives, providing businesses with a competitive advantage." - Dean Wampler (renowned author of many big data technology-related books)

Dean Wampler makes an important point in one of his webinars: the demand for stream processing is increasing every day. The main reason is that processing mere volumes of data is not sufficient; processing data at faster rates, and deriving insights from it in real time, is essential so that organizations can react to changing business conditions in real time. Hence, there is a need to understand the concept of stream processing and the technology behind it.

So, what is stream processing? Think of streaming as an unbounded, continuous, real-time flow of records; processing these records in a similar timeframe is stream processing. AWS (Amazon Web Services) defines streaming data as data that is generated continuously by thousands of data sources, which typically send the data records simultaneously and in small sizes (on the order of kilobytes). This data needs to be processed sequentially and incrementally, on a record-by-record basis or over sliding time windows, and is used for a wide variety of analytics, including correlations, aggregations, filtering, and sampling. In the stream processing method, continuous computation happens as the data flows through the system.

Stream processing is highly beneficial if the events you wish to track happen frequently and close together in time. It is also best utilized if the event needs to be detected right away and responded to quickly. There is a subtle difference between stream processing, real-time processing (near real-time), and complex event processing (CEP). Let's quickly look at examples to understand the difference:

- Stream processing: useful for tasks like fraud detection and cybersecurity. If transaction data is stream-processed, fraudulent transactions can be identified and stopped before they are even complete.
- Real-time processing: if event time is very relevant and latencies in the seconds range are completely unacceptable, it's called real-time (near real-time) processing, for example a flight control system for space programs.
- Complex event processing (CEP): CEP utilizes event-by-event processing and aggregation, for example on potentially out-of-order events from a variety of sources, often with large numbers of rules or business logic.

We have multiple tools available to accomplish the above-mentioned stream, real-time or complex event processing: Spark Streaming, Kafka Streams, Flink, Storm, Akka, and Structured Streaming, to name a few. We will try to understand Spark Streaming and Kafka Streams in depth further in this article, as historically these have occupied significant market share.

Apache Kafka Streams

Kafka is actually a message broker with really good performance, so that all your data can flow through it before being redistributed to applications; Kafka works as a data pipeline. Typically, Kafka Streams supports per-second stream processing with millisecond latency. Kafka Streams is a client library for processing and analyzing data stored in Kafka, and it can process data in two ways:

- Kafka -> Kafka: When Kafka Streams performs aggregations, filtering, etc. and writes the data back to Kafka, it achieves amazing scalability, high availability, and high throughput, if configured correctly.
It also does not do mini-batching; this is "real" streaming.
- Kafka -> external systems ("Kafka -> database" or "Kafka -> data science model"): Typically, any streaming library (Spark, Flink, NiFi, etc.) uses Kafka as the message broker: it reads the messages from Kafka and then breaks them into mini time windows to process them further.

In a representative view of Kafka streaming, the sources could be event logs, webpage events, and so on, while the databases or models would be accessed via other streaming applications, which in turn use Kafka Streams.

Kafka Streams is built upon important stream processing concepts, such as properly distinguishing between event time and processing time, windowing support, and simple (yet efficient) management of application state. It is based on many concepts already contained in Kafka, such as scaling by partitioning. Also, for this reason, it comes as a lightweight library that can be integrated into an application. The application can then be operated as desired: standalone, in an application server, as a Docker container, or directly via a resource manager such as Mesos.

Why you will love using dedicated Apache Kafka Streams:
- Elastic, highly scalable, fault-tolerant
- Deploys to containers, VMs, bare metal, cloud
- Equally viable for small, medium, and large use cases
- Fully integrated with Kafka security
- Write standard Java and Scala applications
- Exactly-once processing semantics
- No separate processing cluster required
- Develop on Mac, Linux, Windows

Apache Spark Streaming

Spark Streaming receives live input data streams, collects data for some time, builds an RDD, and divides the data into micro-batches, which are then processed by the Spark engine to generate the final stream of results, also in micro-batches. Spark Streaming provides a high-level abstraction called a discretized stream, or DStream, which represents a continuous stream of data. DStreams can be created either from input data streams from sources such as Kafka, Flume, and Kinesis, or by applying high-level operations on other DStreams. Internally, a DStream is represented as a sequence of RDDs; think of an RDD as the underlying concept for distributing data over a cluster of computers.

Why you will love using Apache Spark Streaming: It makes it very easy for developers to use a single framework to satisfy all their processing needs. They can use MLlib (Spark's machine learning library) to train models offline and use them directly online for scoring live data in Spark Streaming; in fact, some models perform continuous, online learning and scoring. And not all real-life use cases need data to be processed in real real-time; a few seconds of delay is tolerated in exchange for having a unified framework like Spark Streaming, which handles large volumes of data and provides a range of capabilities by integrating with other Spark tools.
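To make the micro-batch DStream model described above concrete, here is the classic streaming word-count sketch in PySpark's DStream API. This is a minimal example of ours (the host and port are hypothetical), not code from the original article:

```python
from pyspark import SparkContext
from pyspark.streaming import StreamingContext

sc = SparkContext("local[2]", "dstream-sketch")
ssc = StreamingContext(sc, batchDuration=5)  # 5-second micro-batches

# Each 5-second window of text from the socket becomes one RDD in the DStream.
lines = ssc.socketTextStream("localhost", 9999)  # hypothetical host/port
counts = (
    lines.flatMap(lambda line: line.split())
         .map(lambda word: (word, 1))
         .reduceByKey(lambda a, b: a + b)
)
counts.pprint()  # print each micro-batch's word counts

ssc.start()
ssc.awaitTermination()
```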
Spark Streaming vs Kafka Streams

Now that we have understood at a high level what these tools are, it's natural to be curious about the differences between the two. The following points briefly explain the key differences:

1. Input handling: Spark Streaming divides data received from live input streams into micro-batches for processing; Kafka Streams processes each record as it arrives (true real-time, per record).
2. Cluster: Spark Streaming requires a separate processing cluster; Kafka Streams requires no separate processing cluster.
3. Scaling: Spark Streaming needs reconfiguration for scaling; Kafka Streams scales easily by just adding Java processes, with no reconfiguration required.
4. Semantics: Spark Streaming offers at-least-once semantics; Kafka Streams offers exactly-once semantics.
5. Processing style: Spark Streaming is better at processing groups of rows (grouping, ML, window functions, etc.); Kafka Streams provides true record-at-a-time processing and is better for functions like row parsing and data cleansing.
6. Deployment: Spark Streaming is a standalone framework; Kafka Streams is just a library, so it can be used as part of a microservice.

Kafka Streams use cases

Following are a couple of the many industry use cases where Kafka Streams is being used:

- The New York Times: The New York Times uses Apache Kafka and Kafka Streams to store and distribute published content, in real time, to the various applications and systems that make it available to readers.
- Pinterest: Pinterest uses Apache Kafka and Kafka Streams at large scale to power the real-time, predictive budgeting system of its advertising infrastructure. With Kafka Streams, spend predictions are more accurate than ever.
- Zalando: As the leading online fashion retailer in Europe, Zalando uses Kafka as an ESB (Enterprise Service Bus), which helps in transitioning from a monolithic to a microservices architecture. Using Kafka for processing event streams enables the technical team to do near-real-time business intelligence.
- Trivago: Trivago is a global hotel search platform focused on reshaping the way travellers search for and compare hotels, while enabling hotel advertisers to grow their businesses by providing access to a broad audience of travellers via its websites and apps. As of 2017, it offered access to approximately 1.8 million hotels and other accommodations in over 190 countries. Trivago uses Kafka, Kafka Connect, and Kafka Streams to enable its developers to access data freely in the company; Kafka Streams powers parts of its analytics pipeline and delivers endless options to explore and operate on its data sources.

Broadly, Kafka is suitable for microservices integration use cases and offers wider flexibility.

Spark Streaming use cases

Following are a couple of the many industry use cases where Spark Streaming is being used:

- Booking.com: Booking.com uses Spark Streaming to build online machine learning (ML) features that are used for real-time prediction of user behaviour and preferences and of demand for hotels, and to improve processes in customer support.
- Yelp: Yelp's ad platform handles millions of ad requests every day. To generate ad metrics and analytics in real time, it built its ad event tracking and analyzing pipeline on top of Spark Streaming. This allows Yelp to manage a large number of active ad campaigns, greatly reduce over-delivery, and share ad metrics with advertisers in a more timely fashion.

Spark Streaming's ever-growing user base consists of household names like Uber, Netflix, and Pinterest. Broadly, Spark Streaming is suitable for requirements involving batch processing of massive datasets, bulk processing, and use cases that go beyond data streaming alone.
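As a sketch of the common "Kafka -> Spark" integration running through these use cases, here is a minimal Structured Streaming job that reads a Kafka topic. Structured Streaming is the newer API mentioned earlier alongside DStreams; the broker address and topic name are hypothetical, and running this requires the matching spark-sql-kafka package on the classpath:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("kafka-to-console-sketch").getOrCreate()

# Subscribe to a Kafka topic; each record arrives with binary key/value columns.
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")  # hypothetical broker
    .option("subscribe", "events")                        # hypothetical topic
    .load()
    .select(col("value").cast("string").alias("event"))
)

# In a real pipeline this would feed a database or a model;
# the console sink is just for demonstration.
query = events.writeStream.format("console").start()
query.awaitTermination()
```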
Dean Wampler beautifully explains the factors to evaluate when matching a tool to a use case, along these lines:

Latency tolerance (response time window, with a typical use case requirement):
- Pico- to microseconds (real real-time): flight control systems for space programs, etc.
- Under 100 microseconds: regular stock trading market transactions, medical diagnostic equipment output
- Under 10 milliseconds: the credit card verification window when a consumer buys something online
- Under 100 milliseconds: dashboards requiring human attention, machine learning models
- Under 1 second to minutes: machine learning model training
- 1 minute and above: periodic short jobs (typical ETL applications)

Velocity (transaction/event frequency): on the order of 1 million events per second, with big spikes during specific time periods (e.g. a Nest thermostat).

Types of data processing required: SQL, ETL, dataflow, and training and/or serving machine learning models; and whether the need is bulk data processing or individual event/transaction processing.

Flexibility of implementation: Kafka is flexible, as it is provided as a library; Spark is less flexible, as it is part of a distributed framework.

Conclusion

Kafka Streams is still best used in a "Kafka -> Kafka" context, while Spark Streaming can be used for a "Kafka -> database" or "Kafka -> data science model" type of context. When these two technologies are connected, they bring together complete data collection and processing capabilities, and they are widely used in commercial use cases, where they occupy a significant market share.