What is Big Data — An Introductory Guide

The massive world of Big Data

Stroll around any IT office and, every decade or so (these days the cycle is closer to every 3-4 years), you will overhear professionals discussing the latest jargon from the hottest trends in technology. Around 5-6 years ago, one such term started ruling IT services: 'Big Data', and it is still interpreted in many different ways by everyone from laypeople to tech geeks.

Although the services industry has only been talking widely about big data solutions for the last 5-6 years, the term is believed to have been in use since the 1990s, popularized by John Mashey of Silicon Graphics, while credit for coining 'big data' in its modern sense goes to Roger Mougalas of O'Reilly Media in 2005.

Let's first understand why everyone is going gaga over 'Big Data' and what real-world problems it is supposed to solve, and then we will try to answer the 'what' and 'how' of it.

Why is Big Data essential for today’s digital world?

The internet and the web had been around for many years before the smartphone era, but smartphones made them mobile, with on-the-go usage. Social media and mobile apps started generating tons of data. At the same time, smart bands and wearable devices (IoT, M2M) opened newer dimensions of data generation. This newly generated data became the new oil of the world: if stored and analyzed, it has the potential to yield tremendous insights that can be put to use in numerous ways.

The real-world use cases of Big Data are remarkable. Every industry has unique use cases, and they are often unique even to each client implementing a solution. They range from data-driven personalized campaigning (ever wondered why the item you browsed on some 'xyz' site shows up while you scroll through Facebook?) to predictive maintenance of huge oil pipelines crossing countries, where manual monitoring is practically impossible. To relate this to our day-to-day life, every click, swipe, share and like we casually make on social media helps today's industries take calculated business decisions about the future. How do you think Netflix predicted the success of 'House of Cards' and spent $100 million on it? Big data analytics is the simple answer.

The biggest challenge in the past was that the traditional methods used to store, curate and analyze data could not process data generated from these newer sources: data that was huge in volume, came from heterogeneous sources, and was generated really fast. (To give you an idea, roughly 2.5 quintillion bytes of data are generated per day today; see the infographic released by Domo called "Data Never Sleeps 5.0.") This is what gave rise to the term Big Data and its related solutions.

Understanding Big Data: Experts’ viewpoint 

Big Data literally means massive data (loosely, more than 1 TB), but that is not its only aspect. Distributed data, or even complex datasets that cannot be analyzed through traditional methods, can also be categorized as 'big data'. With this background, the theoretical definition of big data makes a lot of sense:

Gartner (2012) defines it thus: "Big data is high-volume, high-velocity and/or high-variety information assets that demand cost-effective, innovative forms of information processing that enable enhanced insight, decision making, and process automation."

Data possessing the characteristics of big data exhibits the 3 Vs: Variety, Velocity, and Volume.

But due to the changing nature of data in today's world, and to extract the most insight from it, three more Vs have been added to the definition of Big Data: Variability, Veracity and Value.

The diagram below illustrates each V in detail:

Diagram: 6 V’s of Big Data

These 6 Vs help us understand the characteristics of Big Data, but let's also understand the types of data involved in Big Data processing.
The 'Variety' characteristic above refers to the different types of data that can be processed through big data tools and technologies. Let's drill down a bit to understand what those are (a small Python snippet after the list illustrates the difference):

  1. Structured, e.g. mainframes and traditional databases such as Teradata, Netezza, Oracle, etc.
  2. Unstructured, e.g. tweets, Facebook posts, emails, etc.
  3. Semi-/multi-structured or hybrid, e.g. e-commerce, demographic and weather data, etc.
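
To make the distinction concrete, here is a minimal, illustrative Python snippet (field names and sample values are made up) showing how each type is typically handled: structured data maps cleanly onto fixed columns, semi-structured data carries its own flexible schema, and unstructured data needs parsing logic of its own.

```python
import csv
import io
import json

# Structured: a CSV export with a fixed schema, as from a relational table
structured = io.StringIO("customer_id,city,amount\n101,Pune,2499.00\n")
for row in csv.DictReader(structured):
    print(row["customer_id"], row["amount"])        # column lookup, no parsing needed

# Semi-structured: a JSON document whose fields may vary from record to record
semi_structured = '{"user": "asha", "tags": ["sale", "mobile"], "geo": null}'
doc = json.loads(semi_structured)
print(doc["user"], doc.get("geo") or "unknown")

# Unstructured: free text (a tweet or an email body) with no schema at all;
# even a simple word count needs parsing rather than a column lookup
unstructured = "Loved the new phone! Battery lasts two days #happy"
print(len(unstructured.split()), "words")
```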

As technology advances, an ever-greater variety of data becomes available, and big data makes its storage, processing and analysis possible. Traditional data processing techniques could handle only structured data.

Now that we understand what big data is, and the limitations of the old traditional techniques for handling such data, we can safely say that we need new technology to handle this data and gain insights from it. But before going further: do you know what the traditional data management techniques were?

Traditional Techniques of Data Processing are:

  1. RDBMS (Relational Database Management System)
  2. Data warehousing and DataMart

At a high level, RDBMS catered to OLTP needs while data warehousing/DataMarts facilitated OLAP needs. But both systems work only with structured data.

I hope one can now answer 'what is big data?' both conceptually and theoretically.

So, it's time to understand how this is done in actual implementations.

Merely storing "big data" will not help organizations; what matters is turning the data into insights and business value. To do so, the following are the key infrastructure elements:

  • Data collection
  • Data storage
  • Data analysis
  • Data visualization/output

All major big data processing framework offerings are based on these building blocks.

In alignment with the above building blocks, the following are the top five big data processing frameworks currently being used in the market:

1. Apache Hadoop: the Hadoop software library is a framework that allows for the distributed processing of large data sets across clusters of computers using simple programming models. It is the all-time classic and one of the top frameworks in use today; so prevalent is it that it has almost become synonymous with Big Data.
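
To give a feel for the "simple programming models" Hadoop talks about, below is a minimal word-count sketch in the MapReduce style, written as two Hadoop Streaming scripts in Python. This is a sketch only: the input/output paths and the streaming jar location in the trailing comment are placeholders that depend on your installation.

```python
# mapper.py -- emits "word<TAB>1" for every word read from standard input
import sys

for line in sys.stdin:
    for word in line.strip().split():
        print(f"{word}\t1")
```

```python
# reducer.py -- Hadoop sorts mapper output by key, so all counts for the
# same word arrive together and can be summed in a single pass
import sys

current_word, current_count = None, 0
for line in sys.stdin:
    word, count = line.rstrip("\n").split("\t", 1)
    if word == current_word:
        current_count += int(count)
    else:
        if current_word is not None:
            print(f"{current_word}\t{current_count}")
        current_word, current_count = word, int(count)
if current_word is not None:
    print(f"{current_word}\t{current_count}")

# Submitted to the cluster roughly like this (jar path is a placeholder):
#   hadoop jar hadoop-streaming.jar -input /data/text -output /data/counts \
#       -mapper mapper.py -reducer reducer.py -file mapper.py -file reducer.py
```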

2. Apache Spark: a unified analytics engine for large-scale data processing.

Apache Spark and Hadoop are often contrasted as an "either/or" choice, but that isn't really the case; the two are frequently deployed together.
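
To give a flavour of Spark's programming model, here is a minimal PySpark sketch; the input path, column names and the local master setting are illustrative assumptions. The same code runs unchanged on a laptop or on a cluster, which is much of Spark's appeal.

```python
from pyspark.sql import SparkSession, functions as F

# Start a local session; on a cluster the master would be YARN or Kubernetes
spark = (SparkSession.builder
         .appName("clickstream-demo")
         .master("local[*]")
         .getOrCreate())

# Read a hypothetical clickstream file of JSON events (schema is inferred)
clicks = spark.read.json("clicks.json")              # placeholder path

# Count events per user per day, largest first
daily = (clicks
         .withColumn("day", F.to_date("timestamp"))  # assumes a 'timestamp' field
         .groupBy("user_id", "day")                  # assumes a 'user_id' field
         .count()
         .orderBy(F.desc("count")))

daily.show(10)
spark.stop()
```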

The above two frameworks are the most popular, but apart from them, the following three comparable frameworks are also available:

3. Apache Storm: a free and open-source distributed real-time computation system. You can also take up Apache Storm training to learn more about it.

4. Apache Flink: a streaming dataflow engine that aims to provide facilities for distributed computation over streams of data. By treating batch processing as a special case of streaming, Flink is effectively both a batch and a real-time processing framework, but one which clearly puts streaming first.

5. Apache Samza: a distributed stream processing framework.

Frameworks help process data through these building blocks and generate the required insights. Each framework is supported by a whopping number of tools providing the required functionality.

Big Data processing frameworks and technology landscape

The big data tools and technology landscape can be better understood through a layered big data architecture. For a deeper understanding of the layered architecture of big data, give a good read to the article by Navdeep Singh Gill on XenonStack.

Taking inspiration from that layered architecture, the different tools available in the market are mapped to layers below to understand the big data technology landscape in depth. Note that the layered architecture fits very well with the infrastructure elements/building blocks discussed in the previous section.

Diagram: Big Data framework and technology landscape

A few of the tools are briefly described below for further understanding:

1. Data Collection / Ingestion Layer 

  • Cassandra: a free and open-source, distributed, wide-column NoSQL database management system designed to handle large amounts of data across many commodity servers, providing high availability with no single point of failure
  • Kafka: an event streaming platform used for building real-time data pipelines and streaming apps (a minimal producer sketch follows this list)
  • Flume: a log collector in the Hadoop ecosystem
  • HBase: a columnar database in the Hadoop ecosystem
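
As a flavour of the ingestion layer, here is a minimal Kafka producer sketch using the third-party kafka-python client; the broker address, topic name and event fields are assumptions made purely for illustration.

```python
import json
from kafka import KafkaProducer   # third-party client: pip install kafka-python

# Connect to a hypothetical local broker and serialize events as JSON
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

# Every click, swipe or like becomes a small event published to a topic;
# downstream consumers (Spark, Flink, Samza, ...) read the topic in real time
event = {"user_id": 101, "action": "like", "item": "post-42"}
producer.send("user-activity", value=event)
producer.flush()   # block until the event is actually handed to the broker
```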

2. Processing Layer 

  • Pig: a scripting language in the Hadoop framework
  • MapReduce: the core processing model in Hadoop (see the word-count sketch earlier)

3. Data Query Layer 

  • Impala (Cloudera Impala): a modern, open-source, distributed SQL query engine for Apache Hadoop (often compared with Hive)
  • Hive: data warehouse software for data query and analysis (a query sketch follows this list)
  • Presto: a high-performance, distributed SQL query engine for big data. Its architecture allows users to query a variety of data sources such as Hadoop, AWS S3, Alluxio, MySQL, Cassandra, Apache Kafka, and MongoDB
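
As a flavour of the query layer, here is a minimal Hive query sketch using the third-party PyHive client; the HiveServer2 host, username and table name are assumptions made for illustration.

```python
from pyhive import hive   # third-party client: pip install pyhive

# Connect to a hypothetical HiveServer2 endpoint
conn = hive.Connection(host="localhost", port=10000, username="analyst")
cursor = conn.cursor()

# HiveQL reads like SQL, but the engine compiles it into distributed jobs
# over files in HDFS/object storage rather than rows in a single RDBMS
cursor.execute("""
    SELECT country, COUNT(*) AS orders
    FROM web_orders            -- hypothetical table over HDFS files
    GROUP BY country
    ORDER BY orders DESC
    LIMIT 10
""")
for country, orders in cursor.fetchall():
    print(country, orders)
```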

4. Analytical Engine

  • TensorFlow: an open-source machine learning library for research and production (a minimal model sketch follows)
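
As a tiny illustration of the analytical layer, below is a minimal Keras (TensorFlow) classifier trained on synthetic data that stands in for features produced by an upstream big data pipeline; the feature count and labels are made up.

```python
import numpy as np
import tensorflow as tf

# Synthetic data standing in for features engineered upstream (e.g. in Spark/Hive)
X = np.random.rand(1000, 4).astype("float32")    # 4 hypothetical features per customer
y = (X.sum(axis=1) > 2.0).astype("float32")      # synthetic churn-style label

# A minimal feed-forward binary classifier
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

loss, accuracy = model.evaluate(X, y, verbose=0)
print(f"training-set accuracy: {accuracy:.2f}")
```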

5. Data storage Layer

  • Ignite: an open-source distributed database, caching and processing platform designed to store and compute on large volumes of data across a cluster of nodes
  • Phoenix: Apache Phoenix is an open-source, massively parallel, relational database engine supporting OLTP for Hadoop, using Apache HBase as its backing store
  • PolyBase: a feature introduced in SQL Server 2016, used to query relational and non-relational (NoSQL) data. You can use PolyBase to query tables and files in Hadoop or in Azure Blob Storage, and also to import or export data to and from Hadoop.
  • Sqoop: a tool for bulk data transfer (ETL) between Hadoop and relational databases
  • Big Data in Excel: a few people like to process large datasets using Excel's current capabilities, an approach informally known as "Big Data in Excel"

6. Data Visualization Layer

  • Microsoft HDInsight: Azure HDInsight is a Hadoop service offering hosted in Azure that provides clusters of managed Hadoop instances. It deploys and provisions Apache Hadoop clusters in the cloud, providing a software framework designed to manage, analyze, and report on big data with high reliability and availability. Hadoop administration training will give you the technical understanding required to manage a Hadoop cluster, whether in a development or a production environment.

Best Practices in Big Data  

Every organization, industry and business, whether small or big, wants to benefit from "big data", but it is essential to understand that it can deliver its maximum potential only if the organization adheres to best practices before adopting big data.

Answering five basic questions helps clients understand the need for adopting Big Data in their organization:

  1. Try to answer why Big Data is required for the organization. What problem would it help solve?
  2. Ask the right questions.
  3. Foster collaboration between business and technology teams.
  4. Analyze only what you actually need to use.
  5. Start small and grow incrementally.

Big Data industry use-cases 

We have talked about everything in the Big Data world except real use cases. We discussed a few at the start, but let me give you insights into real-world, interesting big data use cases; for some of them, it's no longer a secret ☺. In fact, big data has penetrated to the extent that you can name almost any industry and plenty of use cases can be cited. Let's begin.

1. Streaming Platforms

As mentioned with the 'House of Cards' example at the start of the article, it's no secret that Netflix uses Big Data analytics. Netflix spent $100 million on 26 episodes of 'House of Cards' because it knew the show would appeal to viewers of the original British 'House of Cards' and built in director David Fincher and actor Kevin Spacey. Netflix routinely collects behavioral data and uses it to create a better experience for its users.

But Netflix uses Big Data for more than that: it monitors and analyzes traffic details across devices, spots problem areas and adjusts network infrastructure to prepare for future demand (the latter being an action taken as a result of big data analysis). It also derives insights into the types of content viewers prefer, which helps it make informed content decisions.

Apart from Netflix, Spotify is another well-known example.

2. Advertising and Media / Campaigning /Entertainment

For decades, marketers were forced to launch campaigns blindly, relying on gut instinct and hoping for the best. That all changed with digitization and the big data world. Data-driven campaigns and marketing are now on the rise, and to be successful in this landscape, a modern marketing campaign must integrate a range of intelligent approaches to identify customers, segment them, measure results, analyze data and build on feedback in real time. All of this needs to happen in real time, informed by the customer's profile, history, purchasing patterns and other relevant information, and Big Data solutions are the perfect fit.

Event-driven marketing, another successful marketing approach today, can also be achieved through big data. It basically means keeping track of events the customer is directly or indirectly involved in and campaigning exactly when the customer needs it, rather than running random campaigns. For example, if you have searched for a product on Amazon or Flipkart, you will see related advertisements on other apps you casually browse through. Bang on: you end up purchasing it, because you wanted the best options to choose from anyway.

3. Healthcare Industry

Healthcare is one of the classic industries for Big Data applications. The industry generates a huge amount of data.

Patients' medical histories, past records, treatments given, available and latest medicines, the latest medical research: the list of raw data is endless.

All this data can help give insights and Big Data can contribute to the industry in the following ways:

  1. Diagnosis time can be reduced and exactly the required treatment started immediately. Most illnesses can be treated if the diagnosis is accurate and treatment starts in time. This can be achieved by giving the treating doctor evidence-based past medical data for similar treatments, the patient's available history, and symptoms fed into the system in real time.
  2. Government health departments can monitor whether a group of people in one geography is reporting similar symptoms; predictive measures can then be taken in nearby locations to avoid an outbreak, since the cause of such illness may be the same.

The list is long; the above are just a few representative examples.

4. Security

With the explosion of social media, personal information is at stake today. Almost everything is digital, and the majority of personal information is available in the public domain, so privacy and security are major concerns. The following are a few such applications of big data:

  1. Cybercrime is common nowadays, and big data can help detect and predict crimes.
  2. Threat analysis and detection can be done with big data.

5. Travel and Tourism

Flight booking sites and IRCTC track clicks and hits along with IP addresses, login information and other details, and price flights and trains dynamically as per demand. Big Data powers dynamic pricing, and mind you, it happens in real time. I am sure each one of us has experienced this; now you know who is doing it :D

Telecommunications, the public sector, education, social media and gaming, energy and utilities: every industry has implemented, or is implementing, several of these Big Data use cases day in and day out. If you look around, I am sure you will find them on the rise.

Big Data is helping everyone, industries, consumers and clients, make informed decisions, and wherever there is such a need, Big Data can come in handy.

Challenges in adopting Big Data in the real world

Although the world is going gaga over big data, there are still challenges in implementing and adopting it, and service industries are still striving to resolve them so that they can deliver Big Data solutions without flaws.

An October 2016 report from Gartner found that organizations were getting stuck at the pilot stage of their big data initiatives. "Only 15 percent of businesses reported deploying their big data project to production, effectively unchanged from last year (14 per cent)," the firm said.

Let's discuss a few of these challenges to understand what they are.

1. Understanding Big Data and answering 'why' for the organization one is working with

As I said at the start of the article, there are many interpretations of Big Data, and understanding the real use cases for the organization decision-makers are working with is still a challenge. Everyone wants to ride the wave, but not knowing the right path remains a struggle. Since every organization is unique, it is utterly important to answer 'why big data?' for each one. This remains a major hurdle for decision-makers in adopting big data.

2. Understanding Data sources for the organization

In today's world, information is generated in hundreds of thousands of ways, and being aware of all these sources and ingesting all of them into big data platforms is essential for accurate insight. Identifying the sources is a challenge in itself.

It's no surprise, then, that the IDG report found, "Managing unstructured data is growing as a challenge – rising from 31 per cent in 2015 to 45 per cent in 2016."

Different tools and technologies are on the rise to address this challenge.

3. Shortage of Big Data talent and retaining it

Big Data is a fast-changing technology area with a whopping number of tools in its landscape. Big Data professionals are expected to excel with the current tools and keep themselves up to date with ever-changing needs. This makes it difficult for both employees and employers to build and retain talent within the organization.

The solution is constant upskilling, re-skilling and cross-skilling, along with increasing the organization's budget for retaining talent and helping them train.

4. The Veracity V 

This V is a challenge because it refers to inconsistent and incomplete data. To gain insights through a big data model, a key step is to predict and fill in missing information.

This is a tricky part, because filling in missing information can decrease the accuracy of the resulting insights and analytics.

A bunch of tools exist to address this concern. Data curation is an important step in big data and should follow a proper model. But also keep in mind that Big Data is never 100% accurate, and one must deal with that. A small sketch below shows typical ways of handling missing values.
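
Here is a small pandas sketch (toy data, made-up column names) showing two common ways of dealing with missing values, dropping sparse records versus imputing them, and why every imputation carries an accuracy cost.

```python
import numpy as np
import pandas as pd

# Toy sensor readings with gaps, standing in for incomplete source data
readings = pd.DataFrame({
    "device_id":   ["d1", "d1", "d2", "d2", "d3"],
    "temperature": [21.5, np.nan, 22.1, np.nan, np.nan],
    "humidity":    [40.0, 41.0, np.nan, 43.0, 39.0],
})

# Option 1: drop rows that have fewer than two non-missing values
cleaned = readings.dropna(thresh=2)

# Option 2: impute. Every imputed value is a guess, so any insight built on it
# inherits that uncertainty -- the "never 100% accurate" caveat above
imputed = readings.copy()
imputed["temperature"] = imputed["temperature"].fillna(imputed["temperature"].mean())
imputed["humidity"] = imputed["humidity"].interpolate()

print(cleaned)
print(imputed)
```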

5. Security

This aspect is often given low priority during the design and build phases of Big Data implementations, yet security loopholes can cost an organization dearly. It is therefore essential to put security first while designing and developing Big Data solutions, and equally important to implement them responsibly with respect to regulatory requirements such as GDPR.

6. Gaining Valuable Insights

Machine learning models go through multiple iterations to arrive at insights, and they too face issues such as missing data, which affects accuracy. Increasing accuracy requires a lot of re-processing, which has its own lifecycle. Improving the accuracy of insights is therefore a challenge, one closely related to the missing-data problem and most likely addressed by solving it.

It can also be caused by the unavailability of information from all data sources. Incomplete information leads to incomplete insights, which may not deliver the required benefit.

Addressing the challenges discussed above helps organizations gain valuable insights from the available solutions.

With Big Data, the opportunities are endless. Once you understand it, the world is yours!

Also, now that you understand Big Data, it's worth understanding the next steps:

Gary King, a professor at Harvard, says: "Big data is not about the data. It is about the analytics."

You can also take up Big Data and Hadoop training to enhance your skills further.

Did this article help you understand today's massive world of big data and get a sneak peek into it? Do let us know through the comments section below.

Shruti Deshpande

Blog Author

10+ years of data-rich experience in the IT industry, starting with data warehousing technologies and moving through data modelling to BI application architect and solution architect.


A Big Data enthusiast, with data analytics as a personal interest. I do believe it has endless opportunities and the potential to make the world a sustainable place. Happy to ride this tide.


*Disclaimer* - Expressed views are the personal views of the author and are not to be mistaken for the employer or any other organization’s views.

Join the Discussion

Your email address will not be published. Required fields are marked *

2 comments

shivkumar 01 Jun 2019 1 likes

Thanks for sharing this amazing blog.It is really an informative post.

Nisha 18 Jun 2019 1 likes

The article looks good and the way of presentation is nice.

Suggested Blogs

Overview of Deploying Machine Learning Models

Machine Learning is no longer just the latest buzzword. In fact, it has permeated every facet of our everyday lives. Most of the applications across the world are built using Machine Learning and their applications extend further when they are combined with other cutting-edge technologies like Deep Learning and Artificial Intelligence. These latest technologies are a boon to mankind, as they simplify tasks, helping to complete work in lesser time. They boost the growth and profitability of industries and organizations across sectors, which in turn helps in the growth of the economy and generates jobs.What are the fields that machine learning extends into?Machine Learning now finds applications across sectors and industries including fields like Healthcare, defense, insurance, government sectors, automobile, manufacturing, retail and more. ML gives great insights to businesses in gaining and retaining customer loyalty, enhances efficiency, minimizes the time consumption, optimizes resource allocation and decreases the cost of labor for a specific task.What is Model Deployment?It’s well established that ML has a lot of applications in the real world. But how exactly do these models work to solve our problems? And how can it be made available for a large user base? The answer is that we have to deploy the trained machine learning model into the web, so that it can be available for many users.When a model is deployed, it is fully equipped with training and it knows what are the inputs to be taken by the model and what are the outputs given out in return. This strategy is used to advantage in real world applications. Deployment is a tricky task and is the last stage of our ML project.Generally, the deployment will take place on a web server or a cloud for further use, and we can modify the content based on the user requirements and update the model. Deployment makes it easier to interact with the applications and share the benefits to the applications with others.With the process of Model Deployment, we can overcome problems like Portability, which means shifting of software from one machine to the other and Scalability, which is the capacity to be changed on a scale and the ability of the computation process to be used in a wide range of capabilities.Installing Flask on your MachineFlask is a web application framework in Python. It is a lightweight Web Server Gateway Interface (WSGI) web framework. It consists of many modules, and contains different types of tools and libraries which helps a web developer to write and implement many useful web applications.Installing Flask on our machine is simple. But before that, please ensure you have installed Python in your system because Flask runs using Python.In Windows: Open command prompt and write the following code:a) Initially make the virtual environment using pip -- pip install virtualenv And then write mkvirtualenv HelloWorldb) Connect to the project – Create a folder dev, then mkdir Helloworld for creating a directory; then type in cd HelloWorld to go the file location.c) Set Project Directory – Use setprojectdir in order to connect our virtual environment to the current working directory. 
Now further when we activate the environment, we will directly move into this directory.d) Deactivate – On using the command called deactivate, the virtual environment of hello world present in parenthesis will disappear, and we can activate our process directly in later steps.e) Workon – When we have some work to do with the project, we write the command  “workon HelloWorld” to activate the virtual environment directly in the command prompt.The above is the set of Virtual Environment commands for running our programs in Flask. This virtual environment helps and makes the work easier as it doesn’t disturb the normal environment of the system. The actions we perform will reside in the created virtual environment and facilitate the users with better features.f) Flask Installation – Now you install flask on the virtual environment using command pip install flaskUnderstanding the Problem StatementFor example, let us try a Face Recognition problem using opencv. Here, we work on haarcascades dataset. Our goal is to detect the eyes and face using opencv. We have an xml file that contains the values of face and eyes that were stored. This xml file will help us to identify the face and eyes when we look into the camera.The xml data for face recognition is available online, and we can try this project on our own after reading this blog. For every problem that we solve using Machine Learning, we require a dataset, which is the basic building block for the Model development in ML. You can generate interesting outcomes at the end like detecting the face and eyes with a bounding rectangular box. Machine learning beginners can use these examples and create a mini project which will help them to know much about the core of ML and other technologies associated with it.Workflow of the ProjectModel Building: We build a Machine Learning model to detect the face of the human present in front of the camera. We use the technology of Opencv to perform this action which is the library of Computer Vision.Here our focus is to understand how the model is working and how it is deployed on server using Flask. Accuracy is not the main objective, but we will learn how the developed ML model is deployed.Face app: We will create a face app that detects your face and implements the model application. This establishes the connection between Python script and the webpage template.Camera.py: This is the Python script file where we import the necessary libraries and datasets required for our model and we write the actual logic for the model to exhibit its functionality.Webpage Template: Here, we will design a user interface where the user can experience live detection of his face and eyes in the camera. We provide a button on a webpage, to experience the results.Getting the output screen: when the user clicks the button, the camera will open directly and we can get the result of the machine learning model deployed on the server. In the output screen you can see your face. Storage: This section is totally optional for users, and it is based on the users’ choice of storing and maintaining the data. After getting the outputs on the webpage screen, you can store the outputs in a folder on your computer. This helps us to see how the images are captured and stored locally in our system. 
You can add a file path in the code, that can store the images locally on your system if necessary.This application can be further extended to a major project of “Attendance taking using Face Recognition Technique”, which can be used in colleges and schools, and can potentially replace normal handwritten Attendance logs. This is an example of a smart application that can be used to make our work simple.Diagrammatic Representation of the steps for the projectBuilding our Machine Learning ModelWe have the XML data for recognizing face and eyes respectively. Now we will write the machine learning code, that implements the technique of face and eyes detection using opencv. Before that, we import some necessary libraries required for our project, in the file named camera.py # import cv2 # import numpy as np # import scipy.ndimage # import pyzbar.pyzbar as pyzbar # from PIL import Image Now, we load the dataset into some variables in order to access them further. Haarcascades is the file name where all the xml files containing the values of face, eye, nose etc are stored. # defining face detector# face_cascade = cv2.CascadeClassifier("haarcascades/haarcascade_frontalface_default.xml") # eye_cascade = cv2.CascadeClassifier('haarcascades/haarcascade_eye.xml')The xml data required for our project is represented as shown below, and mostly consists of numbers.Now we write the code for opening the camera, and releasing of camera in a class file. The “def” keyword is the name of the function in Python. The functions in Python are declared using the keyword “def”.The function named “def __init__” initiates the task of opening camera for live streaming of the video. The “def __del__” function closes the camera upon termination of the window.# class VideoCamera(object):#    def __init__(self):        # capturing video#       self.video = cv2.VideoCapture(0) #  def __del__(self):#        # releasing camera#        self.video.release()Next, we build up the actual logic for face and eyes detect using opencv in Python script as follows. This function is a part of class named videocamera.# class VideoCamera(object):#    def __init__(self):#        # capturing video#        self.video = cv2.VideoCapture(0)#    def __del__(self):#        # releasing camera#        self.video.release()#    def face_eyes_detect(self):#        ret, frame = self.video.read()#        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)#        faces = face_cascade.detectMultiScale(gray, 1.3, 5)#        c=0#        for (x,y,w,h) in faces:#            cv2.rectangle(frame, (x,y), (x+w,y+h), (255, 0, 0), 2)#            roi_gray = gray[y:y+h, x:x+w]#            roi_color = frame[y:y+h, x:x+w]#            eyes = eye_cascade.detectMultiScale(roi_gray)#            for (ex,ey,ew,eh) in eyes:#                cv2.rectangle(roi_color, (ex, ey), (ex+ew, ey+eh), (0, 255, 0), 2)#            while True:#                k = cv2.waitKey(1000) & 0xFF#                print("Image "+str(c)+" saved")#                file = 'C:/Users/user/dev/HelloWorld/images/'+str(c)+'.jpg'#                cv2.imwrite(file, frame)#                c += 1            # encode Opencv raw frame to jpg and display it#        ret, jpeg = cv2.imencode('.jpg', frame)#        return jpeg.tobytes()The first line in the function “ret, frame” reads the data of live streaming video. The ret takes the value “1”, when the camera is open, else it takes “0” as input. The frame captures the live streaming video from time to time. 
In the 2nd line, we are changing the color of image from RGB to Grayscale, i.e., we are changing the values of pixels. And then we are applying some inbuilt functions to detect faces. The for loop, illustrates that it is having some fixed dimensions to draw a bounding rectangular box around the face and eyes, when it is detected. If you want to store the captured images after detecting face and eyes, we can add the code of while loop, and we can give the location to store the captured images. When an image is captured, it is saved as Image 1, Image 2 saved, etc., for confirmation.All the images will be saved in jpg format. We can mention the type of format in which the images should be stored. The method named cv2.imwrite stores the frame in a particular file location.Finally, after capturing the detected picture of face and eyes, it displays the result at the user end. Creating a WebpageWe will create a webpage, in order to implement the functionality of the developed machine learning model after deployment using Flask. Here is the design of our webpage.The above picture represents a small webpage demonstrating “Video Streaming Demonstration” and a link “face-eyes-detect”. When we click the button on the screen, the camera gets opened and the functionality will be displayed to the users who are facing the camera.The code for creating a webpage is as follows:If the project contains only one single html file, it should be necessarily saved with the name of index. The above code should be saved as “index.html” in a folder named “templates” in the project folder named “HelloWorld”, that we have created in the virtual environment earlier. This is the actual format we need to follow while designing a webpage using Flask framework.Connecting Webpage to our ModelTill now we have developed two separate files, one for developing the machine learning model for the problem statement and the other for creating a webpage, where we can access the functionality of the model. Now we will try to see how we can connect both of them.This is the Python script with the file name saved as “app.py”. Initially we import the necessary libraries to it, and create a variable that stores the Flask app. We then guide the code to which location it needs to be redirected, when the Python scripts are executed in our system. The redirection is done through “@app.route” followed by a function named “home”. Then we include the functionality of model named “face_eyes_detect” to the camera followed by the function definition named “gen”. After adding the functionality, we display the response of the deployed model on to the web browser. The outcome of the functionality is the detection of face and eyes in the live streaming camera and the frames are stored in the folder named images. We put the debug mode to False. 
# from flask import Flask, render_template, Response,url_for, redirect, request.# from flask import Flask, render_template, Response,url_for, redirect, request  # from camera import VideoCamera  # import cv2  # import time  # app = Flask(__name__)  # @app.route("/")  # def home():  #     # rendering web page  #     return render_template('index.html')  # def gen(camera):  #     while True:  #         # get camera frame  #         frame = camera.face_eyes_detect()  #         yield(b'--frame\r\n'  #                   b'Content-Type: image/jpeg\r\n\r\n' + frame + b'\r\n\r\n')  # @app.route("/video_feed")  # def video_feed():  #     return Response(gen(VideoCamera()),  #           mimetype='multipart/x-mixed-replace; boundary=frame')  # if __name__ == '__main__':  #     # defining server ip address and port  #     app.run(debug=False)Before running the Python scripts, we need to install the libraries like opencv, flask, scipy, numpy, PIL, pyzbar etc., using the command prompt with the command named “pip install library_name” like “pip install opencv-python”, ”pip install flask”, “pip install scipy” etc.When you have installed all the libraries in your system, now open the python script “app.py” and run it using the command “f5”. The output is as follows:Image: Output obtained when we run app.py fileNow we need to copy the server address http://127.0.0.1:5000/ and paste it on the web browser, and we will get the output screen as follows:Now when we click the link “face-eyes-detect”, we will get the functionality of detecting the face and eyes of a user, and it is seen as follows:Without SpectaclesWith SpectaclesOne eye closed by handone eye closedWhen these detected frames are generated, they are similarly stored in a specified location of folder named “images”. And in the Python shell we can observe, the sequence of images is saved in the folder, and looks as follows:In the above format, we get the outcomes of images stored in our folder.Now we will see how the images were stored in the previously created folder named “images” present in the project folder of “HelloWorld.”Now we can use the deployed model in real time. With the help of this application, we can try some other new applications of Opencv and we can deploy it in the flask server accordingly.  You can find all the above code with the files in the following github repository, and you can make further changes to extend this project application to some other level.Github Link.ConclusionIn this blog, we learnt how to deploy a model using flask server and how to connect the Machine Learning Model with the Webpage using Flask. The example project of face-eyes detection using opencv is a pretty common application in the present world. Deployment using flask is easy and simple.  We can use the Flask Framework for deployment of ML models as it is a light weight framework. In the real-world scenario, Flask may not be the most suitable framework for bigger applications as it is a minimalist framework and works well only for lighter applications.
How Big Data Can Solve Enterprise Problems

Many professionals in the digital world have become familiar with the hype cycle: a new technology enters the tech world amid great expectations, disillusionment inevitably sets in and a stage of retrenchment starts, and only once practice and process catch up with the assumptions is the new value unlocked. Currently there is hardly a topic more hyped than big data, and there is no shortage of self-proclaimed pundits. Yet nearly 55% of big data projects fail, and there is an increasing divide between the enterprises that are benefiting from its use and those that are not. Qualified data scientists, good integration across departments, and the ability to manage expectations all play a part in making big data work for your organization.

It is often said that an organization's future depends on the decisions it takes, and most business decisions are backed by the data available at hand. The more accurate the information, the better it is for the business. Gone are the days when data was only an aid to better decision making; with big data, it has actually become a part of all business decisions. For quite some time now, big data has been changing the way business operations are managed and how organizations collect data and turn it into useful, accurate information in real time. Today, let's take a look at solving real-life enterprise problems with big data.

Predictive Analysis
Imagine knowing in advance which trends and technologies are emerging in the market, or when your infrastructure will need maintenance. With huge amounts of data, you can predict trends and your future business needs, and this sort of knowledge gives you an edge over your peers in a competitive world.

Enhancing Market Research
Regardless of the business vertical, market research is an essential part of business operations. With the ever-changing needs and aspirations of customers, businesses need to find ways to get into the minds of customers with better and improved products and services. Having large volumes of data at hand lets you carry out detailed market research and thus enhance your products and services.

Streamlining Business Processes
For any enterprise, streamlining the business process is crucial to keeping the business sustainable and lucrative. A few effective modifications here and there can benefit you in the long run by cutting down operational costs. Big data can be used to overhaul the whole business process, from raw material procurement to maintaining the supply chain.

Data Access Centralization
Decentralized data has its advantages, but one of its main limitations is that it can create data silos, a challenge that large enterprises with a global presence frequently encounter. Centralizing conventional data often posed a challenge and prevented the enterprise from working as one team. Big data has solved this problem by offering visibility of the data throughout the organization.

How are you navigating the implications of all that data within your enterprise? Have you deployed big data in your enterprise and solved real-life enterprise problems? Then we would love to hear about your experiences; do let us know by commenting in the section below.
Analysis Of Big Data Using Spark And Scala

The use of Big Data over a network cluster has become a major application in multiple industries. The wide use of MapReduce and Hadoop technologies is proof of this evolving field, along with the recent rise of Apache Spark, a data processing engine written in the Scala programming language.

Introduction to Scala
Scala is a general-purpose, object-oriented programming language, similar to Java. The name Scala comes from "scalable language", meaning that its capabilities can grow along with your requirements, and many technologies are built on Scala. Its uses range from simple scripting to being the preferred language for mission-critical applications. Scala has the following capabilities:
- Support for functional programming, with features including currying, type inference, immutability, lazy evaluation, and pattern matching.
- An advanced type system, including algebraic data types and anonymous types.
- Features not available in Java, such as operator overloading, named parameters, raw strings, and no checked exceptions.
Scala runs seamlessly on the Java Virtual Machine (JVM), and Scala and Java classes can be freely interchanged or refer to each other. Scala also supports cluster computing; the most popular framework solution, Spark, was written in Scala.

Introduction to Apache Spark
Apache Spark is an open-source Big Data processing framework that provides an interface for programming data clusters using data parallelism and fault tolerance, and it is widely used for fast processing of large datasets. It is an open-source platform built by a wide set of software developers from over 200 companies; since 2009, more than 1000 developers have contributed to Apache Spark. Spark provides better capabilities for Big Data applications than earlier Big Data technologies such as Hadoop or MapReduce. Listed below are some features of Apache Spark:
1. Comprehensive framework: Spark provides a comprehensive and unified framework to manage Big Data processing, and supports a diverse range of data sets including text data, graphical data, batch data, and real-time streaming data.
2. Speed: Spark can run programs up to 100 times faster than Hadoop clusters in memory, and 10 times faster when running on disk. Spark has an advanced DAG (directed acyclic graph) execution engine that provides support for cyclic data flow and in-memory data sharing across DAGs to execute different jobs with the same data.
3. Easy to use: with a built-in set of over 80 high-level operators, Spark allows programmers to write Java, Scala, or Python applications quickly (see the short example after this list).
4. Enhanced support: in addition to Map and Reduce operations, Spark provides support for SQL queries, streaming data, machine learning, and graph data processing.
5. Runs on any platform: Apache Spark applications can be run in standalone cluster mode or in the cloud. Spark provides access to diverse data sources including HDFS, Cassandra, HBase, Hive, and Tachyon, as well as any Hadoop data source, and it can be deployed as a standalone server or on a distributed framework such as Mesos or YARN.
6. Flexibility: in addition to the Scala programming language, programmers can use Java, Python, Clojure, and R to build applications using Spark.
7. Comprehensive library support: as a Spark programmer, you can combine additional libraries within the same application to provide Big Data analytics and machine learning capabilities. The supported libraries include Spark Streaming, used for processing real-time streaming data; Spark SQL, used for exposing Spark datasets over JDBC APIs and for executing SQL-like queries on Spark datasets; Spark MLlib, the machine learning library consisting of common algorithms and utilities; Spark GraphX, the Spark API for graphs and graph computation; BlinkDB, a query engine library used for running interactive SQL queries on large data volumes; Tachyon, a memory-centric distributed file system that enables file sharing across cluster frameworks; and the Spark Cassandra Connector and SparkR, which are integration adapters. With the Cassandra Connector, Spark can access data from the Cassandra database and perform data analytics.
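To make the "80 high-level operators" point concrete, here is a minimal sketch of a word count followed by a Spark SQL query over the result. It is written in PySpark rather than Scala purely to stay consistent with the Python code used earlier in this document (the article itself notes that Python is among the supported languages); the application name and input file path are placeholder values, not taken from the original article.

# Minimal PySpark sketch: word count plus a Spark SQL query over the result.
# "WordCountSketch" and "input.txt" are placeholder values.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("WordCountSketch").getOrCreate()

# Word count expressed with a handful of high-level operators
lines = spark.sparkContext.textFile("input.txt")
counts = (lines.flatMap(lambda line: line.split())
               .map(lambda word: (word, 1))
               .reduceByKey(lambda a, b: a + b))

# Expose the same result to Spark SQL as a temporary view
df = spark.createDataFrame(counts, ["word", "total"])
df.createOrReplaceTempView("word_counts")
spark.sql("SELECT word, total FROM word_counts ORDER BY total DESC LIMIT 10").show()

spark.stop()

The equivalent Scala program uses the same operator names (flatMap, map, reduceByKey), which is part of what makes Spark code easy to move between the supported languages.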
Compatibility with Hadoop and MapReduce
Apache Spark can be much faster than other Big Data technologies. It can run on an existing Hadoop Distributed File System (HDFS) to provide compatibility along with enhanced functionality, and it is easy to deploy Spark applications on existing Hadoop v1 and v2 clusters. Spark uses HDFS for data storage and can work with Hadoop-compatible data sources, including HBase and Cassandra. Apache Spark is compatible with MapReduce and enhances its capabilities with features such as in-memory data storage and real-time processing.

Conclusion
The standard API set of the Apache Spark framework makes it the right choice for Big Data processing and data analytics. For client installations that already run a MapReduce implementation on Hadoop, Spark and MapReduce can be used together for better results. Apache Spark is the right alternative to MapReduce for installations that involve large amounts of data requiring low-latency processing.