
What is Machine Learning and Why It Matters: Everything You Need to Know

  • by Animikh Aich
  • 26th Apr, 2019
  • Last updated on 11th Mar, 2021
  • 15 mins read

If you are a machine learning enthusiast who stays in touch with the latest developments, you have probably come across the headline “Machine learning identifies links between the world's oceans”. Think about how complex it is to analyse something like the oceans and their behaviour: the task involves billions of data points tied to critical parameters such as wind velocities, temperatures and the earth's rotation. Doesn't this give you a glimpse of the wondrous possibilities of machine learning and its potential uses? And this is just a drop in the ocean!

As you move through this post, you will get a comprehensive view of the aspects of machine learning you ought to know about.

What is Machine Learning and Why It Matters?

Machine learning is a subset of artificial intelligence. It is designed to let computers learn by themselves and perform operations without human intervention when they are exposed to new data. In other words, a computer or system built with machine learning will identify, analyse and adapt when it comes across a new pattern of data, producing the expected output without any need for human input.

The power behind machine learning's self-identification and analysis of new patterns lies in the complex and powerful pattern-recognition algorithms that guide it in where to look and for what. Thus, the demand for machine learning programmers with extensive knowledge of complex mathematical techniques, and of applying them to big data and AI, is growing year after year.

What is ML and Why It Matters

Machine learning, though a buzzword only in recent times, has conceptually existed since World War II, when Alan Turing's Bombe, an Enigma-deciphering machine, was introduced to the world. However, it is only in the past decade or so that such great progress has been made in machine learning and its uses, driven mainly by our quest to make the world more futuristic, with less human intervention and more precision. Pharma, education technology, manufacturing, science and space, digital inventions, maps and navigation, robotics: name the domain and you will find machine learning innovations in it.

The Timeline of Machine Learning and the Evolution of Machines

Voice-activated home appliances, self-driving cars and online marketing campaigns are some of the applications of machine learning that we experience and benefit from in our day-to-day lives. However, the development of such inventions dates back decades. Many great mathematicians and futuristic thinkers were involved in the foundation and development of machine learning.

A glimpse at the timeline of machine learning reveals many little-known facts and the efforts of the great mathematicians and scientists to whom we owe the fruits we enjoy today.

Timeline of Machine Learning and Evolution of Machines

  • 1812–1913: The century that laid the foundation of machine learning

This age laid the mathematical foundation for machine learning: Bayes' theorem and Markov chains took shape during this period.

  • Late 1940s: First computers 

Computers came to be recognised as machines that could ‘store data’. The famous Manchester Small-Scale Experimental Machine (nicknamed ‘The Manchester Baby’), the first electronic stored-program computer, belongs to this era.

  • 1950: The official Birth of Machine Learning

Despite the many studies and theoretical works that preceded it, 1950 is remembered as the foundation year of the machine learning we witness today. Alan Turing, researcher, mathematician, computer genius and thinker, published a paper in which he proposed the ‘imitation game’ and astonished the world by asking, “Can machines think?”. His work grabbed the attention of the BBC, which interviewed him.

  • 1951: The first neural network

The first artificial neural network machine was built by Marvin Minsky and Dean Edmonds that year. Today, artificial neural networks play a key role in the thinking process of computers and machines, and that line of work traces back to the invention of these two scientists.

  • 1959: Coining of the term ‘Machine Learning’

Though until then there had been no specific term for what machines did when they appeared to think on their own, it was in 1959 that Arthur Samuel of IBM coined the term ‘machine learning’. Related terms such as artificial intelligence, informatics and computational intelligence were also in circulation around this period.

  • 1996: Machine beats man in a game of chess

IBM developed a chess computer called Deep Blue. In 1996 it became the first machine to win a game of chess against the reigning world champion, Garry Kasparov, and in a 1997 rematch it won the full match, convincing the world that machines can rival humans at tasks once thought to require human thought.

  • 2006-2017: Backpropagation, external memory access and AlphaGo

Backpropagation, the key technique used to train the neural networks behind modern image recognition, dates back to the 1980s, but it was in this period that it was scaled up to deep networks, sparking the deep learning boom.

Besides, in 2014, DeepMind, a British company, developed a neural network that can read from and write to external memory to get things done.

In 2016, AlphaGo, designed by DeepMind researchers, beat the world-famous Go player Lee Sedol; in 2017 it went on to defeat the top-ranked player Ke Jie, proving how far machines have come.

  • What’s next?

Scientists talk about the ‘singularity’: a hypothetical point at which humans develop a machine that can think better than humans and recreate itself. So far, we have been witnessing AI enter our personal lives in the form of voice-activated devices, smart systems and more. As for the results of this singularity, we shall have to wait and watch!

Basics of Machine Learning

To put it simply, machine learning is learning by machines. Computers learn, and there are many concepts, methods, algorithms and processes involved in making that happen. Let us try to understand some of the more important machine learning terms.

Three concepts – artificial intelligence, machine learning and deep learning – are often thought to be synonymous. Though they belong to the same family, conceptually they are different.

Basics of Machine Learning

Machine Learning

Machine learning means that machines can ‘learn on their own’ from data and produce output without being explicitly programmed for each task. It is a subset of artificial intelligence.

Artificial Intelligence

Artificial intelligence is the broader goal of building machines that can ‘think on their own’, just like humans, and take decisions.

Deep Learning

Deep learning is a subset of machine learning that uses multi-layered artificial neural networks to learn representations of data and act on them.

How do machines learn?

Quite simply, machines learn much as humans do. Humans learn through training, experience and teachers. Sometimes they apply knowledge that has been fed to them; at other times they take decisions by analysing the current situation in the light of past experience.

Similarly, machines learn from inputs that tell them what is right and what is wrong. They are then given new data to analyse based on the training they have received so far. In other cases, they have no notion of right or wrong and instead take decisions based on their own accumulated experience. We will analyse the various concepts of learning and the methods involved.

How Does Machine Learning Work?

The process of machine learning occurs in five steps as shown in the following diagram.

How Machine Learning Works

The steps are explained in simple words below:

  • Gathering the data involves collecting it from varied, rich and dense content in many formats and types. In practice, this means feeding in data from different sources such as text files, Word documents or Excel sheets.
  • Data preparation involves extracting the useful data from everything that was fed in; only data that is meaningful to the machine is used for processing. This step also covers handling missing data, removing unwanted data and treating outliers.
  • Training involves choosing an appropriate algorithm and modelling the data. The prepared data is split into two parts: one part is used as training data to create the model, and the other is held out for testing.
  • Evaluating the model means testing its accuracy. To verify accuracy honestly, the model is tested on the held-out data that was not used during training.
  • Finally, performance is improved by tuning the model or choosing a different one that better suits the data at hand. This is the step where you iterate on selecting the model best suited to the data you actually have.
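The five steps above can be sketched in a few lines with scikit-learn. This is a minimal sketch, assuming scikit-learn's bundled iris dataset as a stand-in for the "gathered" data; any tabular dataset would do.

```python
# A minimal sketch of the five steps, using scikit-learn's bundled iris
# dataset as a stand-in for gathered data.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Step 1: gather the data (here, a ready-made sample dataset).
X, y = load_iris(return_X_y=True)

# Step 2: prepare the data -- split it into training and held-out test parts.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)

# Step 3: train -- fit a model on the training portion only.
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# Step 4: evaluate -- measure accuracy on data the model has never seen.
accuracy = accuracy_score(y_test, model.predict(X_test))
print(f"held-out accuracy: {accuracy:.2f}")

# Step 5: improve -- if accuracy is poor, try a different model or settings.
```

Swapping `LogisticRegression` for another estimator is all it takes to iterate on step 5, which is why this library is a popular starting point.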

Examples of Machine Learning

The examples below will help you understand where machine learning is used in practice:

Machine Learning Examples

Speech Recognition

Voice-based search and call rerouting are the best-known examples of speech recognition using machine learning. The principle lies in translating spoken words into text, using features extracted from the audio on the basis of its frequencies.
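The frequency analysis behind that idea can be glimpsed with a Fourier transform. The sketch below uses a synthetic pure tone as a toy stand-in for recorded speech; real recognisers work on far richer features.

```python
import numpy as np

# One second of a synthetic "audio" signal sampled at 8 kHz:
# a pure 440 Hz tone, a toy stand-in for a recorded voice.
sample_rate = 8000
t = np.arange(sample_rate) / sample_rate
signal = np.sin(2 * np.pi * 440 * t)

# A Fourier transform exposes which frequencies dominate the signal --
# the kind of feature a speech recogniser builds on.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1 / sample_rate)
dominant = freqs[np.argmax(spectrum)]
print(f"dominant frequency: {dominant:.0f} Hz")  # prints "440 Hz"
```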

Image Recognition

We all use this in daily life when sorting our pictures on Google Drive or Photos. A basic technique here is classifying pictures based on pixel intensity (for black-and-white pictures) or on the measured intensities of red, green and blue for colour images.
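As a minimal sketch of those intensity measurements, the code below computes per-channel means for a tiny hand-made image; the pixel values are invented for illustration.

```python
import numpy as np

# A tiny 2x2 "image": each pixel holds (red, green, blue) intensities 0-255.
image = np.array([
    [[255, 0, 0], [0, 255, 0]],
    [[0, 0, 255], [255, 255, 0]],
], dtype=np.uint8)

# Mean intensity per colour channel -- a crude feature an image
# classifier might start from.
red_mean, green_mean, blue_mean = image.reshape(-1, 3).mean(axis=0)

# For black-and-white work, collapse each pixel to one grayscale intensity
# (equal channel weighting here; real pipelines weight channels differently).
grayscale = image.mean(axis=2)
print(red_mean, green_mean, blue_mean)
```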

Healthcare

Various diagnoses are increasingly made using machine learning these days. Here, various clinical parameters are fed to the machine, which then predicts the disease status and other health parameters of the person under study.

Financial Services

Machine learning helps in predicting the chances of financial fraud, customers' credit habits, spending patterns and so on. The financial and banking sector also uses machine learning techniques for market analysis.

Machine Learning – Methods

Machine learning is all about machines learning through the inputs provided. This learning is carried out in the following ways:

Supervised Learning

As the name says, the machine learns under supervision. Let’s see how this is done:

  • The entire process of learning takes place in the presence, or under the supervision, of a teacher.
  • This mode of learning involves the following basic steps:
    • First, the machine is trained using predefined, 'labelled' data.
    • Then, the correct answers are fed into the computer, allowing it to understand what right and wrong answers look like.
    • Lastly, the system is given a new, unlabelled set of data, which it analyses using techniques such as classification and regression to predict the correct outcome.

Example:

Consider a shape-sorting game that kids play. A bunch of wooden pieces of different shapes is given to the kids, say squares, triangles, circles and stars. Assume that all blocks of a given shape are of a unique colour. First, you teach the kids which shape is which, and then you ask them to do the sorting on their own.

Similarly, in machine learning, you teach the machine through labelled data. Then, the machine is given some unknown data, which it analyses based on the previous labelled data and gives the correct outcome.

In this case, if you observe, two techniques have been used.

  • Classification: Based on colors.
  • Regression: Based on shapes.

As a further explanation,

  • Classification: A classification problem is one where the output variable is a category, such as “red” or “blue”, or “disease” and “no disease”.
  • Regression: A regression problem is one where the output variable is a real value, such as “dollars” or “weight”.
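The two techniques can be sketched in a few lines of scikit-learn. The RGB values, side lengths and weights below are invented purely for illustration; a nearest-neighbour classifier and a linear regression stand in for "classification" and "regression".

```python
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LinearRegression

# Classification: the output is a category ("red" or "blue"),
# predicted here from made-up RGB colour values.
colours_X = [[255, 0, 0], [250, 10, 5], [0, 0, 255], [10, 5, 250]]
colours_y = ["red", "red", "blue", "blue"]
clf = KNeighborsClassifier(n_neighbors=1).fit(colours_X, colours_y)
label = clf.predict([[245, 20, 10]])[0]   # a reddish colour -> "red"

# Regression: the output is a real value (an invented weight in grams),
# predicted from a block's side length in centimetres.
sizes_X = [[1], [2], [3], [4]]
weights_y = [10.0, 20.0, 30.0, 40.0]
reg = LinearRegression().fit(sizes_X, weights_y)
weight = reg.predict([[5]])[0]            # the linear trend gives 50.0
```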

Unsupervised Learning

  • In this type of learning, there is no previous knowledge, no prior training and no teacher to supervise. The machine learns purely from the patterns it finds in the data that is available at the given time.

Example:

Consider a kid playing with a mix of tomatoes and capsicums. They would sort them involuntarily based on their shape or color. This is an instantaneous reaction without any predefined set of attributes or training.

A machine working on unsupervised learning would produce the results based on a similar mechanism. For this purpose, it uses two algorithms as explained below:

  • Clustering: This involves grouping a cluster of data. For example, this is used in analysing the online customer’s purchase patterns and shopping habits.
  • Association: This involves discovering rules that link items to one another. For example, analysing purchases to find that people who bought a given item also tend to buy certain other items. 
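A minimal clustering sketch, assuming scikit-learn's KMeans and a made-up set of 2-D points in two obvious groups, much like the tomatoes and capsicums above:

```python
import numpy as np
from sklearn.cluster import KMeans

# Six unlabelled 2-D points (say, weight vs. roundness) in two obvious groups.
points = np.array([[1.0, 1.1], [1.2, 0.9], [0.9, 1.0],
                   [8.0, 8.2], [8.1, 7.9], [7.9, 8.0]])

# Ask k-means for two clusters; no labels are ever provided.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points).labels_
# The first three points share one cluster id, the last three the other.
```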

Semi-supervised Learning

The name itself hints at how this method works.

  • It is a hybrid of supervised and unsupervised learning, using both labelled and unlabelled data to predict results.
  • In most cases, far more unlabelled data is provided than labelled data, because labelling is costly.
  • For example, in a folder of thousands of photographs, the machine sorts pictures based on their most common features (unsupervised) and on the already defined names of persons in the pictures, if any (supervised).
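One way to sketch this in code is with scikit-learn's LabelPropagation, where unlabelled points are marked with -1 and the algorithm spreads the few known labels onto them. The data below is invented for illustration.

```python
import numpy as np
from sklearn.semi_supervised import LabelPropagation

# Six 1-D points in two groups; only one point per group carries a label.
X = np.array([[1.0], [1.1], [0.9], [8.0], [8.1], [7.9]])
y = np.array([0, -1, -1, 1, -1, -1])   # -1 marks "unlabelled"

model = LabelPropagation().fit(X, y)
predicted = model.transduction_        # labels inferred for all six points
```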

Reinforcement Learning

In reinforcement learning, the correct answer is not known to the system in advance. The system learns from its own experience through a reinforcement agent: since the answer is not known, the agent decides what to do with the given task, using only the experience gained from the current situation.

Example: In a robotic game that involves finding hidden treasure, the algorithm works towards the best outcome through trial and error. Three main components are involved in this type of learning: the agent, the environment and the action the agent performs. The algorithm adjusts itself accordingly to guide the agent towards the best achievable result.
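The trial-and-error idea can be sketched with tabular Q-learning on a toy "treasure corridor": an agent on five cells learns, purely from its own experience, that stepping right leads to the treasure. The environment, rewards and hyperparameters here are all illustrative assumptions, not a standard benchmark.

```python
import random

N_STATES = 5                                # corridor cells 0..4; treasure in cell 4
Q = [[0.0, 0.0] for _ in range(N_STATES)]   # Q[state][action]; 0 = left, 1 = right
alpha, gamma, epsilon = 0.5, 0.9, 0.2
random.seed(0)

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # Explore occasionally, otherwise act greedily on current experience.
        if random.random() < epsilon:
            action = random.randrange(2)
        else:
            action = 0 if Q[state][0] >= Q[state][1] else 1
        step = -1 if action == 0 else 1
        next_state = min(max(state + step, 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # Q-learning update: adjust the estimate from this one experience.
        Q[state][action] += alpha * (
            reward + gamma * max(Q[next_state]) - Q[state][action])
        state = next_state

# Near the treasure, the learnt values now clearly favour stepping right.
```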

The diagram shown below summarizes the four types of learning we have learnt so far:

Types of Machine Learning: Supervised, Unsupervised, Semi-supervised and Reinforcement Learning.

Machine Learning – Algorithms

Machine learning is rich in algorithms that allow programmers to pick one that best suits the context. Some of the machine learning algorithms are:

  • Neural networks
  • Decision trees
  • Random forests
  • Support vector machines
  • Nearest-neighbor mapping
  • k-means clustering
  • Self-organizing maps
  • Expectation maximization
  • Bayesian networks
  • Kernel density estimation
  • Principal component analysis
  • Singular value decomposition

Machine Learning Tools and Libraries

To start the journey with machine learning, a learner should know the tools and libraries that are essential to writing machine learning code. Here is a list of such tools and libraries:

Tools

Programming Language

Machine learning can be coded in either the R programming language or Python. Of late, Python has become more popular due to its rich libraries, ease of learning and coding friendliness.

IDE

Machine learning code is widely written in Jupyter Notebook, which simplifies writing Python code and embedding plots and charts. Google Colab is another free tool that you can choose for the same purpose.

Libraries

Scikit-Learn

  • A very popular and beginner-friendly library.
  • Supports most of the standard algorithms from supervised and unsupervised learning.
  • Offers models for data pre-processing and result analysis.
  • Limited support for deep learning.

TensorFlow

  • Supports neural networks and deep learning.
  • Bulkier than scikit-learn.
  • Offers high computational efficiency.
  • Supports many classical machine learning algorithms.

Pandas

The data gathering and preparation stages of machine learning described earlier are taken care of by Pandas. This library:

  • Gathers and prepares data that other libraries of machine learning can use at a later point in time.
  • Gathers data from any type of data source such as text, SQL DB, MS Excel or JSON files.
  • Contains many statistical functionalities that can be used to work on the data that’s gathered.
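A small sketch of that gather-and-prepare role, using an in-memory CSV string (with invented names and scores) as a stand-in for a real file, database or spreadsheet:

```python
import io
import pandas as pd

# A tiny CSV with missing values, standing in for a real data source.
raw = io.StringIO("name,age,score\nAda,36,91\nAlan,,88\nGrace,45,\n")

df = pd.read_csv(raw)            # gather
clean = df.dropna()              # prepare: drop rows with missing values
average = df["score"].mean()     # built-in statistics; missing values are skipped
```

Only the fully populated row survives `dropna()`, while `mean()` quietly ignores the missing score when averaging.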

NumPy and SciPy

NumPy supports the array-based and linear-algebra operations needed while working on data, while SciPy offers many scientific computing routines. NumPy is more widely used than SciPy in real-world machine learning applications.
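For instance, a couple of the array and linear-algebra operations mentioned above look like this in NumPy (the matrix values are arbitrary examples):

```python
import numpy as np

X = np.array([[1.0, 2.0],
              [3.0, 4.0]])
v = np.array([1.0, 1.0])

product = X @ v                  # matrix-vector product -> [3., 7.]
inverse = np.linalg.inv(X)       # linear algebra: the matrix inverse
identity = X @ inverse           # multiplying back gives (almost) the identity
```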

Matplotlib

Matplotlib is a library with an extensive collection of plots and charts. Several higher-level packages are built on top of it; of these, Seaborn is the most popular and is widely used for statistical visualisation.
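A minimal Matplotlib sketch, saving the figure to a file rather than opening a window so it also runs on headless machines (the data and the filename `squares.png` are arbitrary examples):

```python
import matplotlib
matplotlib.use("Agg")            # draw off-screen; no display needed
import matplotlib.pyplot as plt

xs = [1, 2, 3, 4]
ys = [1, 4, 9, 16]

fig, ax = plt.subplots()
ax.plot(xs, ys, marker="o")      # a simple line plot with point markers
ax.set_xlabel("x")
ax.set_ylabel("x squared")
fig.savefig("squares.png")
```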

PyTorch and Keras

These are known for their usage in Deep learning.

  • The PyTorch library is extensively used for deep learning. It is known for its fast computations and is very popular among deep learning programmers.
  • Keras builds on other libraries such as TensorFlow and is apt for developing neural networks.


Machine Learning – Processes

Besides algorithms, machine learning offers many tools and processes that pair well with big data. Among the processes and tools at a developer's disposal are:

  • Data quality and management
  • GUIs that simplify model building and process flows
  • Interactive data exploration
  • Visualised model outputs
  • Comparison of models to choose the best learner
  • Automated model evaluation that identifies the best performers
  • User-friendly model deployment and data-to-decision processes

Machine Learning Use Cases

Here is a list of five use cases that are based on machine learning:

  • PayPal: The online money-transfer giant uses machine learning to detect suspicious activity in financial transactions.
  • Amazon: The company's digital assistant, Alexa, is a prime example of machine learning applied to speech processing. The online retail giant also uses machine learning to display recommendations to its customers.
  • Facebook: The social media company uses machine learning extensively to filter out spam posts and forwards, and to weed out poor-quality content.
  • IBM: The company's self-driven vehicle uses machine learning to decide whether to give driving control to a human or to the computer.
  • Kaspersky: The anti-virus company uses machine learning to detect security breaches and unknown malware threats, and to provide high-quality endpoint security for businesses.

Which Industries Use Machine Learning?

As we have just seen, machine learning is being adopted in many industries for the potential advantages it offers. It can be applied to any industry that deals with huge volumes of data and has many challenges to solve. For instance, machine learning has proved extremely useful to organizations in the following domains, which are making the best use of the technology:

Pharmaceuticals

The pharma industry spends billions of dollars on drug design and testing every year across the globe. Machine learning helps cut these costs and obtain accurate results by analysing the complete data on drugs and their chemical compounds and comparing it against various other parameters.

Banks and Financial Services

This industry has two major needs: attracting investors and increasing investments, and staying alert to prevent financial fraud and cyber threats. Machine learning handles both tasks with ease and accuracy.

Health Care and Treatments

By predicting the diseases that could affect a patient, based on medical, genetic and lifestyle data, machine learning helps patients stay alert to probable health threats. Wearable smart devices are one example of machine learning applications in health care.

Online Sales

Through machine learning, companies study the patterns online shoppers follow and use the results to display related ads, offers and discounts. Personalisation of the internet shopping experience, merchandise supply planning and marketing campaigns are all driven by machine learning results.

Mining, Oil and Gas

Machine learning helps in accurately predicting the most promising locations of minerals, gas, oil and other natural resources, a task that would otherwise require huge investments, manpower and time.

Government Schemes

Many governments are taking the help of machine learning to study the interests and needs of their people. They are accordingly using the results in plans and schemes, both for the betterment of people and optimum usage of financial resources.

Space Exploration and Science Studies

Machine learning greatly helps in studying stars, planets and finding out the secrets of other celestial bodies with far lesser investments and manpower. Scientists are also maximising the use of machine learning to discover various fascinating facts about the earth and its components.

Future of Machine Learning


Currently, machine learning is entering our lives with baby steps. By the next decade, radical changes can be expected in machine learning and the way it impacts our lives. Customers have already started trusting the power and comfort of machine learning, and would definitely welcome more such innovations in the near future.

Gartner says:

“Artificial Intelligence and Machine Learning have reached a critical tipping point and will increasingly augment and extend virtually every technology-enabled service, thing, or application.”

So, it would not be surprising if in the future, machine learning would:

  • Make its entry into almost every aspect of human life
  • Be omnipresent in business and industry, irrespective of size
  • Enter cloud-based services
  • Bring drastic changes to CPU design to meet the need for computational efficiency
  • Altogether change the shape of data, its processing and its usage
  • Change the way connected systems work and look, owing to the ever-increasing data on the internet

Conclusion


Machine learning is transformative in its own way. While many experts raise concerns over our ever-increasing dependence on it in everyday life, on the positive side machine learning can work wonders. And the world is already witnessing its magic: in health care, the finance and automotive industries, image processing, voice recognition and many other fields.

While many of us worry that machines may take over the world, it is entirely up to us how we design effective yet safe and controllable machines. There is no doubt that machine learning will change the way we do many things, including education, business and health services, making the world a safer and better place.


Animikh Aich

Computer Vision Engineer

Animikh Aich is a Deep Learning enthusiast, currently working as a Computer Vision Engineer. His work includes three International Conference publications and several projects based on Computer Vision and Machine Learning.


Suggested Blogs

Role of Unstructured Data in Data Science

Data has become the new game changer for businesses. Typically, data scientists categorize data into three broad divisions - structured, semi-structured, and unstructured data. In this article, you will get to know about unstructured data, sources of unstructured data, unstructured data vs. structured data, the use of structured and unstructured data in machine learning, and the difference between structured and unstructured data. Let us first understand what is unstructured data with examples. What is unstructured data? Unstructured data is a kind of data format where there is no organized form or type of data. Videos, texts, images, document files, audio materials, email contents and more are considered to be unstructured data. It is the most copious form of business data, and cannot be stored in a structured database or relational database. Some examples of unstructured data are the photos we post on social media platforms, the tagging we do, the multimedia files we upload, and the documents we share. Seagate predicts that the global data-sphere will expand to 163 zettabytes by 2025, where most of the data will be in the unstructured format. Characteristics of Unstructured DataUnstructured data cannot be organized in a predefined fashion, and is not a homogenous data model. This makes it difficult to manage. Apart from that, these are the other characteristics of unstructured data. You cannot store unstructured data in the form of rows and columns as we do in a database table. Unstructured data is heterogeneous in structure and does not have any specific data model. The creation of such data does not follow any semantics or habits. Due to the lack of any particular sequence or format, it is difficult to manage. Such data does not have an identifiable structure. Sources of Unstructured Data There are various sources of unstructured data. 
Some of them are: Content websites Social networking sites Online images Memos Reports and research papers Documents, spreadsheets, and presentations Audio mining, chatbots Surveys Feedback systems Advantages of Unstructured Data Unstructured data has become exceptionally easy to store because of MongoDB, Cassandra, or even using JSON. Modern NoSQL databases and software allows data engineers to collect and extract data from various sources. There are numerous benefits that enterprises and businesses can gain from unstructured data. These are: With the advent of unstructured data, we can store data that lacks a proper format or structure. There is no fixed schema or data structure for storing such data, which gives flexibility in storing data of different genres. Unstructured data is much more portable by nature. Unstructured data is scalable and flexible to store. Database systems like MongoDB, Cassandra, etc., can easily handle the heterogeneous properties of unstructured data. Different applications and platforms produce unstructured data that becomes useful in business intelligence, unstructured data analytics, and various other fields. Unstructured data analysis allows finding comprehensive data stories from data like email contents, website information, social media posts, mobile data, cache files and more. Unstructured data, along with data analytics, helps companies improve customer experience. Detection of the taste of consumers and their choices becomes easy because of unstructured data analysis. Disadvantages of Unstructured data Storing and managing unstructured data is difficult because there is no proper structure or schema. Data indexing is also a substantial challenge and hence becomes unclear due to its disorganized nature. Search results from an unstructured dataset are also not accurate because it does not have predefined attributes. Data security is also a challenge due to the heterogeneous form of data. 
Problems faced and solutions for storing unstructured data. Until recently, it was challenging to store, evaluate, and manage unstructured data. But with the advent of modern data analysis tools, algorithms, CAS (content addressable storage system), and big data technologies, storage and evaluation became easy. Let us first take a look at the various challenges used for storing unstructured data. Storing unstructured data requires a large amount of space. Indexing of unstructured data is a hectic task. Database operations such as deleting and updating become difficult because of the disorganized nature of the data. Storing and managing video, audio, image file, emails, social media data is also challenging. Unstructured data increases the storage cost. For solving such issues, there are some particular approaches. These are: CAS system helps in storing unstructured data efficiently. We can preserve unstructured data in XML format. Developers can store unstructured data in an RDBMS system supporting BLOB. We can convert unstructured data into flexible formats so that evaluating and storage becomes easy. Let us now understand the differences between unstructured data vs. structured data. Unstructured Data Vs. Structured Data In this section, we will understand the difference between structured and unstructured data with examples. 
STRUCTUREDUNSTRUCTUREDStructured data resides in an organized format in a typical database.Unstructured data cannot reside in an organized format, and hence we cannot store it in a typical database.We can store structured data in SQL database tables having rows and columns.Storing and managing unstructured data requires specialized databases, along with a variety of business intelligence and analytics applications.It is tough to scale a database schema.It is highly scalable.Structured data gets generated in colleges, universities, banks, companies where people have to deal with names, date of birth, salary, marks and so on.We generate or find unstructured data in social media platforms, emails, analyzed data for business intelligence, call centers, chatbots and so on.Queries in structured data allow complex joining.Unstructured data allows only textual queries.The schema of a structured dataset is less flexible and dependent.An unstructured dataset is flexible but does not have any particular schema.It has various concurrency techniques.It has no concurrency techniques.We can use SQL, MySQL, SQLite, Oracle DB, Teradata to store structured data.We can use NoSQL (Not Only SQL) to store unstructured data.Types of Unstructured Data Do you have any idea just how much of unstructured data we produce and from what sources? Unstructured data includes all those forms of data that we cannot actively manage in an RDBMS system that is a transactional system. We can store structured data in the form of records. But this is not the case with unstructured data. Before the advent of object-based storage, most of the unstructured data was stored in file-based systems. Here are some of the types of unstructured data. Rich media content: Entertainment files, surveillance data, multimedia email attachments, geospatial data, audio files (call center and other recorded audio), weather reports (graphical), etc., comes under this genre. 
Document data: Invoices, text-file records, email contents, productivity applications, etc., are included under this genre. Internet of Things (IoT) data: Ticker data, sensor data, data from other IoT devices come under this genre. Apart from all these, data from business intelligence and analysis, machine learning datasets, and artificial intelligence data training datasets are also a separate genre of unstructured data. Examples of Unstructured Data There are various sources from where we can obtain unstructured data. The prominent use of this data is in unstructured data analytics. Let us now understand what are some examples of unstructured data and their sources – Healthcare industries generate a massive volume of human as well as machine-generated unstructured data. Human-generated unstructured data could be in the form of patient-doctor or patient-nurse conversations, which are usually recorded in audio or text formats. Unstructured data generated by machines includes emergency video camera footage, surgical robots, data accumulated from medical imaging devices like endoscopes, laparoscopes and more.  Social Media is an intrinsic entity of our daily life. Billions of people come together to join channels, share different thoughts, and exchange information with their loved ones. They create and share such data over social media platforms in the form of images, video clips, audio messages, tagging people (this helps companies to map relations between two or more people), entertainment data, educational data, geolocations, texts, etc. Other spectra of data generated from social media platforms are behavior patterns, perceptions, influencers, trends, news, and events. Business and corporate documents generate a multitude of unstructured data such as emails, presentations, reports containing texts, images, presentation reports, video contents, feedback and much more. 
These documents help to create knowledge repositories within an organization to make better implicit operations. Live chat, video conferencing, web meeting, chatbot-customer messages, surveillance data are other prominent examples of unstructured data that companies can cultivate to get more insights into the details of a person. Some prominent examples of unstructured data used in enterprises and organizations are: Reports and documents, like Word files or PDF files Multimedia files, such as audio, images, designed texts, themes, and videos System logs Medical images Flat files Scanned documents (which are images that hold numbers and text – for example, OCR) Biometric data Unstructured Data Analytics Tools  You might be wondering what tools can come into use to gather and analyze information that does not have a predefined structure or model. Various tools and programming languages use structured and unstructured data for machine learning and data analysis. These are: Tableau MonkeyLearn Apache Spark SAS Python MS. Excel RapidMiner KNIME QlikView Python programming R programming Many cloud services (like Amazon AWS, Microsoft Azure, IBM Cloud, Google Cloud) also offer unstructured data analysis solutions bundled with their services. How to analyze unstructured data? In the past, the process of storage and analysis of unstructured data was not well defined. Enterprises used to carry out this kind of analysis manually. But with the advent of modern tools and programming languages, most of the unstructured data analysis methods became highly advanced. AI-powered tools use algorithms designed precisely to help to break down unstructured data for analysis. Unstructured data analytics tools, along with Natural language processing (NLP) and machine learning algorithms, help advanced software tools analyze and extract analytical data from the unstructured datasets. 
Before using these tools for analyzing unstructured data, you must properly go through a few steps and keep these points in mind. Set a clear goal for analyzing the data: It is essential to clear your intention about what insights you want to extract from your unstructured data. Knowing this will help you distinguish what type of data you are planning to accumulate. Collect relevant data: Unstructured data is available everywhere, whether it's a social media platform, online feedback or reviews, or a survey form. Depending on the previous point, that is your goal - you have to be precise about what data you want to collect in real-time. Also, keep in mind whether your collected details are relevant or not. Clean your data: Data cleaning or data cleansing is a significant process to detect corrupt or irrelevant data from the dataset, followed by modifying or deleting the coarse and sloppy data. This phase is also known as the data-preprocessing phase, where you have to reduce the noise, carry out data slicing for meaningful representation, and remove unnecessary data. Use Technology and tools: Once you perform the data cleaning, it is time to utilize unstructured data analysis tools to prepare and cultivate the insights from your data. Technologies used for unstructured data storage (NoSQL) can help in managing your flow of data. Other tools and programming libraries like Tableau, Matplotlib, Pandas, and Google Data Studio allows us to extract and visualize unstructured data. Data can be visualized and presented in the form of compelling graphs, plots, and charts. How to Extract information from Unstructured Data? With the growth in digitization during the information era, repetitious transactions in data cause data flooding. The exponential accretion in the speed of digital data creation has brought a whole new domain of understanding user interaction with the online world. 
According to Gartner, 80% of the data created by an organization or its application is unstructured. While extracting exact information through appropriate analysis of organized data is not yet possible, even obtaining a decent sense of this unstructured data is quite tough. Until now, there are no perfect tools to analyze unstructured data. But algorithms and tools designed using machine learning, Natural language processing, Deep learning, and Graph Analysis (a mathematical method for estimating graph structures) help us to get the upper hand in extracting information from unstructured data. Other neural network models like modern linguistic models follow unsupervised learning techniques to gain a good 'knowledge' about the unstructured dataset before going into a specific supervised learning step. AI-based algorithms and technologies are capable enough to extract keywords, locations, phone numbers, analyze image meaning (through digital image processing). We can then understand what to evaluate and identify information that is essential to your business. ConclusionUnstructured data is found abundantly from sources like documents, records, emails, social media posts, feedbacks, call-records, log-in session data, video, audio, and images. Manually analyzing unstructured data is very time-consuming and can be very boring at the same time. With the growth of data science and machine learning algorithms and models, it has become easy to gather and analyze insights from unstructured information.  According to some research, data analytics tools like MonkeyLearn Studio, Tableau, RapidMiner help analyze unstructured data 1200x faster than the manual approach. Analyzing such data will help you learn more about your customers as well as competitors. Text analysis software, along with machine learning models, will help you dig deep into such datasets and make you gain an in-depth understanding of the overall scenario with fine-grained analyses.
5745
Role of Unstructured Data in Data Science

Data has become the new game changer for busines... Read More

What Is Statistical Analysis and Its Business Applications?

Statistics is a science concerned with collection, analysis, interpretation, and presentation of data. In Statistics, we generally want to study a population. You may consider a population as a collection of things, persons, or objects under experiment or study. It is usually not possible to gain access to all of the information from the entire population due to logistical reasons. So, when we want to study a population, we generally select a sample. In sampling, we select a portion (or subset) of the larger population and then study the portion (or the sample) to learn about the population. Data is the result of sampling from a population.Major ClassificationThere are two basic branches of Statistics – Descriptive and Inferential statistics. Let us understand the two branches in brief. Descriptive statistics Descriptive statistics involves organizing and summarizing the data for better and easier understanding. Unlike Inferential statistics, Descriptive statistics seeks to describe the data, however, it does not attempt to draw inferences from the sample to the whole population. We simply describe the data in a sample. It is not developed on the basis of probability unlike Inferential statistics. Descriptive statistics is further broken into two categories – Measure of Central Tendency and Measures of Variability. Inferential statisticsInferential statistics is the method of estimating the population parameter based on the sample information. It applies dimensions from sample groups in an experiment to contrast the conduct group and make overviews on the large population sample. Please note that the inferential statistics are effective and valuable only when examining each member of the group is difficult. Let us understand Descriptive and Inferential statistics with the help of an example. Task – Suppose, you need to calculate the score of the players who scored a century in a cricket tournament.  
Solution: Using Descriptive statistics you can get the desired results.   Task – Now, you need the overall score of the players who scored a century in the cricket tournament.  Solution: Applying the knowledge of Inferential statistics will help you in getting your desired results.  Top Five Considerations for Statistical Data AnalysisData can be messy. Even a small blunder may cost you a fortune. Therefore, special care when working with statistical data is of utmost importance. Here are a few key takeaways you must consider to minimize errors and improve accuracy. Define the purpose and determine the location where the publication will take place.  Understand the assets to undertake the investigation. Understand the individual capability of appropriately managing and understanding the analysis.  Determine whether there is a need to repeat the process.  Know the expectation of the individuals evaluating reviewing, committee, and supervision. Statistics and ParametersDetermining the sample size requires understanding statistics and parameters. The two being very closely related are often confused and sometimes hard to distinguish.  StatisticsA statistic is merely a portion of a target sample. It refers to the measure of the values calculated from the population.  A parameter is a fixed and unknown numerical value used for describing the entire population. The most commonly used parameters are: Mean Median Mode Mean :  The mean is the average or the most common value in a data sample or a population. It is also referred to as the expected value. Formula: Sum of the total number of observations/the number of observations. Experimental data set: 2, 4, 6, 8, 10, 12, 14, 16, 18, 20  Calculating mean:   (2 + 4 + 6 + 8 + 10 + 12 + 14 + 16 + 18 + 20)/10  = 110/10   = 11 Median:  In statistics, the median is the value separating the higher half from the lower half of a data sample, a population, or a probability distribution. 
It is the middle value obtained by arranging the data in increasing or decreasing order.

Formula (for a data set of size n, sorted in increasing order):
When n is odd: Median = ((n + 1)/2)th term
Case I (n is odd):
Experimental data set = 1, 2, 3, 4, 5
Median (n = 5) = ((5 + 1)/2)th term = (6/2)th term = 3rd term
Therefore, the median is 3.

When n is even: Median = [(n/2)th term + (n/2 + 1)th term] / 2
Case II (n is even):
Experimental data set = 1, 2, 3, 4, 5, 6
Median (n = 6) = [(6/2)th term + (6/2 + 1)th term]/2 = (3rd term + 4th term)/2 = (3 + 4)/2 = 7/2 = 3.5
Therefore, the median is 3.5.

Mode: The mode is the value that appears most often in a data set or population.
Experimental data set = 1, 2, 2, 2, 3, 3, 3, 3, 3, 4, 4, 4, 5, 6
Mode = 3 (since 3 is the most repeated element in the sequence).

Terms Used to Describe Data
When working with data, you will need to search, inspect, and characterize it. To discuss data precisely, a few statistical terms are used to describe it individually or in groups. The most frequently used terms include data point, quantitative variable, indicator, statistic, time-series data, variable, data aggregation, dataset, and database. Let us define each in brief:
Data point: a single unit of observation recorded for interpretation.
Quantitative variable: a variable whose values are expressed numerically.
Indicator: a measure that summarizes the state of a community's socio-economic surroundings.
Time-series data: data recorded sequentially over time.
Data aggregation: the process of combining data points into summarized data sets.
Database: an organized collection of information for examination and retrieval.
Time series: a set of measurements of a variable documented over a specified period.

Step-by-Step Statistical Analysis Process
The statistical analysis process involves five steps, followed one after another.
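Before moving on, the worked mean, median, and mode examples above can be verified with Python's built-in statistics module, using the same data sets from this section:

```python
import statistics

# Mean of the experimental data set: 110 / 10 = 11
mean_value = statistics.mean([2, 4, 6, 8, 10, 12, 14, 16, 18, 20])

# Median of an odd-sized data set: the 3rd term of the sorted data
median_odd = statistics.median([1, 2, 3, 4, 5])

# Median of an even-sized data set: average of the 3rd and 4th terms
median_even = statistics.median([1, 2, 3, 4, 5, 6])

# Mode: the most frequently occurring value
mode_value = statistics.mode([1, 2, 2, 2, 3, 3, 3, 3, 3, 4, 4, 4, 5, 6])
```

Running this confirms the hand calculations: the mean is 11, the medians are 3 and 3.5, and the mode is 3.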
Step 1: Design the study and identify the population to be studied.
Step 2: Collect data as samples.
Step 3: Describe the data in the sample.
Step 4: Make inferences about the population with the help of the samples and calculations.
Step 5: Take action based on the findings.

Data distribution
A data distribution is a listing that displays every possible value of the data and shows how frequently each value occurs. Distributed data is typically presented in ascending order, or in charts and graphs that make the values and their frequencies visible. The function describing the density of possible values is known as the probability density function.

Percentiles in data distribution
A percentile is the value in a distribution below which a specified percentage of observations fall. Let us understand percentiles with the help of an example. Suppose you scored in the 90th percentile on a math test: this means that roughly 90% of the scores were below yours, and only about 10% were higher. The median is the 50th percentile, because 50% of the values lie below it and 50% above.

Dispersion
Dispersion describes the spread of the values of a variable around its centre, and is captured by several distinct statistics such as range, variance, and standard deviation. A data set with high dispersion has values widely scattered, while one with low dispersion has values tightly clustered.

Histogram
A histogram is a pictorial display that organizes a group of data points into user-specified ranges. It summarizes a data series into an easily interpreted graphic by taking many data points and combining them into reasonable ranges, shown as columns along the x-axis. The y-axis displays the count or percentage of data in each column, which makes histograms useful for picturing data distributions.

Bell curve distribution
A bell curve is the pictorial representation of a normal probability distribution, whose spread around the mean, measured by the standard deviation, creates the characteristic bell-shaped curve.
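The percentile interpretation described above can be sketched as a small helper. Note that `percentile_rank` is a name introduced here purely for illustration, and counting values strictly below the score is only one of several common conventions for computing percentile ranks:

```python
def percentile_rank(scores, value):
    """Percentage of observations in `scores` that fall strictly below `value`."""
    below = sum(1 for s in scores if s < value)
    return 100 * below / len(scores)

# 20 hypothetical maths test scores; 18 of the 20 observations fall below 92,
# so a student scoring 92 sits at the 90th percentile: about 10% scored higher.
scores = [55, 60, 62, 64, 66, 68, 70, 71, 72, 74,
          75, 76, 78, 80, 82, 84, 86, 88, 92, 95]
rank = percentile_rank(scores, 92)  # 90.0
```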
The peak of the curve represents the most likely value in the data, and the other possible outcomes are symmetrically dispersed around the mean, forming a downward-sloping curve on each side of the peak. The breadth of the curve is determined by the standard deviation.

Hypothesis testing
Hypothesis testing is a process in which analysts test a theory about a population parameter. It aims to evaluate the credibility of a hypothesis using sample data. The five steps involved in hypothesis testing are:
State the null hypothesis. (The null hypothesis asserts that there is no effect, connection, or difference among the factors being studied.)
State the alternative hypothesis.
Set the significance level of the test.
Compute the test statistic and the corresponding p-value. The p-value is the probability of obtaining a sample statistic at least as extreme as the one observed, assuming the null hypothesis is true.
Draw a conclusion and report it in terms of the alternative hypothesis.

Types of variables
A variable is any number, amount, or characteristic that is countable or measurable. Simply put, it is a characteristic that varies. The six types of variables include the following:
Dependent variable: a variable whose values vary according to the value of another variable, known as the independent variable.
Independent variable: a variable controlled by the experimenter; its values are recorded and compared.
Intervening variable: a variable that explains the underlying relation between other variables.
Moderator variable: a variable that affects the strength of the connection between the dependent and independent variables.
Control variable: anything held constant throughout a research study.
Extraneous variable: any additional variable that is not of interest but can affect the experimental outcome.

Chi-square test
The chi-square test compares a model's expected outcomes with the actual experimental data.
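The five hypothesis-testing steps above can be sketched with a one-sample, two-sided z-test. This assumes, for illustration, a known population standard deviation; the sample figures are hypothetical, and the normal CDF is built from Python's math.erf rather than an external library:

```python
import math

def normal_cdf(x):
    """Standard normal cumulative distribution function via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

# Steps 1-2: H0: population mean = 100; H1: mean != 100 (two-sided).
mu0, sigma = 100, 15          # hypothesized mean, known population sd (assumed)
sample_mean, n = 106, 36      # hypothetical sample results
alpha = 0.05                  # Step 3: significance level

# Step 4: test statistic and two-sided p-value.
z = (sample_mean - mu0) / (sigma / math.sqrt(n))  # z = 6 / 2.5 = 2.4
p_value = 2 * (1 - normal_cdf(abs(z)))

# Step 5: draw a conclusion about the alternative hypothesis.
reject_null = p_value < alpha
```

With these numbers the p-value comes out near 0.016, below the 0.05 significance level, so the null hypothesis is rejected.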
It requires that the data be random, raw, mutually exclusive, drawn from independent variables, and taken from a sufficiently large sample. The test relates the size of any inconsistencies between the expected outcomes and the actual outcomes, given the sample size and the number of variables in the relationship.

Types of Frequencies
Frequency refers to the number of times a value occurs in an experiment over a given period. Three types of frequency distribution include the following:
Grouped and ungrouped frequency distributions
Cumulative and relative frequency distributions
Relative cumulative frequency distributions

Features of Frequencies
Measures of central tendency and position (mean, median, and mode).
Measures of dispersion (range, variance, and standard deviation).
Degree of symmetry (skewness).
Peakedness (kurtosis).

Correlation Matrix
A correlation matrix is a table that shows the correlation coefficients between pairs of variables. It is a powerful tool for summarizing a dataset and visualizing patterns in the data: each row and column represents a variable, and each cell holds the correlation between the two. Correlation matrices are also used in combination with other kinds of statistical analysis.

Inferential Statistics
Inferential statistics uses random samples of data to draw inferences about a population. It is used when analyzing every individual of an entire group is not feasible.

Applications of Inferential Statistics
Educational research: It is usually not possible to sample the entire population of interest. For instance, the aim of a study may be to determine whether a new method of teaching mathematics improves mathematical achievement for all students in a class.
Marketing organizations: Marketing organizations use inferential statistics to draw conclusions from surveys, because carrying out a survey of every individual about a product is not feasible.
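The chi-square comparison of expected and actual outcomes described earlier reduces to the statistic chi² = Σ (O − E)² / E, which can be computed directly; the observed and expected counts below are made up for illustration:

```python
def chi_square_statistic(observed, expected):
    """Chi-square statistic: sum of (O - E)^2 / E over all categories."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# A model expects 100 outcomes split 40 / 40 / 20 across three categories;
# the experiment actually produced 50 / 30 / 20.
observed = [50, 30, 20]
expected = [40, 40, 20]
chi2 = chi_square_statistic(observed, expected)  # (100/40) + (100/40) + 0 = 5.0
```

The larger the statistic, the larger the inconsistency between the model and the data; in practice it is compared against a chi-square distribution with the appropriate degrees of freedom.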
Finance departments: Finance departments apply inferential statistics to forecast budgets and resource expenses, especially when several uncertain factors are involved; since economists cannot estimate everything with certainty, they work with probabilities.
Economic planning: Economic planning uses powerful methods such as index numbers, time-series analysis, and forecasting. Inferential statistics helps measure national income and its components, gathering information about revenue, investment, saving, and spending to establish the links among them.

Key Takeaways
Statistical analysis is the gathering and interpretation of data to uncover patterns and trends.
Two broad divisions of analysis are statistical and non-statistical analysis.
Descriptive and inferential statistics are the two main categories of statistical analysis: descriptive statistics describes the data in a sample, whereas inferential statistics draws conclusions about differences between sample groups and the wider population.
Statistics teaches us how to use limited samples to generate intelligent and precise conclusions about a large group.
Mean, median, and mode are the measures used to describe central tendency.

Conclusion
Statistical analysis is the procedure of gathering and examining data to recognize patterns and trends. It uses random samples of data obtained from a population to draw inferences about the group as a whole. Statistical analysis finds applications in all major sectors: marketing, finance, economics, operations, and data mining. In economic planning it supports methods such as index numbers, time-series analysis, and forecasting, and in marketing it helps organizations draw conclusions from surveys about their products.
Measures of Dispersion: All You Need to Know

What is Dispersion in Statistics
Dispersion in statistics is a way of describing how spread out a set of data is. It is the state of data being dispersed, stretched, or spread out across different values, and it involves finding the size of the distribution of values expected for a particular variable. The statistical meaning of dispersion is numeric data that is likely to vary about an average value. Dispersion helps one understand a dataset by characterizing its spread with specific measures such as variance, standard deviation, and range, and it provides an objectively quantifiable way to judge the quality of data. An absolute measure of dispersion carries the same unit as the quantity being measured. There are many measures of dispersion that help us gain more insight into the data:
Range
Variance
Standard Deviation
Skewness
IQR

Types of Measure of Dispersion
Measures of dispersion are divided into two main categories, offering different ways of quantifying the diverse nature of data; they are widely used in fields such as biological statistics. We can classify them by checking whether they carry units or not:
Absolute Measure of Dispersion
Relative Measure of Dispersion

Absolute Measure of Dispersion
An absolute measure of dispersion has units: it is expressed in the same unit as the original dataset, such as rupees, centimetres, marks, or kilograms, depending on what is being measured. It is often stated in terms of the average of the deviations of the observations, as in the mean deviation or standard deviation.
Types of Absolute Measure of Dispersion:
Range: The range is the difference between the largest and smallest values in the data.
The range is the simplest measure of dispersion.
Example: 1, 2, 3, 4, 5, 6, 7
Range = Highest value - Lowest value = 7 - 1 = 6

Mean (μ): The mean is the average of the numbers. To calculate the mean, add all the observations and divide by the total number of observations.
Example: 1, 2, 3, 4, 5, 6, 7, 8
Mean = (sum of all the terms) / (total number of terms) = (1 + 2 + 3 + 4 + 5 + 6 + 7 + 8) / 8 = 36 / 8 = 4.5

Variance (σ²): The variance is calculated by summing the squared distance of each term in the distribution from the mean, and then dividing by the total number of terms. It shows how far a value, for example a student's mark in an exam, lies from the mean of the entire class.
Formula: σ² = ∑(X − μ)² / N

Standard Deviation (σ): The standard deviation is the square root of the variance, so to find the standard deviation of any data you first find the variance.
Formula: Standard Deviation = √Variance

Quartiles: Quartiles divide a sorted list of numbers into quarters.
Quartile Deviation: The quartile deviation is half the difference between the upper and lower quartiles; the difference itself is known as the interquartile range.
Formula: Interquartile Range = Q3 − Q1; Quartile Deviation = (Q3 − Q1)/2

Mean Deviation: Mean deviation, also known as average deviation, can be computed using either the mean or the median of the data. It is the arithmetic mean of the absolute deviations of the observations from the chosen measure of central tendency.
Formula:
Mean deviation about the mean: ∑|X − M| / N, where M is the mean
Mean deviation about the median: ∑|X − X1| / N, where X1 is the median

Relative Measure of Dispersion
Relative measures of dispersion are values without units. A relative measure of dispersion is used to compare the distributions of two or more datasets.
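The absolute measures described above (range, variance, standard deviation, and mean deviation) can each be computed in a few lines with Python's statistics module; the data set 1 to 7 is the same one used in the range example:

```python
import statistics

data = [1, 2, 3, 4, 5, 6, 7]

data_range = max(data) - min(data)     # 7 - 1 = 6
mean = statistics.mean(data)           # 28 / 7 = 4
variance = statistics.pvariance(data)  # population variance: 28 / 7 = 4
std_dev = statistics.pstdev(data)      # sqrt(variance) = 2.0

# Mean deviation about the mean: sum of |X - M| divided by N
mean_deviation = sum(abs(x - mean) for x in data) / len(data)  # 12 / 7
```

Note that pvariance and pstdev divide by N, matching the population formulas given above; the sample versions (variance, stdev) divide by N - 1 instead.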
The definition of a relative measure of dispersion parallels that of an absolute measure; the only difference is that it is a unitless ratio. Relative measures of dispersion are coefficients of dispersion, mainly used to compare two series that differ widely in their averages or that are measured in different units.

Types of Relative Measure of Dispersion:
1. Coefficient of Range: the ratio of the difference between the largest and smallest terms of the distribution to the sum of the largest and smallest terms.
Formula: (L − S) / (L + S), where L = largest value and S = smallest value.
2. Coefficient of Variation: used to compare two data sets with respect to homogeneity or consistency.
Formula: C.V. = (σ / X̄) × 100, where σ = standard deviation and X̄ = mean.
3. Coefficient of Standard Deviation: the ratio of the standard deviation to the mean of the distribution.
Formula: σ / X̄, where σ = standard deviation and X̄ = mean.
4. Coefficient of Quartile Deviation: the ratio of the difference between the upper and lower quartiles to their sum.
Formula: (Q3 − Q1) / (Q3 + Q1), where Q3 = upper quartile and Q1 = lower quartile.
5. Coefficient of Mean Deviation: the mean deviation divided by the average (mean or median) about which it was computed.
Mean deviation about the mean: ∑|X − M| / N
Mean deviation about the median: ∑|X − X1| / N

Why dispersion is important in statistics
Knowledge of dispersion is vital to understanding statistics. It helps one grasp concepts such as the diversification of the data, how the data is spread, and how it is distributed around the central value or central tendency.
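A minimal sketch of the relative (unitless) coefficients above, computed with Python's statistics module on the same sample data; the variable names are local helpers introduced here for illustration:

```python
import statistics

data = [1, 2, 3, 4, 5, 6, 7]

# Coefficient of range: (L - S) / (L + S)
coeff_range = (max(data) - min(data)) / (max(data) + min(data))  # 6 / 8 = 0.75

# Coefficient of variation: (standard deviation / mean) * 100
cv = statistics.pstdev(data) / statistics.mean(data) * 100       # (2 / 4) * 100 = 50.0

# Coefficient of standard deviation: standard deviation / mean
coeff_sd = statistics.pstdev(data) / statistics.mean(data)       # 0.5
```

Because each coefficient is a ratio, the units cancel, which is what makes these measures suitable for comparing series measured in different units.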
Moreover, dispersion in statistics gives us a way to obtain better insights into the distribution of data. For example, three distinct samples can have the same mean, median, or range but completely different levels of variability.

How to Calculate Dispersion
Dispersion can be calculated using the various measures already described in the types of measure of dispersion above. Before measuring, it is important to understand the variation among the terms. The following measures can be used to calculate dispersion:
Range
Variance
Standard deviation
Quartile deviation
For example, consider two datasets:
Data A: 97, 98, 99, 100, 101, 102, 103
Data B: 70, 80, 90, 100, 110, 120, 130
Both datasets have the same mean and median, 100. However, their dispersion measures are totally different: the range of B (60), for instance, is 10 times the range of A (6).

How to represent Dispersion in Statistics
Dispersion in statistics can be represented in the form of graphs and plots. Some of the different ways used include:
Dot plots
Box plots
Stem-and-leaf plots
Example: What is the variance of the values 3, 8, 6, 10, 12, 9, 11, 10, 12, 7?
The variance can be calculated using the formula σ² = ∑(X − μ)² / N. Here the mean μ = 88/10 = 8.8, and σ² = 73.6/10 = 7.36.

What is an example of dispersion?
One example of dispersion outside the world of statistics is the rainbow, where white light is split into seven different colours separated by their wavelengths. Some statistical ways of measuring dispersion are:
Standard deviation
Range
Mean absolute difference
Median absolute deviation
Interquartile range
Average deviation

Conclusion
Dispersion in statistics refers to the measure of the variability of data. Such variability may arise from random measurement errors, where some instrumental measurements are found to be imprecise.
Dispersion is a statistical way of describing how the terms in a dataset are spread out: the more widely the values vary, the more scattered the data, whether the set contains 5 to 10 values or 10,000. This spread is summarized by the range and the other descriptive measures above, and it can be represented using dot plots, box plots, and other kinds of charts.
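The Data A / Data B example from this article can be reproduced directly to confirm that identical central tendency can hide very different dispersion:

```python
import statistics

data_a = [97, 98, 99, 100, 101, 102, 103]
data_b = [70, 80, 90, 100, 110, 120, 130]

# Same centre: both datasets have mean and median equal to 100...
assert statistics.mean(data_a) == statistics.mean(data_b) == 100
assert statistics.median(data_a) == statistics.median(data_b) == 100

# ...but very different spread: B's range and standard deviation are 10x A's.
range_a = max(data_a) - min(data_a)  # 6
range_b = max(data_b) - min(data_b)  # 60
sd_a = statistics.pstdev(data_a)     # 2.0
sd_b = statistics.pstdev(data_b)     # 20.0
```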