Search

Machine learning Filter

What is Bias-Variance Tradeoff in Machine Learning

What is Machine Learning? Machine Learning is a multidisciplinary field of study, which gives computers the ability to solve complex problems, which otherwise would be nearly impossible to be hand-coded by a human being. Machine Learning is a scientific field of study which involves the use of algorithms and statistics to perform a given task by relying on inference from data instead of explicit instructions. Machine Learning Process:The process of Machine Learning can be broken down into several parts, most of which is based around “Data”. The following steps show the Machine Learning Process. 1. Gathering Data from various sources: Since Machine Learning is basically the inference drawn from data before any algorithm can be used, data needs to be collected from some source. Data collected can be of any form, viz. Video data, Image data, Audio data, Text data, Statistical data, etc. 2. Cleaning data to have homogeneity: The data that is collected from various sources does not always come in the desired form. More importantly, data contains various irregularities like Missing data and Outliers.These irregularities may cause the Machine Learning Model(s) to perform poorly. Hence, the removal or processing of irregularities is necessary to promote data homogeneity. This step is also known as data pre-processing. 3. Model Building & Selecting the right Machine Learning Model: After the data has been correctly pre-processed, various Machine Learning Algorithms (or Models) are applied on the data to train the model to predict on unseen data, as well as to extract various insights from the data. After various models are “trained” to the data, the best performing model(s) that suit the application and the performance criteria are selected.4. Getting Insights from the model’s results: Once the model is selected, further data is used to validate the performance and accuracy of the model and get insights as to how the model performs under various conditions. 5. Data Visualization: This is the final step, where the model is used to predict unseen and real-world data. However, these predictions are not directly understandable to the user, and hence, data Visualization or converting the results into understandable visual graphs is necessary. At this stage, the model can be deployed to solve real-world problems.How is Machine Learning different from Curve Fitting? To get the similarities out of the way, both, Machine Learning and Curve Fitting rely on data to infer a model which, ideally, fits the data perfectly. The difference comes in the availability of the data. Curve Fitting is carried out with data, all of which is already available to the user. Hence, there is no question of the model to encounter unseen data.However, in Machine Learning, only a part of the data is available to the user at the time of training (fitting) the model, and then the model has to perform equally well on data that it has never encountered before. Which is, in other words, the generalization of the model over a given data, such that it is able to correctly predict when it is deployed.A high-level introduction to Bias and Variance through illustrative and applied examples Let’s initiate the idea of Bias and Variance with a case study. Let’s assume a simple dataset of predicting the price of a house based on its carpet area. Here, the x-axis represents the carpet area of the house, and the y-axis represents the price of the property. The plotted data (in a 2D graph) is shown in the graph below: The goal is to build a model to predict the price of the house, given the carpet area of the property. This is a rather easy problem to solve and can easily be achieved by fitting a curve to the given data points. But, for the time being, let’s concentrate on solving the same using Machine Learning.In order to keep this example simple and concentrate on Bias and Variance, a few assumptions are made:Adequate data is present in order to come up with a working model capable of making relatively accurate predictions.The data is homogeneous in nature and hence no major pre-processing steps are involved.There are no missing values or outliers, and hence they do not interfere with the outcome in any way. The y-axis data-points are independent of the order of the sequence of the x-axis data-points.With the above assumptions, the data is processed to train the model using the following steps: 1. Shuffling the data: Since the y-axis data-points are independent of the order of the sequence of the x-axis data-points, the dataset is shuffled in a pseudo-random manner. This is done to avoid unnecessary patterns from being learned by the model. During the shuffling, it is imperative to keep each x-y pair data point constant. Mixing them up will change the dataset itself and the model will learn inaccurate patterns. 2. Data Splitting: The dataset is split into three categories: Training Set (60%), Validation Set (20%), and Testing Set (20%). These three sets are used for different purposes:Training Set - This part of the dataset is used to train the model. It is also known as the Development Set. Validation Set - This is separate from the Training Set and is only used for model selection. The model does not train or learn from this part of the dataset.Testing Set - This part of the dataset is used for performance evaluation and is completely independent of the Training or Validation Sets. Similar to the Validation Set, the model does not train on this part of the dataset.3. Model Selection: Several Machine Learning Models are applied to the Training Set and their Training and Validation Losses are determined, which then helps determine the most appropriate model for the given dataset.During this step, we assume that a polynomial equation fits the data correctly. The general equation is given below: The process of “Training” mathematically is nothing more than figuring out the appropriate values for the parameters: a0, a1, ... ,an, which is done automatically by the model using the Training Set.The developer does have control over how high the degree of the polynomial can be. These parameters that can be tuned by the developer are called Hyperparameters. These hyperparameters play a key role in deciding how well would the model learn and how generalized will the learned parameters be. Given below are two graphs representing the prediction of the trained model on training data. The graph on the left represents a linear model with an error of 3.6, and the graph on the right represents a polynomial model with an error of 1.7. By looking at the errors, it can be concluded that the polynomial model performs significantly better when compared to the linear model (Lower the error, better is the performance of the model). However, when we use the same trained models on the Testing Set, the models perform very differently. The graph on the left represents the same linear model’s prediction on the Testing Set, and the graph on the right side represents the Polynomial model’s prediction on the Testing Set. It is clearly visible that the Polynomial model inaccurately predicts the outputs when compared to the Linear model.In terms of error, the total error for the Linear model is 3.6 and for the Polynomial model is a whopping 929.12. Such a big difference in errors between the Training and Testing Set clearly signifies that something is wrong with the Polynomial model. This drastic change in error is due to a phenomenon called Bias-Variance Tradeoff.What is “Error” in Machine Learning? Error in Machine Learning is the difference in the expected output and the predicted output of the model. It is a measure of how well the model performs over a given set of data.There are several methods to calculate error in Machine Learning. One of the most commonly used terminologies to represent the error is called the Loss/Cost Function. It is also known as the Mean Squared Error (or MSE) and is given by the following equation:The necessity of minimization of Errors: As it is obvious from the previously shown graphs, the higher the error, the worse the model performs. Hence, the error of the prediction of a model can be considered as a performance measure: Lower the error of a model, the better it performs. In addition to that, a model judges its own performance and trains itself based on the error created between its own output and the expected output. The primary target of the model is to minimize the error so as to get the best parameters that would fit the data perfectly. Total Error: The error mentioned above is the Total Error and consists of three types of errors: Bias + Variance + Irreducible Error. Total Error = Bias + Variance + Irreducible ErrorEven for an ideal model, it is impossible to get rid of all the types of errors. The “irreducible” error rate is caused by the presence of noise in the data and hence is not removable. However, the Bias and Variance errors can be reduced to a minimum and hence, the total error can also be reduced significantly. Why is the splitting of data important? Ideally, the complete dataset is not used to train the model. The dataset is split into three sets: Training, Validation and Testing Sets. Each of these serves a specific role in the development of a model which performs well under most conditions.Training Set (60-80%): The largest portion of the dataset is used for training the Machine Learning Model. The model extracts the features and learns to recognize the patterns in the dataset. The quality and quantity of the training set determines how well the model is going to perform. Testing Set (15-25%): The main goal of every Machine Learning Engineer is to develop a model which would generalize the best over a given dataset. This is achieved by training the model(s) on a portion of the dataset and testing its performance by applying the trained model on another portion of the same/similar dataset that has not been used during training (Testing Set). This is important since the model might perform too well on the training set, but perform poorly on unseen data, as was the case with the example given above. Testing set is primarily used for model performance evaluation.Validation Set (15-25%): In addition to the above, because of the presence of more than one Machine Learning Algorithm (model), it is often not recommended to test the performance of multiple models on the same dataset and then choose the best one. This process is called Model Selection, and for this, a separate part of the training set is used, which is also known as Validation Set. A validation set behaves similar to a testing set but is primarily used in model selection and not in performance evaluation.Bias and Variance - A Technical Introduction What is Bias?Bias is used to allow the Machine Learning Model to learn in a simplified manner. Ideally, the simplest model that is able to learn the entire dataset and predict correctly on it is the best model. Hence, bias is introduced into the model in the view of achieving the simplest model possible.Parameter based learning algorithms usually have high bias and hence are faster to train and easier to understand. However, too much bias causes the model to be oversimplified and hence underfits the data. Hence these models are less flexible and often fail when they are applied on complex problems.Mathematically, it is the difference between the model’s average prediction and the expected value.What is Variance?Variance in data is the variability of the model in a case where different Training Data is used. This would significantly change the estimation of the target function. Statistically, for a given random variable, Variance is the expectation of squared deviation from its mean. In other words, the higher the variance of the model, the more complex the model is and it is able to learn more complex functions. However, if the model is too complex for the given dataset, where a simpler solution is possible, a model with high Variance causes the model to overfit. When the model performs well on the Training Set and fails to perform on the Testing Set, the model is said to have Variance.Characteristics of a biased model A biased model will have the following characteristics:Underfitting: A model with high bias is simpler than it should be and hence tends to underfit the data. In other words, the model fails to learn and acquire the intricate patterns of the dataset. Low Training Accuracy: A biased model will not fit the Training Dataset properly and hence will have low training accuracy (or high training loss). Inability to solve complex problems: A Biased model is too simple and hence is often incapable of learning complex features and solving relatively complex problems.Characteristics of a model with Variance A model with high Variance will have the following characteristics:Overfitting: A model with high Variance will have a tendency to be overly complex. This causes the overfitting of the model.Low Testing Accuracy: A model with high Variance will have very high training accuracy (or very low training loss), but it will have a low testing accuracy (or a low testing loss). Overcomplicating simpler problems: A model with high variance tends to be overly complex and ends up fitting a much more complex curve to a relatively simpler data. The model is thus capable of solving complex problems but incapable of solving simple problems efficiently.What is Bias-Variance Tradeoff? From the understanding of bias and variance individually thus far, it can be concluded that the two are complementary to each other. In other words, if the bias of a model is decreased, the variance of the model automatically increases. The vice-versa is also true, that is if the variance of a model decreases, bias starts to increase.Hence, it can be concluded that it is nearly impossible to have a model with no bias or no variance since decreasing one increases the other. This phenomenon is known as the Bias-Variance TradeA graphical introduction to Bias-Variance Tradeoff In order to get a clear idea about the Bias-Variance Tradeoff, let us consider the bulls-eye diagram. Here, the central red portion of the target can be considered the location where the model correctly predicts the values. As we move away from the central red circle, the error in the prediction starts to increase. Each of the several hits on the target is achieved by repetition of the model building process. Each hit represents the individual realization of the model. As can be seen in the diagram below, the bias and the variance together influence the predictions of the model under different circumstances.Another way of looking at the Bias-Variance Tradeoff graphically is to plot the graphical representation for error, bias, and variance versus the complexity of the model. In the graph shown below, the green dotted line represents variance, the blue dotted line represents bias and the red solid line represents the error in the prediction of the concerned model. Since bias is high for a simpler model and decreases with an increase in model complexity, the line representing bias exponentially decreases as the model complexity increases. Similarly, Variance is high for a more complex model and is low for simpler models. Hence, the line representing variance increases exponentially as the model complexity increases. Finally, it can be seen that on either side, the generalization error is quite high. Both high bias and high variance lead to a higher error rate. The most optimal complexity of the model is right in the middle, where the bias and variance intersect. This part of the graph is shown to produce the least error and is preferred. Also, as discussed earlier, the model underfits for high-bias situations and overfits for high-variance situations.Mathematical Expression of Bias-Variance Tradeoff The expected values is a vector represented by y. The predicted output of the model is denoted by the vector y for input vector x. The relationship between the predicted values and the inputs can be taken as y = f(x) + e, where e is the normally distributed error given by:The third term in the above equation, irreducible_error represents the noise term and cannot be fundamentally reduced by any given model. If hypothetically, infinite data is available, it is possible to tune the model to reduce the bias and variance terms to zero but is not possible to do so practically. Hence, there is always a tradeoff between the minimization of bias and variance. Detection of Bias and Variance of a modelIn model building, it is imperative to have the knowledge to detect if the model is suffering from high bias or high variance. The methods to detect high bias and variance is given below:Detection of High Bias:The model suffers from a very High Training Error.The Validation error is similar in magnitude to the training error.The model is underfitting.Detection of High Variance:The model suffers from a very Low Training Error.The Validation error is very high when compared to the training error.The model is overfitting.A graphical method to Detect a model suffering from High Bias and Variance is shown below: The graph shows the change in error rate with respect to model complexity for training and validation error. The left portion of the graph suffers from High Bias. This can be seen as the training error is quite high along with the validation error. In addition to that, model complexity is quite low. The right portion of the graph suffers from High Variance. This can be seen as the training error is very low, yet the validation error is very high and starts increasing with increasing model complexity.A systematic approach to solve a Bias-Variance Problem by Dr. Andrew Ng:Dr. Andrew Ng proposed a very simple-to-follow step by step architecture to detect and solve a High Bias and High Variance errors in a model. The block diagram is shown below:Detection and Solution to High Bias problem - if the training error is high: Train longer: High bias means a usually less complex model, and hence it requires more training iterations to learn the relevant patterns. Hence, longer training solves the error sometimes.Train a more complex model: As mentioned above, high bias is a result of a less than optimal complexity in the model. Hence, to avoid high bias, the existing model can be swapped out with a more complex model. Obtain more features: It is often possible that the existing dataset lacks the required essential features for effective pattern recognition. To remedy this problem: More features can be collected for the existing data.Feature Engineering can be performed on existing features to extract more non-linear features. Decrease regularization: Regularization is a process to decrease model complexity by regularizing the inputs at different stages, promote generalization and prevent overfitting in the process. Decreasing regularization allows the model to learn the training dataset better. New model architecture: If all of the above-mentioned methods fail to deliver satisfactory results, then it is suggested to try out other new model architectures. Detection and Solution to High Variance problem - if a validation error is high: Obtain more data: High variance is often caused due to a lack of training data. The model complexity and quantity of training data need to be balanced. A model of higher complexity requires a larger quantity of training data. Hence, if the model is suffering from high variance, more datasets can reduce the variance. Decrease number of features: If the dataset consists of too many features for each data-point, the model often starts to suffer from high variance and starts to overfit. Hence, decreasing the number of features is recommended. Increase Regularization: As mentioned above, regularization is a process to decrease model complexity. Hence, if the model is suffering from high variance (which is caused by a complex model), then an increase in regularization can decrease the complexity and help to generalize the model better.New model architecture: Similar to the solution of a model suffering from high bias, if all of the above-mentioned methods fail to deliver satisfactory results, then it is suggested to try out other new model architectures.Conclusion To summarize, Bias and Variance play a major role in the training process of a model. It is necessary to reduce each of these parameters individually to the minimum possible value. However, it should be kept in mind that an effort to decrease one of these parameters beyond a certain limit increases the probability of the other getting increased. This phenomenon is called as the Bias-Variance Tradeoff and is a parameter to consider during model building. 

What is Bias-Variance Tradeoff in Machine Learning

8658
  • by Animikh Aich
  • 25th Jul, 2019
  • Last updated on 11th Mar, 2021
  • 24 mins read
What is Bias-Variance Tradeoff in Machine Learning

What is Machine Learning? 

Machine Learning is a multidisciplinary field of study, which gives computers the ability to solve complex problems, which otherwise would be nearly impossible to be hand-coded by a human being. Machine Learning is a scientific field of study which involves the use of algorithms and statistics to perform a given task by relying on inference from data instead of explicit instructions. 

Machine Learning Process:

Machine Learning Process

The process of Machine Learning can be broken down into several parts, most of which is based around “Data”. The following steps show the Machine Learning Process. 

1. Gathering Data from various sources: Since Machine Learning is basically the inference drawn from data before any algorithm can be used, data needs to be collected from some source. Data collected can be of any form, viz. Video data, Image data, Audio data, Text data, Statistical data, etc.

2. Cleaning data to have homogeneity: The data that is collected from various sources does not always come in the desired form. More importantly, data contains various irregularities like 
Missing data and Outliers.These irregularities may cause the Machine Learning Model(s) to perform poorly. Hence, the removal or processing of irregularities is necessary to promote data homogeneity. This step is also known as data pre-processing.

3. Model Building & Selecting the right Machine Learning Model: After the data has been correctly pre-processed, various Machine Learning Algorithms (or Models) are applied on the data to train the model to predict on unseen data, as well as to extract various insights from the data. After various models are “trained” to the data, the best performing model(s) that suit the application and the performance criteria are selected.

4. Getting Insights from the model’s results: Once the model is selected, further data is used to validate the performance and accuracy of the model and get insights as to how the model performs under various conditions.

5. Data Visualization: This is the final step, where the model is used to predict unseen and real-world data. However, these predictions are not directly understandable to the user, and hence, data Visualization or converting the results into understandable visual graphs is necessary. At this stage, the model can be deployed to solve real-world problems.

How is Machine Learning different from Curve Fitting? 

To get the similarities out of the way, both, Machine Learning and Curve Fitting rely on data to infer a model which, ideally, fits the data perfectly. 

The difference comes in the availability of the data. 

  • Curve Fitting is carried out with data, all of which is already available to the user. Hence, there is no question of the model to encounter unseen data.
  • However, in Machine Learning, only a part of the data is available to the user at the time of training (fitting) the model, and then the model has to perform equally well on data that it has never encountered before. Which is, in other words, the generalization of the model over a given data, such that it is able to correctly predict when it is deployed.

A high-level introduction to Bias and Variance through illustrative and applied examples 

Let’s initiate the idea of Bias and Variance with a case study. Let’s assume a simple dataset of predicting the price of a house based on its carpet area. Here, the x-axis represents the carpet area of the house, and the y-axis represents the price of the property. The plotted data (in a 2D graph) is shown in the graph below: 

Examples of high level introduction to Bias and Variance

The goal is to build a model to predict the price of the house, given the carpet area of the property. This is a rather easy problem to solve and can easily be achieved by fitting a curve to the given data points. But, for the time being, let’s concentrate on solving the same using Machine Learning.

In order to keep this example simple and concentrate on Bias and Variance, a few assumptions are made:

  • Adequate data is present in order to come up with a working model capable of making relatively accurate predictions.
  • The data is homogeneous in nature and hence no major pre-processing steps are involved.
  • There are no missing values or outliers, and hence they do not interfere with the outcome in any way. 
  • The y-axis data-points are independent of the order of the sequence of the x-axis data-points.

With the above assumptions, the data is processed to train the model using the following steps: 

1. Shuffling the data: Since the y-axis data-points are independent of the order of the sequence of the x-axis data-points, the dataset is shuffled in a pseudo-random manner. This is done to avoid unnecessary patterns from being learned by the model. During the shuffling, it is imperative to keep each x-y pair data point constant. Mixing them up will change the dataset itself and the model will learn inaccurate patterns. 

2. Data Splitting: The dataset is split into three categories: Training Set (60%), Validation Set (20%), and Testing Set (20%). These three sets are used for different purposes:

  • Training Set - This part of the dataset is used to train the model. It is also known as the Development Set. 
  • Validation Set - This is separate from the Training Set and is only used for model selection. The model does not train or learn from this part of the dataset.
  • Testing Set - This part of the dataset is used for performance evaluation and is completely independent of the Training or Validation Sets. Similar to the Validation Set, the model does not train on this part of the dataset.

3. Model Selection: Several Machine Learning Models are applied to the Training Set and their Training and Validation Losses are determined, which then helps determine the most appropriate model for the given dataset.
During this step, we assume that a polynomial equation fits the data correctly. The general equation is given below: 

Model Selection Equation

The process of “Training” mathematically is nothing more than figuring out the appropriate values for the parameters: a0, a1, ... ,an, which is done automatically by the model using the Training Set.

The developer does have control over how high the degree of the polynomial can be. These parameters that can be tuned by the developer are called Hyperparameters. These hyperparameters play a key role in deciding how well would the model learn and how generalized will the learned parameters be. 

Given below are two graphs representing the prediction of the trained model on training data. The graph on the left represents a linear model with an error of 3.6, and the graph on the right represents a polynomial model with an error of 1.7. 

Model Selection Graph of Machine Learning:- Linear Model Error and Polynomial Model Error

By looking at the errors, it can be concluded that the polynomial model performs significantly better when compared to the linear model (Lower the error, better is the performance of the model). 

However, when we use the same trained models on the Testing Set, the models perform very differently. The graph on the left represents the same linear model’s prediction on the Testing Set, and the graph on the right side represents the Polynomial model’s prediction on the Testing Set. It is clearly visible that the Polynomial model inaccurately predicts the outputs when compared to the Linear model.

Model Selection Graph of machine Learning with Testing set:- Linear and Polynomial model Error

In terms of error, the total error for the Linear model is 3.6 and for the Polynomial model is a whopping 929.12. 

Such a big difference in errors between the Training and Testing Set clearly signifies that something is wrong with the Polynomial model. This drastic change in error is due to a phenomenon called Bias-Variance Tradeoff.

What is “Error” in Machine Learning? 

Error in Machine Learning is the difference in the expected output and the predicted output of the model. It is a measure of how well the model performs over a given set of data.

There are several methods to calculate error in Machine Learning. One of the most commonly used terminologies to represent the error is called the Loss/Cost Function. It is also known as the Mean Squared Error (or MSE) and is given by the following equation:

Mean Squared Error formula definition

The necessity of minimization of Errors: As it is obvious from the previously shown graphs, the higher the error, the worse the model performs. Hence, the error of the prediction of a model can be considered as a performance measure: Lower the error of a model, the better it performs. 

In addition to that, a model judges its own performance and trains itself based on the error created between its own output and the expected output. The primary target of the model is to minimize the error so as to get the best parameters that would fit the data perfectly. 

Total Error: The error mentioned above is the Total Error and consists of three types of errors: Bias + Variance + Irreducible Error. 

Total Error = Bias + Variance + Irreducible Error

Even for an ideal model, it is impossible to get rid of all the types of errors. The “irreducible” error rate is caused by the presence of noise in the data and hence is not removable. However, the Bias and Variance errors can be reduced to a minimum and hence, the total error can also be reduced significantly. 

Why is the splitting of data important? 

Ideally, the complete dataset is not used to train the model. The dataset is split into three sets: Training, Validation and Testing Sets. Each of these serves a specific role in the development of a model which performs well under most conditions.

Training Set (60-80%): The largest portion of the dataset is used for training the Machine Learning Model. The model extracts the features and learns to recognize the patterns in the dataset. The quality and quantity of the training set determines how well the model is going to perform. 

Testing Set (15-25%): The main goal of every Machine Learning Engineer is to develop a model which would generalize the best over a given dataset. This is achieved by training the model(s) on a portion of the dataset and testing its performance by applying the trained model on another portion of the same/similar dataset that has not been used during training (Testing Set). This is important since the model might perform too well on the training set, but perform poorly on unseen data, as was the case with the example given above. Testing set is primarily used for model performance evaluation.

Validation Set (15-25%): In addition to the above, because of the presence of more than one Machine Learning Algorithm (model), it is often not recommended to test the performance of multiple models on the same dataset and then choose the best one. This process is called Model Selection, and for this, a separate part of the training set is used, which is also known as Validation Set. A validation set behaves similar to a testing set but is primarily used in model selection and not in performance evaluation.

Bias and Variance - A Technical Introduction 

What is Bias?

Bias is used to allow the Machine Learning Model to learn in a simplified manner. Ideally, the simplest model that is able to learn the entire dataset and predict correctly on it is the best model. Hence, bias is introduced into the model in the view of achieving the simplest model possible.

Parameter based learning algorithms usually have high bias and hence are faster to train and easier to understand. However, too much bias causes the model to be oversimplified and hence underfits the data. Hence these models are less flexible and often fail when they are applied on complex problems.

Mathematically, it is the difference between the model’s average prediction and the expected value.

What is Variance?

Variance in data is the variability of the model in a case where different Training Data is used. This would significantly change the estimation of the target function. Statistically, for a given random variable, Variance is the expectation of squared deviation from its mean. 

In other words, the higher the variance of the model, the more complex the model is and it is able to learn more complex functions. However, if the model is too complex for the given dataset, where a simpler solution is possible, a model with high Variance causes the model to overfit. 

When the model performs well on the Training Set and fails to perform on the Testing Set, the model is said to have Variance.

Characteristics of a biased model 

A biased model will have the following characteristics:

  • Underfitting: A model with high bias is simpler than it should be and hence tends to underfit the data. In other words, the model fails to learn and acquire the intricate patterns of the dataset. 
  • Low Training Accuracy: A biased model will not fit the Training Dataset properly and hence will have low training accuracy (or high training loss). 
  • Inability to solve complex problems: A Biased model is too simple and hence is often incapable of learning complex features and solving relatively complex problems.

Characteristics of a model with Variance 

A model with high Variance will have the following characteristics:

  • Overfitting: A model with high Variance will have a tendency to be overly complex. This causes the overfitting of the model.
  • Low Testing Accuracy: A model with high Variance will have very high training accuracy (or very low training loss), but it will have a low testing accuracy (or a low testing loss). 
  • Overcomplicating simpler problems: A model with high variance tends to be overly complex and ends up fitting a much more complex curve to a relatively simpler data. The model is thus capable of solving complex problems but incapable of solving simple problems efficiently.

What is Bias-Variance Tradeoff? 

Bias-variance Tradeoff Graph

From the understanding of bias and variance individually thus far, it can be concluded that the two are complementary to each other. In other words, if the bias of a model is decreased, the variance of the model automatically increases. The vice-versa is also true, that is if the variance of a model decreases, bias starts to increase.

Hence, it can be concluded that it is nearly impossible to have a model with no bias or no variance since decreasing one increases the other. This phenomenon is known as the Bias-Variance Trade

A graphical introduction to Bias-Variance Tradeoff 

In order to get a clear idea about the Bias-Variance Tradeoff, let us consider the bulls-eye diagram. Here, the central red portion of the target can be considered the location where the model correctly predicts the values. As we move away from the central red circle, the error in the prediction starts to increase. 

Each of the several hits on the target is achieved by repetition of the model building process. Each hit represents the individual realization of the model. As can be seen in the diagram below, the bias and the variance together influence the predictions of the model under different circumstances.

A graphical introduction to Bias-Variance Tradeoff


Another way of looking at the Bias-Variance Tradeoff graphically is to plot the graphical representation for error, bias, and variance versus the complexity of the model. In the graph shown below, the green dotted line represents variance, the blue dotted line represents bias and the red solid line represents the error in the prediction of the concerned model. 

  • Since bias is high for a simpler model and decreases with an increase in model complexity, the line representing bias exponentially decreases as the model complexity increases. 
  • Similarly, Variance is high for a more complex model and is low for simpler models. Hence, the line representing variance increases exponentially as the model complexity increases. 
  • Finally, it can be seen that on either side, the generalization error is quite high. Both high bias and high variance lead to a higher error rate. 
  • The most optimal complexity of the model is right in the middle, where the bias and variance intersect. This part of the graph is shown to produce the least error and is preferred. 
  • Also, as discussed earlier, the model underfits for high-bias situations and overfits for high-variance situations.

 Bias-Variance Tradeoff graph with 2D plots

Mathematical Expression of Bias-Variance Tradeoff 

The expected values is a vector represented by y. The predicted output of the model is denoted by the vector y for input vector x. The relationship between the predicted values and the inputs can be taken as y = f(x) + e, where e is the normally distributed error given by:

Mathematical Expression for Error in Bias-Variance tradeoff

The third term in the above equation, irreducible_error represents the noise term and cannot be fundamentally reduced by any given model. If hypothetically, infinite data is available, it is possible to tune the model to reduce the bias and variance terms to zero but is not possible to do so practically. Hence, there is always a tradeoff between the minimization of bias and variance. 

Detection of Bias and Variance of a model

In model building, it is imperative to have the knowledge to detect if the model is suffering from high bias or high variance. The methods to detect high bias and variance is given below:

  1. Detection of High Bias:
    • The model suffers from a very High Training Error.
    • The Validation error is similar in magnitude to the training error.
    • The model is underfitting.
  2. Detection of High Variance:
    • The model suffers from a very Low Training Error.
    • The Validation error is very high when compared to the training error.
    • The model is overfitting.

A graphical method to Detect a model suffering from High Bias and Variance is shown below: 

Model suffering from High Bias and Variance

The graph shows the change in error rate with respect to model complexity for training and validation error. 

  • The left portion of the graph suffers from High Bias. This can be seen as the training error is quite high along with the validation error. In addition to that, model complexity is quite low. 
  • The right portion of the graph suffers from High Variance. This can be seen as the training error is very low, yet the validation error is very high and starts increasing with increasing model complexity.

A systematic approach to solve a Bias-Variance Problem by Dr. Andrew Ng:

Dr. Andrew Ng proposed a very simple-to-follow step by step architecture to detect and solve a High Bias and High Variance errors in a model. The block diagram is shown below:

Flow Chart to solve High Bias and High Variance Errors in a model

Detection and Solution to High Bias problem - if the training error is high: 

  1. Train longer: High bias means a usually less complex model, and hence it requires more training iterations to learn the relevant patterns. Hence, longer training solves the error sometimes.
  2. Train a more complex model: As mentioned above, high bias is a result of a less than optimal complexity in the model. Hence, to avoid high bias, the existing model can be swapped out with a more complex model. 
  3. Obtain more features: It is often possible that the existing dataset lacks the required essential features for effective pattern recognition. To remedy this problem: 
    • More features can be collected for the existing data.
    • Feature Engineering can be performed on existing features to extract more non-linear features. 
  4. Decrease regularization: Regularization is a process to decrease model complexity by regularizing the inputs at different stages, promote generalization and prevent overfitting in the process. Decreasing regularization allows the model to learn the training dataset better. 
  5. New model architecture: If all of the above-mentioned methods fail to deliver satisfactory results, then it is suggested to try out other new model architectures. 

Detection and Solution to High Variance problem - if a validation error is high: 

  1. Obtain more data: High variance is often caused due to a lack of training data. The model complexity and quantity of training data need to be balanced. A model of higher complexity requires a larger quantity of training data. Hence, if the model is suffering from high variance, more datasets can reduce the variance. 
  2. Decrease number of features: If the dataset consists of too many features for each data-point, the model often starts to suffer from high variance and starts to overfit. Hence, decreasing the number of features is recommended. 
  3. Increase Regularization: As mentioned above, regularization is a process to decrease model complexity. Hence, if the model is suffering from high variance (which is caused by a complex model), then an increase in regularization can decrease the complexity and help to generalize the model better.
  4. New model architecture: Similar to the solution of a model suffering from high bias, if all of the above-mentioned methods fail to deliver satisfactory results, then it is suggested to try out other new model architectures.

Conclusion 

To summarize, Bias and Variance play a major role in the training process of a model. It is necessary to reduce each of these parameters individually to the minimum possible value. However, it should be kept in mind that an effort to decrease one of these parameters beyond a certain limit increases the probability of the other getting increased. This phenomenon is called as the Bias-Variance Tradeoff and is a parameter to consider during model building. 

Animikh

Animikh Aich

Computer Vision Engineer

Animikh Aich is a Deep Learning enthusiast, currently working as a Computer Vision Engineer. His work includes three International Conference publications and several projects based on Computer Vision and Machine Learning.

Join the Discussion

Your email address will not be published. Required fields are marked *

1 comments

satish p 05 Aug 2019

I am interested in this blog, wish to get more information about the machine learning

Knowledgehut Editor 06 Aug 2019

Thanks for reaching out! You can find more on the machine learning at https://www.knowledgehut.com/blog/data-science/what-is-machine-learning

Suggested Blogs

Top Data Analytics Certifications

What is data analytics?In the world of IT, every small bit of data count; even information that looks like pure nonsense has its significance. So, how do we retrieve the significance from this data? This is where Data Science and analytics comes into the picture.  Data Analytics is a process where data is inspected, transformed and interpreted to discover some useful bits of information from all the noise and make decisions accordingly. It forms the entire basis of the social media industry and finds a lot of use in IT, finance, hospitality and even social sciences. The scope in data analytics is nearly endless since all facets of life deal with the storage, processing and interpretation of data.Why data analytics? Data Analytics in this Information Age has nearly endless opportunities since literally everything in this era hinges on the importance of proper processing and data analysis. The insights from any data are crucial for any business. The field of data Analytics has grown more than 50 times from the early 2000s to 2021. Companies specialising in banking, healthcare, fraud detection, e-commerce, telecommunication, infrastructure and risk management hire data analysts and professionals every year in huge numbers.Need for certification:Skills are the first and foremost criteria for a job, but these skills need to be validated and recognised by reputed organisations for them to impress a potential employer. In the field of Data Analytics, it is pretty crucial to show your certifications. Hence, an employer knows you have hands-on experience in the field and can handle the workload of a real-world setting beyond just theoretical knowledge. Once you get a base certification, you can work your way up to higher and higher positions and enjoy lucrative pay packages. Top Data Analytics Certifications Certified Analytics Professional (CAP) Microsoft Certified Azure Data Scientist Associate Cloudera Certified Associate (CCA) Data Analyst Associate Certified Analytics Professional (aCAP) SAS Certified Data Analyst (Using SAS91. Certified Analytics Professional (CAP)A certification from an organisation called INFORMS, CAP is a notoriously rigorous certification and stands out like a star on an applicant's resume. Those who complete this program gain an invaluable credential and are able to distinguish themselves from the competition. It gives a candidate a comprehensive understanding of the analytical process's various fine aspects--from framing hypotheses and analytic problems to the proper methodology, along with acquisition, model building and deployment process with long-term life cycle management. It needs to be renewed after three years.The application process is in itself quite complex, and it also involves signing the CAP Code of Ethics before one is given the certification. The CAP panel reviews each application, and those who pass this review are the only ones who can give the exam.  Prerequisite: A bachelor’s degree with 5 years of professional experience or a master's degree with 3 years of professional experience.  Exam Fee & Format: The base price is $695. For individuals who are members of INFORMS the price is $495. (Source) The pass percentage is 70%. The format is a four option MCQ paper. Salary: $76808 per year (Source) 2. Cloudera Certified Associate (CCA) Data Analyst Cloudera has a well-earned reputation in the IT sector, and its Associate Data analyst certification can help bolster the resume of Business intelligence specialists, system architects, data analysts, database administrators as well as developers. It has a specific focus on SQL developers who aim to show their proficiency on the platform.This certificate validates an applicant's ability to operate in a CDH environment by Cloudera using Impala and Hive tools. One doesn't need to turn to expensive tuitions and academies as Cloudera offers an Analyst Training course with almost the same objectives as the exam, leaving one with a good grasp of the fundamentals.   Prerequisites: basic knowledge of SQL and Linux Command line Exam Fee & Format: The cost of the exam is $295 (Source), The test is a performance-based test containing 8-12 questions to be completed in a proctored environment under 129 minutes.  Expected Salary: You can earn the job title of Cloudera Data Analyst that pays up to $113,286 per year. (Source)3. Associate Certified Analytics Professional (aCAP)aCAP is an entry-level certification for Analytics professionals with lesser experience but effective knowledge, which helps in real-life situations. It is for those candidates who have a master’s degree in a field related to data analytics.  It is one of the few vendor-neutral certifications on the list and must be converted to CAP within 6 years, so it offers a good opportunity for those with a long term path in a Data Analytics career. It also needs to be renewed every three years, like the CAP certification. Like its professional counterpart, aCAP helps a candidate step out in a vendor-neutral manner and drastically increases their professional credibility.  Prerequisite: Master’s degree in any discipline related to data Analytics. Exam Fee: The base price is $300. For individuals who are members of INFORMS the price is $200. (Source). There is an extensive syllabus which covers: i. Business Problem Framing, ii. Analytics Problem Framing, iii. Data, iv. Methodology Selection, v. Model Building, vi. Deployment, vii. Lifecycle Management of the Analytics process, problem-solving, data science and visualisation and much more.4. SAS Certified Data Analyst (Using SAS9)From one of the pioneers in IT and Statistics - the SAS Institute of Data Management - a SAS Certified Data Scientist can gain insights and analyse various aspects of data from businesses using tools like the SAS software and other open-source methodology. It also validates competency in using complex machine learning models and inferring results to interpret future business strategy and release models using the SAS environment. SAS Academy for Data Science is a viable institute for those who want to receive proper training for the exam and use this as a basis for their career.  Prerequisites: To earn this credential, one needs to pass 5 exams, two from the SAS Certified Big Data Professional credential and three exams from the SAS Certified Advanced Analytics Professional Credential. Exam Fee: The cost for each exam is $180. (Source) An exception is Predictive Modelling using the SAS Enterprise Miner, costing $250, This exam can be taken in the English language. One can join the SAS Academy for Data Science and also take a practice exam beforehand. Salary: You can get a job as a SAS Data Analyst that pays up to $90,000 per year! (Source) 5. IBM Data Science Professional CertificateWhenever someone studies the history of a computer, IBM (International Business Machines) is the first brand that comes up. IBM is still alive and kicking, now having forayed into and becoming a major player in the Big Data segment. The IBM Data Science Professional certificate is one of the beginner-level certificates if you want to sink your hands into the world of data analysis. It shows a candidate's skills in various topics pertaining to data sciences, including various open-source tools, Python databases, SWL, data visualisation, and data methodologies.  One needs to complete nine courses to earn the certificate. It takes around three months if one works twelve hours per week. It also involves the completion of various hands-on assignments and building a portfolio. A candidate earns the Professional certificate from Coursera and a badge from IBM that recognises a candidate's proficiency in the area. Prerequisites: It is the optimal course for freshers since it requires no requisite programming knowledge or proficiency in Analytics. Exam Fee: It costs $39 per month (Source) to access the course materials and the certificate. The course is handled by the Coursera organisation. Expected Salary: This certification can earn you the title of IBM Data Scientist and help you earn a salary of $134,846 per annum. (Source) 6. Microsoft Certified Azure Data Scientist AssociateIt's one of the most well-known certifications for newcomers to step into the field of Big Data and Data analytics. This credential is offered by the leader in the industry, Microsoft Azure. This credential validates a candidate's ability to work with Microsoft Azure developing environment and proficiency in analysing big data, preparing data for the modelling process, and then progressing to designing models. One advantage of this credential is that it has no expiry date and does not need renewal; it also authorises the candidate’s extensive knowledge in predictive Analytics. Prerequisites: knowledge and experience in data science and using Azure Machine Learning and Azure Databricks. Exam Fee: It costs $165 to (Source) register for the exam. One advantage is that there is no need to attend proxy institutions to prepare for this exam, as Microsoft offers free training materials as well as an instructor-led course that is paid. There is a comprehensive collection of resources available to a candidate. Expected Salary: The job title typically offered is Microsoft Data Scientist and it typically fetches a yearly pay of $130,993.(Source) Why be a Data Analytics professional? For those already working in the field of data, being a Data Analyst is one of the most viable options. The salary of a data analyst ranges from $65,000 to $85,000 depending on number of years of experience. This lucrative salary makes it worth the investment to get a certification and advance your skills to the next level so that you can work for multinational companies by interpreting and organising data and using this analysis to accelerate businesses. These certificates demonstrate that you have the required knowledge needed to operate data models of the volumes needed by big organizations. 1. Demand is more than supply With the advent of the Information Age, there has been a huge boom in companies that either entirely or partially deal with IT. For many companies IT forms the core of their business. Every business has to deal with data, and it is crucial to get accurate insights from this data and use it to further business interests and expand profits. The interpretation of data also aims to guide them in the future to make the best business decisions.  Complex business intelligence algorithms are in place these days. They need trained professionals to operate them; since this field is relatively new, there is a shortage of experts. Thus, there are vacancies for data analyst positions with lucrative pay if one is qualified enough.2. Good pay with benefitsA data analyst is an extremely lucrative profession, with an average base pay of $71,909 (Source), employee benefits, a good work-home balance, and other perks. It has been consistently rated as being among the hottest careers of the decade and allows professionals to have a long and satisfying career.   Companies Hiring Certified Data Analytics Professionals Oracle A California based brand, Oracle is a software company that is most famous for its data solutions. With over 130000 employees and a revenue of 39 billion, it is surely one of the bigger players in Data Analytics.  MicroStrategy   Unlike its name, this company is anything but micro, with more than 400 million worth of revenue. It provides a suite of analytical products along with business mobility solutions. It is a key player in the mobile space, working natively with Android and iOS.   SAS   One of the companies in the list which provides certifications and is also without a doubt one of the largest names in the field of Big Data, machine learning and Data Analytics, is SAS. The name SAS is derived from Statistical Analysis System. This company is trusted and has a solid reputation. It is also behind the SAS Institute for Data Science. Hence, SAS is the organisation you would want to go to if you're aiming for a long-term career in data science.    Conclusion To conclude, big data and data Analytics are a field of endless opportunities. By investing in the right credential, one can pave the way to a viable and lucrative career path. Beware though, there are lots of companies that provide certifications, but only recognised and reputed credentials will give you the opportunities you are seeking. Hiring companies look for these certifications as a mark of authenticity of your hands-on experience and the amount of work you can handle effectively. Therefore, the credential you choose for yourself plays a vital role in the career you can have in the field of Data analytics.  Happy learning!    
5631
Top Data Analytics Certifications

What is data analytics?In the world of IT, every s... Read More

Why Should You Start a Career in Machine Learning?

If you are even remotely interested in technology you would have heard of machine learning. In fact machine learning is now a buzzword and there are dozens of articles and research papers dedicated to it.  Machine learning is a technique which makes the machine learn from past experiences. Complex domain problems can be resolved quickly and efficiently using Machine Learning techniques.  We are living in an age where huge amounts of data are produced every second. This explosion of data has led to creation of machine learning models which can be used to analyse data and to benefit businesses.  This article tries to answer a few important concepts related to Machine Learning and informs you about the career path in this prestigious and important domain.What is Machine Learning?So, here’s your introduction to Machine Learning. This term was coined in the year 1997. “A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at the tasks improves with the experiences.”, as defined in the book on ML written by Mitchell in 1997. The difference between a traditional programming and programming using Machine Learning is depicted here, the first Approach (a) is a traditional approach, and second approach (b) is a Machine Learning based approach.Machine Learning encompasses the techniques in AI which allow the system to learn automatically looking at the data available. While learning, the system tries to improve the experience without making any explicit efforts in programming. Any machine learning application follows the following steps broadlySelecting the training datasetAs the definition indicates, machine learning algorithms require past experience, that is data, for learning. So, selection of appropriate data is the key for any machine learning application.Preparing the dataset by preprocessing the dataOnce the decision about the data is made, it needs to be prepared for use. Machine learning algorithms are very susceptible to the small changes in data. To get the right insights, data must be preprocessed which includes data cleaning and data transformation.  Exploring the basic statistics and properties of dataTo understand what the data wishes to convey, the data engineer or Machine Learning engineer needs to understand the properties of data in detail. These details are understood by studying the statistical properties of data. Visualization is an important process to understand the data in detail.Selecting the appropriate algorithm to apply on the datasetOnce the data is ready and understood in detail, then appropriate Machine Learning algorithms or models are selected. The choice of algorithm depends on characteristics of data as well as type of task to be performed on the data. The choice also depends on what kind of output is required from the data.Checking the performance and fine-tuning the parameters of the algorithmThe model or algorithm chosen is fine-tuned to get improved performance. If multiple models are applied, then they are weighed against the performance. The final algorithm is again fine-tuned to get appropriate output and performance.Why Pursue a Career in Machine Learning in 2021?A recent survey has estimated that the jobs in AI and ML have grown by more than 300%. Even before the pandemic struck, Machine Learning skills were in high demand and the demand is expected to increase two-fold in the near future.A career in machine learning gives you the opportunity to make significant contributions in AI, the future of technology. All the big and small businesses are adopting Machine Learning models to improve their bottom-line margins and return on investment.  The use of Machine Learning has gone beyond just technology and it is now used in diverse industries including healthcare, automobile, manufacturing, government and more. This has greatly enhanced the value of Machine Learning experts who can earn an average salary of $112,000.  Huge numbers of jobs are expected to be created in the coming years.  Here are a few reasons why one should pursue a career in Machine Learning:The global machine learning market is expected to touch $20.83B in 2024, according to Forbes.  We are living in a digital age and this explosion of data has made the use of machine learning models a necessity. Machine Learning is the only way to extract meaning out of data and businesses need Machine Learning engineers to analyze huge data and gain insights from them to improve their businesses.If you like numbers, if you like research, if you like to read and test and if you have a passion to analyse, then machine learning is the career for you. Learning the right tools and programming languages will help you use machine learning to provide appropriate solutions to complex problems, overcome challenges and grow the business.Machine Learning is a great career option for those interested in computer science and mathematics. They can come up with new Machine Learning algorithms and techniques to cater to the needs of various business domains.As explained above, a career in machine learning is both rewarding and lucrative. There are huge number of opportunities available if you have the right expertise and knowledge. On an average, Machine Learning engineers get higher salaries, than other software developers.Years of experience in the Machine Learning domain, helps you break into data scientist roles, which is not just among the hottest careers of our generation but also a highly respected and lucrative career. Right skills in the right business domain helps you progress and make a mark for yourself in your organization. For example, if you have expertise in pharmaceutical industries and experience working in Machine learning, then you may land job roles as a data scientist consultant in big pharmaceutical companies.Statistics on Machine learning growth and the industries that use MLAccording to a research paper in AI Multiple (https://research.aimultiple.com/ml-stats/), the Machine Learning market will grow to 9 Billion USD by the end of 2022. There are various areas where Machine Learning models and solutions are getting deployed, and businesses see an overall increase of 44% investments in this area. North America is one of the leading regions in the adoption of Machine Learning followed by Asia.The Global Machine Learning market will grow by 42% which is evident from the following graph. Image sourceThere is a huge demand for Machine Learning modelling because of the large use of Cloud Based Applications and Services. The pandemic has changed the face of businesses, making them heavily dependent on Cloud and AI based services. Google, IBM, and Amazon are just some of the companies that have invested heavily in AI and Machine Learning based application development, to provide robust solutions for problems faced by small to large scale businesses. Machine Learning and Cloud based solutions are scalable and secure for all types of business.ML analyses and interprets data patterns, computing and developing algorithms for various business purposes.Advantages of Machine Learning courseNow that we have established the advantages of perusing a career in Machine Learning, let’s understand from where to start our machine learning journey. The best option would be to start with a Machine Learning course. There are various platforms which offer popular Machine Learning courses. One can always start with an online course which is both effective and safe in these COVID times.These courses start with an introduction to Machine Learning and then slowly help you to build your skills in the domain. Many courses even start with the basics of programming languages such as Python, which are important for building Machine Learning models. Courses from reputed institutions will hand hold you through the basics. Once the basics are clear, you may switch to an offline course and get the required certification.Online certifications have the same value as offline classes. They are a great way to clear your doubts and get personalized help to grow your knowledge. These courses can be completed along with your normal job or education, as most are self-paced and can be taken at a time of your convenience. There are plenty of online blogs and articles to aid you in completion of your certification.Machine Learning courses include many real time case studies which help you in understanding the basics and application aspects. Learning and applying are both important and are covered in good Machine Learning Courses. So, do your research and pick an online tutorial that is from a reputable institute.What Does the Career Path in Machine Learning Look Like?One can start their career in Machine Learning domain as a developer or application programmer. But the acquisition of the right skills and experience can lead you to various career paths. Following are some of the career options in Machine Learning (not an exhaustive list):Data ScientistA data scientist is a person with rich experience in a particular business field. A person who has a knowledge of domain, as well as machine learning modelling, is a data scientist. Data Scientists’ job is to study the data carefully and suggest accurate models to improve the business.AI and Machine Learning EngineerAn AI engineer is responsible for choosing the proper Machine Learning Algorithm based on natural language processing and neural network. They are responsible for applying it in AI applications like personalized advertising.  A Machine Learning Engineer is responsible for creating the appropriate models for improvement of the businessData EngineerA Data Engineer, as the name suggests, is responsible to collect data and make it ready for the application of Machine Learning models. Identification of the right data and making it ready for extraction of further insights is the main work of a data engineer.Business AnalystA person who studies the business and analyzes the data to get insights from it is a Business Analyst. He or she is responsible for extracting the insights from the data at hand.Business Intelligence (BI) DeveloperA BI developer uses Machine Learning and Data Analytics techniques to work on a large amount of data. Proper representation of data to suit business decisions, using the latest tools for creation of intuitive dashboards is the role of a BI developer.  Human Machine Interface learning engineerCreating tools using machine learning techniques to ease the human machine interaction or automate decisions, is the role of a Human Machine Interface learning engineer. This person helps in generating choices for users to ease their work.Natural Language Processing (NLP) engineer or developerAs the name suggests, this person develops various techniques to process Natural Language constructs. Building applications or systems using machine learning techniques to build Natural Language based applications is their main task. They create multilingual Chatbots for use in websites and other applications.Why are Machine Learning Roles so popular?As mentioned above, the market growth of AI and ML has increased tremendously over the past years. The Machine Learning Techniques are applied in every domain including marketing, sales, product recommendations, brand retention, creating advertising, understanding the sentiments of customer, security, banking and more. Machine learning algorithms are also used in emails to ease the users work. This says a lot, and proves that a career in Machine Learning is in high demand as all businesses are incorporating various machine learning techniques and are improving their business.One can harness this popularity by skilling up with Machine Learning skills. Machine Learning models are now being used by every company, irrespective of their size--small or big, to get insights on their data and use these insights to improve the business. As every company wishes to grow faster, they are deploying more machine learning engineers to get their work done on time. Also, the migration of businesses to Cloud services for better security and scalability, has increased their requirement for more Machine Learning algorithms and models to cater to their needs.Introducing the Machine learning techniques and solutions has brought huge returns for businesses.  Machine Learning solution providers like Google, IBM, Microsoft etc. are investing in human resources for development of Machine Learning models and algorithms. The tools developed by them are popularly used by businesses to get early returns. It has been observed that there is significant increase in patents in Machine Learning domains since the past few years, indicating the quantum of work happening in this domain.Machine Learning SkillsLet’s visit a few important skills one must acquire to work in the domain of Machine Learning.Programming languagesKnowledge of programming is very important for a career in Machine Learning. Languages like Python and R are popularly used to develop applications using Machine Learning models and algorithms. Python, being the simplest and most flexible language, is very popular for AI and Machine Learning applications. These languages provide rich support of libraries for implementation of Machine Learning Algorithms. A person who is good in programming can work very efficiently in this domain.Mathematics and StatisticsThe base for Machine Learning is mathematics and statistics. Statistics applied to data help in understanding it in micro detail. Many machine learning models are based on the probability theory and require knowledge of linear algebra, transformations etc. A good understanding of statistics and probability increases the early adoption to Machine Learning domain.Analytical toolsA plethora of analytical tools are available where machine learning models are already implemented and made available for use. Also, these tools are very good for visualization purposes. Tools like IBM Cognos, PowerBI, Tableue etc are important to pursue a career as a  Machine Learning engineer.Machine Learning Algorithms and librariesTo become a master in this domain, one must master the libraries which are provided with various programming languages. The basic understanding of how machine learning algorithms work and are implemented is crucial.Data Modelling for Machine Learning based systemsData lies at the core of any Machine Learning application. So, modelling the data to suit the application of Machine Learning algorithms is an important task. Data modelling experts are the heart of development teams that develop machine learning based systems. SQL based solutions like Oracle, SQL Server, and NoSQL solutions are important for modelling data required for Machine Learning applications. MongoDB, DynamoDB, Riak are some important NOSQL based solutions available to process unstructured data for Machine Learning applications.Other than these skills, there are two other skills that may prove to be beneficial for those planning on a career in the Machine Learning domain:Natural Language processing techniquesFor E-commerce sites, customer feedback is very important and crucial in determining the roadmap of future products. Many customers give reviews for the products that they have used or give suggestions for improvement. These feedbacks and opinions are analyzed to gain more insights about the customers buying habits as well as about the products. This is part of natural language processing using Machine Learning. The likes of Google, Facebook, Twitter are developing machine learning algorithms for Natural Language Processing and are constantly working on improving their solutions. Knowledge of basics of Natural Language Processing techniques and libraries is must in the domain of Machine Learning.Image ProcessingKnowledge of Image and Video processing is very crucial when a solution is required to be developed in the area of security, weather forecasting, crop prediction etc. Machine Learning based solutions are very effective in these domains. Tools like Matlab, Octave, OpenCV are some important tools available to develop Machine Learning based solutions which require image or video processing.ConclusionMachine Learning is a technique to automate the tasks based on past experiences. This is among the most lucrative career choices right now and will continue to remain so in the future. Job opportunities are increasing day by day in this domain. Acquiring the right skills by opting for a proper Machine Learning course is important to grow in this domain. You can have an impressive career trajectory as a machine learning expert, provided you have the right skills and expertise.
5684
Why Should You Start a Career in Machine Learning?

If you are even remotely interested in technology ... Read More

Types of Probability Distributions Every Data Science Expert Should know

Data Science has become one of the most popular interdisciplinary fields. It uses scientific approaches, methods, algorithms, and operations to obtain facts and insights from unstructured, semi-structured, and structured datasets. Organizations use these collected facts and insights for efficient production, business growth, and to predict user requirements. Probability distribution plays a significant role in performing data analysis equipping a dataset for training a model. In this article, you will learn about the types of Probability Distribution, random variables, types of discrete distributions, and continuous distribution.  What is Probability Distribution? A Probability Distribution is a statistical method that determines all the probable values and possibilities that a random variable can deliver from a particular range. This range of values will have a lower bound and an upper bound, which we call the minimum and the maximum possible values.  Various factors on which plotting of a value depends are standard deviation, mean (or average), skewness, and kurtosis. All of these play a significant role in Data science as well. We can use probability distribution in physics, engineering, finance, data analysis, machine learning, etc. Significance of Probability distributions in Data Science In a way, most of the data science and machine learning operations are dependent on several assumptions about the probability of your data. Probability distribution allows a skilled data analyst to recognize and comprehend patterns from large data sets; that is, otherwise, entirely random variables and values. Thus, it makes probability distribution a toolkit based on which we can summarize a large data set. The density function and distribution techniques can also help in plotting data, thus supporting data analysts to visualize data and extract meaning. General Properties of Probability Distributions Probability distribution determines the likelihood of any outcome. The mathematical expression takes a specific value of x and shows the possibility of a random variable with p(x). Some general properties of the probability distribution are – The total of all probabilities for any possible value becomes equal to 1. In a probability distribution, the possibility of finding any specific value or a range of values must lie between 0 and 1. Probability distributions tell us the dispersal of the values from the random variable. Consequently, the type of variable also helps determine the type of probability distribution.Common Data Types Before jumping directly into explaining the different probability distributions, let us first understand the different types of probability distributions or the main categories of the probability distribution. Data analysts and data engineers have to deal with a broad spectrum of data, such as text, numerical, image, audio, voice, and many more. Each of these have a specific means to be represented and analyzed. Data in a probability distribution can either be discrete or continuous. Numerical data especially takes one of the two forms. Discrete data: They take specific values where the outcome of the data remains fixed. Like, for example, the consequence of rolling two dice or the number of overs in a T-20 match. In the first case, the result lies between 2 and 12. In the second case, the event will be less than 20. Different types of discrete distributions that use discrete data are: Binomial Distribution Hypergeometric Distribution Geometric Distribution Poisson Distribution Negative Binomial Distribution Multinomial Distribution  Continuous data: It can obtain any value irrespective of bound or limit. Example: weight, height, any trigonometric value, age, etc. Different types of continuous distributions that use continuous data are: Beta distribution Cauchy distribution Exponential distribution Gamma distribution Logistic distribution Weibull distribution Types of Probability Distribution explained Here are some of the popular types of Probability distributions used by data science professionals. (Try all the code using Jupyter Notebook) Normal Distribution: It is also known as Gaussian distribution. It is one of the simplest types of continuous distribution. This probability distribution is symmetrical around its mean value. It also shows that data at close proximity of the mean is frequently occurring, compared to data that is away from it. Here, mean = 0, variance = finite valueHere, you can see 0 at the center is the Normal Distribution for different mean and variance values. Here is a code example showing the use of Normal Distribution: from scipy.stats import norm  import matplotlib.pyplot as mpl  import numpy as np  def normalDist() -> None:      fig, ax = mpl.subplots(1, 1)      mean, var, skew, kurt = norm.stats(moments = 'mvsk')      x = np.linspace(norm.ppf(0.01),  norm.ppf(0.99), 100)      ax.plot(x, norm.pdf(x),          'r-', lw = 5, alpha = 0.6, label = 'norm pdf')      ax.plot(x, norm.cdf(x),          'b-', lw = 5, alpha = 0.6, label = 'norm cdf')      vals = norm.ppf([0.001, 0.5, 0.999])      np.allclose([0.001, 0.5, 0.999], norm.cdf(vals))      r = norm.rvs(size = 1000)      ax.hist(r, normed = True, histtype = 'stepfilled', alpha = 0.2)      ax.legend(loc = 'best', frameon = False)      mpl.show()  normalDist() Output: Bernoulli Distribution: It is the simplest type of probability distribution. It is a particular case of Binomial distribution, where n=1. It means a binomial distribution takes 'n' number of trials, where n > 1 whereas, the Bernoulli distribution takes only a single trial.   Probability Mass Function of a Bernoulli’s Distribution is:  where p = probability of success and q = probability of failureHere is a code example showing the use of Bernoulli Distribution: from scipy.stats import bernoulli  import seaborn as sb    def bernoulliDist():      data_bern = bernoulli.rvs(size=1200, p = 0.7)      ax = sb.distplot(          data_bern,           kde = True,           color = 'g',           hist_kws = {'alpha' : 1},          kde_kws = {'color': 'y', 'lw': 3, 'label': 'KDE'})      ax.set(xlabel = 'Bernouli Values', ylabel = 'Frequency Distribution')  bernoulliDist() Output:Continuous Uniform Distribution: In this type of continuous distribution, all outcomes are equally possible; each variable gets the same probability of hit as a consequence. This symmetric probabilistic distribution has random variables at an equal interval, with the probability of 1/(b-a). Here is a code example showing the use of Uniform Distribution: from numpy import random  import matplotlib.pyplot as mpl  import seaborn as sb  def uniformDist():      sb.distplot(random.uniform(size = 1200), hist = True)      mpl.show()  uniformDist() Output: Log-Normal Distribution: A Log-Normal distribution is another type of continuous distribution of logarithmic values that form a normal distribution. We can transform a log-normal distribution into a normal distribution. Here is a code example showing the use of Log-Normal Distribution import matplotlib.pyplot as mpl  def lognormalDist():      muu, sig = 3, 1      s = np.random.lognormal(muu, sig, 1000)      cnt, bins, ignored = mpl.hist(s, 80, normed = True, align ='mid', color = 'y')      x = np.linspace(min(bins), max(bins), 10000)      calc = (np.exp( -(np.log(x) - muu) **2 / (2 * sig**2))             / (x * sig * np.sqrt(2 * np.pi)))      mpl.plot(x, calc, linewidth = 2.5, color = 'g')      mpl.axis('tight')      mpl.show()  lognormalDist() Output: Pareto Distribution: It is one of the most critical types of continuous distribution. The Pareto Distribution is a skewed statistical distribution that uses power-law to describe quality control, scientific, social, geophysical, actuarial, and many other types of observable phenomena. The distribution shows slow or heavy-decaying tails in the plot, where much of the data reside at its extreme end. Here is a code example showing the use of Pareto Distribution – import numpy as np  from matplotlib import pyplot as plt  from scipy.stats import pareto  def paretoDist():      xm = 1.5        alp = [2, 4, 6]       x = np.linspace(0, 4, 800)      output = np.array([pareto.pdf(x, scale = xm, b = a) for a in alp])      plt.plot(x, output.T)      plt.show()  paretoDist() Output:Exponential Distribution: It is a type of continuous distribution that determines the time elapsed between events (in a Poisson process). Let’s suppose, that you have the Poisson distribution model that holds the number of events happening in a given period. We can model the time between each birth using an exponential distribution.Here is a code example showing the use of Pareto Distribution – from numpy import random  import matplotlib.pyplot as mpl  import seaborn as sb  def expDist():      sb.distplot(random.exponential(size = 1200), hist = True)      mpl.show()   expDist()Output:Types of the Discrete probability distribution – There are various types of Discrete Probability Distribution a Data science aspirant should know about. Some of them are – Binomial Distribution: It is one of the popular discrete distributions that determine the probability of x success in the 'n' trial. We can use Binomial distribution in situations where we want to extract the probability of SUCCESS or FAILURE from an experiment or survey which went through multiple repetitions. A Binomial distribution holds a fixed number of trials. Also, a binomial event should be independent, and the probability of obtaining failure or success should remain the same. Here is a code example showing the use of Binomial Distribution – from numpy import random  import matplotlib.pyplot as mpl  import seaborn as sb    def binomialDist():      sb.distplot(random.normal(loc = 50, scale = 6, size = 1200), hist = False, label = 'normal')      sb.distplot(random.binomial(n = 100, p = 0.6, size = 1200), hist = False, label = 'binomial')      plt.show()    binomialDist() Output:Geometric Distribution: The geometric probability distribution is one of the crucial types of continuous distributions that determine the probability of any event having likelihood ‘p’ and will happen (occur) after 'n' number of Bernoulli trials. Here 'n' is a discrete random variable. In this distribution, the experiment goes on until we encounter either a success or a failure. The experiment does not depend on the number of trials. Here is a code example showing the use of Geometric Distribution – import matplotlib.pyplot as mpl  def probability_to_occur_at(attempt, probability):      return (1-p)**(attempt - 1) * probability  p = 0.3  attempt = 4  attempts_to_show = range(21)[1:]  print('Possibility that this event will occur on the 7th try: ', probability_to_occur_at(attempt, p))  mpl.xlabel('Number of Trials')  mpl.ylabel('Probability of the Event')  barlist = mpl.bar(attempts_to_show, height=[probability_to_occur_at(x, p) for x in attempts_to_show], tick_label=attempts_to_show)  barlist[attempt].set_color('g')  mpl.show() Output:Poisson Distribution: Poisson distribution is one of the popular types of discrete distribution that shows how many times an event has the possibility of occurrence in a specific set of time. We can obtain this by limiting the Bernoulli distribution from 0 to infinity. Data analysts often use the Poisson distributions to comprehend independent events occurring at a steady rate in a given time interval. Here is a code example showing the use of Poisson Distribution from scipy.stats import poisson  import seaborn as sb  import numpy as np  import matplotlib.pyplot as mpl  def poissonDist():       mpl.figure(figsize = (10, 10))      data_binom = poisson.rvs(mu = 3, size = 5000)      ax = sb.distplot(data_binom, kde=True, color = 'g',                       bins=np.arange(data_binom.min(), data_binom.max() + 1),                       kde_kws={'color': 'y', 'lw': 4, 'label': 'KDE'})      ax.set(xlabel = 'Poisson Distribution', ylabel='Data Frequency')      mpl.show()      poissonDist() Output:Multinomial Distribution: A multinomial distribution is another popular type of discrete probability distribution that calculates the outcome of an event having two or more variables. The term multi means more than one. The Binomial distribution is a particular type of multinomial distribution with two possible outcomes - true/false or heads/tails. Here is a code example showing the use of Multinomial Distribution – import numpy as np  import matplotlib.pyplot as mpl  np.random.seed(99)   n = 12                      pvalue = [0.3, 0.46, 0.22]     s = []  p = []     for size in np.logspace(2, 3):      outcomes = np.random.multinomial(n, pvalue, size=int(size))        prob = sum((outcomes[:,0] == 7) & (outcomes[:,1] == 2) & (outcomes[:,2] == 3))/len(outcomes)      p.append(prob)      s.append(int(size))  fig1 = mpl.figure()  mpl.plot(s, p, 'o-')  mpl.plot(s, [0.0248]*len(s), '--r')  mpl.grid()  mpl.xlim(xmin = 0)  mpl.xlabel('Number of Events')  mpl.ylabel('Function p(X = K)') Output:Negative Binomial Distribution: It is also a type of discrete probability distribution for random variables having negative binomial events. It is also known as the Pascal distribution, where the random variable tells us the number of repeated trials produced during a specific number of experiments.  Here is a code example showing the use of Negative Binomial Distribution – import matplotlib.pyplot as mpl   import numpy as np   from scipy.stats import nbinom    x = np.linspace(0, 6, 70)   gr, kr = 0.3, 0.7        g = nbinom.ppf(x, gr, kr)   s = nbinom.pmf(x, gr, kr)   mpl.plot(x, g, "*", x, s, "r--") Output: Apart from these mentioned distribution types, various other types of probability distributions exist that data science professionals can use to extract reliable datasets. In the next topic, we will understand some interconnections & relationships between various types of probability distributions. Relationship between various Probability distributions – It is surprising to see that different types of probability distributions are interconnected. In the chart shown below, the dashed line is for limited connections between two families of distribution, whereas the solid lines show the exact relationship between them in terms of transformation, variable, type, etc. Conclusion  Probability distributions are prevalent among data analysts and data science professionals because of their wide usage. Today, companies and enterprises hire data science professionals in many sectors, namely, computer science, health, insurance, engineering, and even social science, where probability distributions appear as fundamental tools for application. It is essential for Data analysts and data scientists. to know the core of statistics. Probability Distributions perform a requisite role in analyzing data and cooking a dataset to train the algorithms efficiently. If you want to learn more about data science - particularly probability distributions and their uses, check out KnowledgeHut's comprehensive Data science course. 
9687
Types of Probability Distributions Every Data Scie...

Data Science has become one of the most popular in... Read More