Boosting and AdaBoost in Machine Learning

Ensemble learning is a strategy in which a group of models is used to solve a challenging problem by combining diverse machine learning models into one single predictive model.

In general, ensemble methods are used to improve the overall predictive accuracy of a model by combining several different models, also known as base learners, instead of relying on a single model.

In one of our earlier articles on ensemble learning, we discussed the popular ensemble method Bootstrap Aggregation (bagging). Bagging fits similar learners on small bootstrap samples of the data and then averages all of their predictions. It combines bootstrapping and aggregation to form one ensemble model; it mainly reduces the variance error and helps to avoid overfitting. In this article we will look into the limitations of bagging and how a boosting algorithm can be used to overcome them. We will also learn about various types of boosting algorithms and implement one of them in Python. Let’s get started.

What are the limitations of Bagging?

Let us recall the concept of bagging and consider a binary classification problem. We are either classifying an observation as 0 or as 1.

In bagging, T bootstrap samples are selected, a classifier is fitted on each of these samples, and the models are trained in parallel. In a Random Forest, decision trees are trained in parallel. Then the results of all classifiers are averaged into a bagging classifier:
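In symbols, the bagged prediction is simply the average of the individual models; a standard way to write it, assuming T fitted classifiers f1, …, fT (the notation here is the usual convention, not a reproduction of the original figure), is:

f_{\text{bag}}(x) = \frac{1}{T} \sum_{t=1}^{T} f_t(x)

For classification, the average is usually replaced by a majority vote over the T individual predictions.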

Formula for a Bagging Classifier

Let us consider 3 classifiers, each of which can be either right or wrong for a given observation. If we plot the results of the 3 classifiers, there are regions in which the classifiers will be wrong. These regions are represented in red in the figure below.
Example case in which Bagging works well

The above example works pretty well because when one classifier is wrong, the two others are correct, so a voting classifier still achieves better accuracy. However, there are cases where Bagging does not work properly: when all the classifiers are wrong in the same region.

Bagging limitations

Due to this reason, the intuition behind the discovery of Boosting was the following:

  • instead of training parallel models, one should train models sequentially
  • each model should focus on where the performance of the previous classifier was poor

With this intuition, the Boosting algorithm was introduced. Let us understand what Boosting is all about.

What is Boosting?

Boosting is an ensemble modeling technique which attempts to build a strong classifier from a number of weak classifiers. It does this by building a model from a series of weak models. First, a model is built from the training data. Then a second model is built which tries to correct the errors of the first model. This procedure continues, and models are added, until either the complete training data set is predicted correctly or the maximum number of models is reached.

Boosting being a sequential process, each subsequent model attempts to correct the errors of the previous model. Unlike bagging, which mainly reduces variance, boosting focuses on reducing bias, and this can make boosting algorithms prone to overfitting. To avoid overfitting, parameter tuning plays an important role in boosting algorithms, and it will be discussed in a later part of this article. Some examples of boosting algorithms are XGBoost, GBM and AdaBoost.

How can boosting identify weak learners?

To find weak learners, we apply base learning (ML) algorithms with a different distribution in each round. Each time the base learning algorithm is applied, it generates a new weak prediction rule. This is an iterative process; after many iterations, the boosting algorithm combines these weak rules into a single strong prediction rule.

How do we choose a different distribution for each round?

Step 1: The base learner takes the initial distribution and assigns equal weight, or attention, to each observation.
Step 2: If there are prediction errors from the first base learning algorithm, we pay higher attention to the observations with prediction errors. Then we apply the next base learning algorithm.
Step 3: Repeat Step 2 until the limit of the base learning algorithm is reached or the desired accuracy is achieved.

Finally, it combines the outputs of the weak learners to create a strong learner, which eventually improves the predictive power of the model. Boosting gives higher focus to examples which are misclassified or have higher errors under the preceding weak rules.

How would you classify an email as SPAM or not?

Our initial approach would be to identify ‘SPAM’ and ‘NOT SPAM’ emails using the following criteria. If: 

  1. Email has only one image file (promotional image): it’s SPAM.
  2. Email has only link(s): it’s SPAM.
  3. Email body consists of sentences like “You won a prize money of $ xxxxxx”: it’s SPAM.
  4. Email is from our official domain “www.knowledgehut.com”: not SPAM.
  5. Email is from a known source: not SPAM.

Individually, these rules are not powerful enough to classify an email as ‘SPAM’ or ‘NOT SPAM’. Therefore, these rules are called weak learners.

To convert weak learner to strong learner, we’ll combine the prediction of each weak learner using methods like:

  • Using average/ weighted average
  • Considering the prediction which has the higher vote

Example: Above, we have defined 5 weak learners. Out of these 5, 3 vote ‘SPAM’ and 2 vote ‘Not a SPAM’. In this case, by default, we’ll consider the email as SPAM because we have a higher vote (3) for ‘SPAM’.

Boosting helps in training a series of low performing algorithms, called weak learners, simply by adjusting the error metric over time. Weak learners are considered to be those algorithms whose error rate is slightly under 50% as illustrated below:

Classifier error rate

Weighted errors

Let us consider data points on a 2D plot. Some of the data points will be well classified, others won’t. The weight attributed to each error when computing the error rate is 1/n where n is the number of data points to classify.

Now, if we apply some weight to the errors:

Weighted errors

You might now notice that we give more weight to the data points that are not well classified. An illustration of the weighting process is mentioned below:

Example of weighting process
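In the standard notation used for AdaBoost (the symbols below are the usual convention, not taken from the figure), the weighted error of the weak classifier ht at round t is:

\epsilon_t = \sum_{i=1}^{n} w_i \,\mathbb{1}\!\left[ h_t(x_i) \neq y_i \right], \qquad \sum_{i=1}^{n} w_i = 1

Initially wi = 1/n for every point, which recovers the plain error rate; a weak learner is simply any classifier with εt < 0.5.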

In the end, we want to build a strong classifier that may look like the figure mentioned below:


Strong Classifier
Tree stumps

There might be a question in your mind about how many classifiers one should implement in order to ensure the ensemble works well, and about how each classifier is chosen at each step.

A tree stump is a 1-level decision tree. At each step, we need to find the best stump, i.e. the best data split, which will minimize the overall error. You can see a stump as a test, in which the assumption is that everything that lies on one side of the split belongs to class 1, and everything that lies on the other side belongs to class 0.
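As a sketch in the usual notation (the feature index j and threshold θ below are illustrative, not taken from the original figures), a stump is a test of the form:

h_{j,\theta}(x) = \begin{cases} 1 & \text{if } x_j > \theta \\ 0 & \text{otherwise} \end{cases}

or the mirrored version in which the two class labels are swapped.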

Many such combinations are possible for a tree stump. Let us look into an example to understand how many combinations we face.

Tree stumps

3 data points to split

Well there are 12 possible combinations. Let us check how.


Tree Stumps

12 Stumps

There are 12 possible “tests” we could make. The “2” on the side of each separating line simply represents the fact that all the points on one side could belong to class 0 or all of them to class 1, so each separating line embeds 2 tests.


At each iteration t, we choose ht, the weak classifier that splits the data best, i.e. the one that reduces the overall error rate the most. Recall that this error rate is the modified, weighted version introduced above.

Finding the best split

The best split is found by identifying, at each iteration t, the best weak classifier ht, generally a decision tree with 1 node and 2 leaves (a stump). Let us consider a credit default example, i.e. whether a person who borrowed money will repay it or not.

Finding the best split

Identifying the best split

In this case, the best split at iteration t is the stump on Payment History, since the weighted error resulting from this split is the lowest.


Note that, in practice, decision tree classifiers like these can be deeper than a simple stump; the depth is treated as a hyperparameter.

Combining classifiers

In the next step, we combine the classifiers into a Sign classifier: depending on which side of the decision boundary a point falls, it is classified as 0 or 1. This can be achieved by:

Combining classifiers

You can improve the classifier by adding weights on each classifier, to avoid giving the same importance to the different classifiers.
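Concretely, with a weight αt attached to each weak classifier ht (the standard convention; how these weights are chosen is described in the next section), the combined classifier can be written as:

H(x) = \operatorname{sign}\!\left( \sum_{t=1}^{T} \alpha_t h_t(x) \right)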


AdaBoost

Pseudo-code
The key elements to keep in mind are:


  • Z is a constant whose role is to normalize the weights so that they add up to 1
  • αt is a weight that we apply to each classifier

This algorithm is called AdaBoost or Adaptive Boosting. This is one of the most important algorithms among all boosting methods.
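For reference, the standard AdaBoost choices for these quantities (with labels yi in {-1, +1} and εt the weighted error defined earlier) are:

\alpha_t = \frac{1}{2} \ln\!\left( \frac{1 - \epsilon_t}{\epsilon_t} \right), \qquad w_i^{(t+1)} = \frac{w_i^{(t)} \exp\!\left( -\alpha_t \, y_i \, h_t(x_i) \right)}{Z_t}

where Zt is chosen so that the updated weights sum to 1.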

Computation

Boosting algorithms are generally fast to train, even though we consider every possible stump and compute exponentials recursively.

Indeed, if we choose αt and Z properly, the weights that change at each step simplify to:

Weights after choice of α and Z
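With αt chosen as above, the normalizer becomes Zt = 2√(εt(1 - εt)), and a short derivation shows that the exponential update reduces to:

w_i^{(t+1)} = \begin{cases} \dfrac{w_i^{(t)}}{2\epsilon_t} & \text{if } h_t(x_i) \neq y_i \\[2mm] \dfrac{w_i^{(t)}}{2(1 - \epsilon_t)} & \text{if } h_t(x_i) = y_i \end{cases}

so no exponentials need to be recomputed at each step.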


Types of Boosting Algorithms

The underlying engine used for boosting algorithms can be anything: a decision stump, a margin-maximizing classification algorithm, etc. Some widely used boosting algorithms are:

  1. AdaBoost (Adaptive Boosting)
  2. Gradient Tree Boosting
  3. XGBoost

In this article, we will focus on AdaBoost and Gradient Boosting followed by their respective Python codes and a little bit about XGBoost.

Where are Boosted algorithms required?

Boosted algorithms are mainly used when there is plenty of data to make a prediction and high predictive power is expected. They are used to reduce bias and variance in supervised learning, combining multiple weak predictors to build a strong predictor.

The underlying engine used for boosting algorithms can be anything. For instance, AdaBoost is usually boosting applied to decision stumps. There are many other boosting algorithms which use other types of engine, such as:

  1. GentleBoost
  2. Gradient Boosting
  3. LPBoost
  4. BrownBoost

Adaptive Boosting

Adaptive Boosting, most commonly known as AdaBoost, is a Boosting algorithm in which each new model tries to correct its predecessor. It pays more attention to the training instances underfitted by the previous model, so each new predictor focuses more on the complicated cases than on the others.

It fits a sequence of weak learners on differently weighted training data. It starts by predicting on the original data set and gives equal weight to each observation. If the first learner predicts an observation incorrectly, that observation is given a higher weight. Being an iterative process, it continues to add learners until a limit is reached in the number of models or in the accuracy.

Mostly, AdaBoost uses decision stumps. However, we can use any machine learning algorithm as the base learner if it accepts weights on the training data set. We can use AdaBoost algorithms for both classification and regression problems.

Let us consider the example mentioned above. To build an AdaBoost classifier, consider that, as a first base classifier, a Decision Tree algorithm is trained to make predictions on our training data. Following the AdaBoost methodology, the weight of the misclassified training instances is increased, then the second classifier is trained using the updated weights, and the procedure is repeated over and over again.

At the end of every model prediction we end up boosting the weights of the misclassified instances so that the next model does a better job on them, and so on.

This sequential learning technique might sound similar to Gradient Descent, except that instead of tweaking a single predictor’s parameter to minimize the cost function, AdaBoost adds predictors to the ensemble, gradually making it better.

One disadvantage of this algorithm is that the model cannot be parallelized since each predictor can only be trained after the previous one has been trained and evaluated.

Below are the steps for performing the AdaBoost algorithm:

  1. Initially, all observations are given equal weights.
  2. A model is built on a subset of data.
  3. Using this model, predictions are made on the whole dataset.
  4. Errors are calculated by comparing the predictions and actual values.
  5. While creating the next model, higher weights are given to the data points which were predicted incorrectly.
  6. Weights can be determined using the error value. For instance, the higher the error, the more weight is assigned to the observation.
  7. This process is repeated until the error function does not change, or the maximum limit of the number of estimators is reached.

Hyperparameters

base_estimator: specifies the base estimator, i.e. the algorithm to be used as the base learner.

n_estimators: defines the number of base estimators (the default in scikit-learn’s AdaBoostClassifier is 50); you can increase it in order to obtain better performance.

learning_rate: shrinks the contribution of each classifier, with an effect similar to the learning rate in gradient descent.

max_depth: maximum depth of the individual base estimator.

n_jobs: indicates to the system how many processors it is allowed to use. A value of ‘-1’ means there is no limit.

random_state: makes the model’s output replicable. It will always produce the same results when you give it a fixed value as well as the same parameters and training data.
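As an illustrative sketch of how these hyperparameters fit together (the parameter values below are arbitrary, and max_depth is set on the base estimator rather than on AdaBoost itself):

from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

model = AdaBoostClassifier(
    DecisionTreeClassifier(max_depth=2),  # base estimator, i.e. the weak learner
    n_estimators=100,                     # number of boosting rounds
    learning_rate=0.5,                    # shrinks the contribution of each classifier
    random_state=42                       # makes the results reproducible
)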

Now, let us take a quick look at how to use AdaBoost in Python using a simple example on handwritten digit recognition.

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import cross_val_predict
from sklearn.model_selection import train_test_split
from sklearn.model_selection import learning_curve
from sklearn.datasets import load_digits

Let us load the data:

dataset = load_digits()
X = dataset['data']
y = dataset['target']

X contains arrays of length 64 which are simply flattened 8x8 images. The aim of this dataset is to recognize handwritten digits. Let’s take a look at a given handwritten digit:

plt.imshow(X[4].reshape(8,8))


If we stick to a Decision Tree Classifier of depth 1 (a stump), here’s how to implement an AdaBoost classifier:

reg_ada = AdaBoostClassifier(DecisionTreeClassifier(max_depth=1))
scores_ada = cross_val_score(reg_ada, X, y, cv=6)
scores_ada.mean()
0.2636257855582272

It should yield a result of around 26%, which can be largely improved. One of the key parameters is the depth of the sequential decision tree classifiers. How does accuracy improve with the depth of the decision trees?

score = []
for depth in [1, 2, 10]:
    reg_ada = AdaBoostClassifier(DecisionTreeClassifier(max_depth=depth))
    scores_ada = cross_val_score(reg_ada, X, y, cv=6)
    score.append(scores_ada.mean())
score
[0.2636257855582272, 0.5902852679072207, 0.9527524912410157]

And the maximal score is reached for a depth of 10 in this simple example, with an accuracy of 95.3%.

Gradient Boosting

This is another very popular Boosting algorithm which works quite similarly to what we have seen for AdaBoost. Gradient Boosting works by sequentially adding predictors to the ensemble, each one correcting the underfitted predictions (the errors) of its predecessor.

The difference lies in what it does with the underfitted values of its predecessor. Contrary to AdaBoost, which tweaks the instance weights at every iteration, this method tries to fit the new predictor to the residual errors made by the previous predictor.

To understand Gradient Boosting, it is important to understand Gradient Descent first.

Below are the steps for performing the Gradient Boosting algorithm:

  1. A model is built on a subset of data.
  2. Using this model, predictions are made on the whole dataset.
  3. Errors are calculated by comparing the predictions and actual values.
  4. A new model is created using the errors calculated as target variable. Our objective is to find the best split to minimize the error.
  5. The predictions made by this new model are combined with the predictions of the previous model.
  6. New errors are calculated using this predicted value and actual value.
  7. This process is repeated until the error function does not change, or the maximum limit of the number of estimators is reached.
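As a minimal from-scratch sketch of these steps for a regression problem (the squared-error loss, tree depth and learning rate below are assumptions made for illustration, not details from the original article):

import numpy as np
from sklearn.tree import DecisionTreeRegressor

def fit_gradient_boosting(X, y, n_estimators=100, learning_rate=0.1):
    base_pred = y.mean()                         # start from a constant prediction
    pred = np.full(len(y), base_pred)
    trees = []
    for _ in range(n_estimators):
        residuals = y - pred                     # errors of the current ensemble
        tree = DecisionTreeRegressor(max_depth=2)
        tree.fit(X, residuals)                   # the new model targets the residuals
        pred += learning_rate * tree.predict(X)  # combine with previous predictions
        trees.append(tree)
    return base_pred, trees

def predict_gradient_boosting(base_pred, trees, X, learning_rate=0.1):
    pred = np.full(X.shape[0], base_pred)
    for tree in trees:
        pred += learning_rate * tree.predict(X)
    return pred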

Hyperparameters

n_estimators: controls the number of weak learners.
learning_rate: controls the contribution of the weak learners in the final combination. There is a trade-off between learning_rate and n_estimators.
min_samples_split: minimum number of observations required in a node for it to be considered for splitting. It is used to control overfitting.
min_samples_leaf: minimum number of samples required in a terminal or leaf node. Lower values should be chosen for imbalanced class problems, since the regions in which the minority class is in the majority will be very small.
min_weight_fraction_leaf: similar to the previous parameter, but defined as a fraction of the total number of observations instead of an integer.
max_depth: maximum depth of a tree. Used to control overfitting.
max_leaf_nodes: maximum number of terminal leaves in a tree. If this is defined, max_depth is ignored.
max_features: number of features to consider while searching for the best split.

You can also tune the loss function for better performance.

Implementation in Python

You can find the Gradient Boosting estimators in Scikit-Learn’s library.

# for regression
from sklearn.ensemble import GradientBoostingRegressor
model = GradientBoostingRegressor(n_estimators=3,learning_rate=1)
model.fit(X,Y)
# for classification
from sklearn.ensemble import GradientBoostingClassifier
model = GradientBoostingClassifier()
model.fit(X,Y)

XGBoost

XGBoost, or Extreme Gradient Boosting, is an advanced implementation of Gradient Boosting. This algorithm has high predictive power and is typically much faster than standard gradient boosting implementations. Moreover, it includes a variety of regularization techniques which reduce overfitting and improve overall performance.
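A minimal usage sketch of its scikit-learn-style interface is shown below (it assumes the xgboost package is installed and that X_train, y_train and X_test are already defined; the parameter values are arbitrary):

from xgboost import XGBClassifier

model = XGBClassifier(
    n_estimators=200,    # number of boosting rounds
    max_depth=4,         # depth of each tree
    learning_rate=0.1,   # shrinkage applied to each new tree
    reg_lambda=1.0       # L2 regularization on the leaf weights
)
model.fit(X_train, y_train)
predictions = model.predict(X_test)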

Advantages

  • It implements regularization, which helps in reducing overfitting (standard Gradient Boosting does not);
  • It implements parallel processing which is much faster than Gradient Boosting;
  • Allows users to define custom optimization objectives and evaluation criteria adding a whole new dimension to the model;
  • XGBoost has an in-built routine to handle missing values;
  • XGBoost makes splits up to the max_depth specified and then starts pruning the tree backwards and removes splits beyond which there is no positive gain;
  • XGBoost allows a user to run a cross-validation at each iteration of the boosting process and thus it is easy to get the exact optimum number of boosting iterations in a single run.

Boosting algorithms represent a different machine learning perspective: turning a weak model into a stronger one to fix its weaknesses. I hope this article helped you understand how boosting works.

We have covered most of the topics related to algorithms in our series of machine learning blogs; click here to explore them. If you are inspired by the opportunities provided by machine learning, enroll in our Data Science and Machine Learning courses for more lucrative career options in this landscape.


Your one-stop-shop for Machine Learning is just a click away. Access our live online training and find easy solutions to all your queries here.


Priyankur Sarkar

Data Science Enthusiast

Priyankur Sarkar loves to play with data and get insightful results out of it, then turn those data insights and results in business growth. He is an electronics engineer with a versatile experience as an individual contributor and leading teams, and has actively worked towards building Machine Learning capabilities for organizations.


Suggested Blogs

Regression Analysis And Its Techniques in Data Science

As a Data Science enthusiast, you might already know that a majority of business decisions these days are data-driven. However, it is essential to understand how to parse through all the data. One of the most important types of data analysis in this field is Regression Analysis. Regression Analysis is a form of predictive modeling technique mainly used in statistics. The term “regression” in this context, was first coined by Sir Francis Galton, a cousin of Sir Charles Darwin. The earliest form of regression was developed by Adrien-Marie Legendre and Carl Gauss - a method of least squares. Before getting into the what and how of regression analysis, let us first understand why regression analysis is essential. Why is regression analysis important? The evaluation of relationship between two or more variables is called Regression Analysis. It is a statistical technique.  Regression Analysis helps enterprises to understand what their data points represent, and use them wisely in coordination with different business analytical techniques in order to make better decisions. Regression Analysis helps an individual to understand how the typical value of the dependent variable changes when one of the independent variables is varied, while the other independent variables remain unchanged.  Therefore, this powerful statistical tool is used by Business Analysts and other data professionals for removing the unwanted variables and choosing only the important ones. The benefit of regression analysis is that it allows data crunching to help businesses make better decisions. A greater understanding of the variables can impact the success of a business in the coming weeks, months, and years in the future.  Data Science The regression method of forecasting, as the name implies, is used for forecasting and for finding the casual relationship between variables. From a business point of view, the regression method of forecasting can be helpful for an individual working with data in the following ways: Predicting sales in the near and long term. Understanding demand and supply. Understanding inventory levels. Review and understand how variables impact all these factors. However, businesses can use regression methods to understand the following: Why did the customer service calls drop in the past months? How the sales will look like in the next six months? Which ‘marketing promotion’ method to choose? Whether to expand the business or to create and market a new product. The ultimate benefit of regression analysis is to determine which independent variables have the most effect on a dependent variable. It also helps to determine which factors can be ignored and those that should be emphasized. Let us now understand what regression analysis is and its associated variables. What is regression analysis?According to the renowned American mathematician John Tukey, “An approximate answer to the right problem is worth a good deal more than an exact answer to an approximate problem". This is precisely what regression analysis strives to achieve.  Regression analysis is basically a set of statistical processes which investigates the relationship between a dependent (or target) variable and an independent (or predictor) variable. It helps assess the strength of the relationship between the variables and can also model the future relationship between the variables. Regression analysis is widely used for prediction and forecasting, which overlaps with Machine Learning. 
On the other hand, it is also used for time series modeling and finding causal effect relationships between variables. For example, the relationship between rash driving and the number of road accidents by a driver can be best analyzed using regression.  Let us now understand regression with an example. Meaning of RegressionLet us understand the concept of regression with an example. Consider a situation where you conduct a case study on several college students. We will understand if students with high CGPA also get a high GRE score. Our first job is to collect the details of the GRE scores and CGPAs of all the students of a college in a tabular form. The GRE scores and the CGPAs are listed in the 1st and 2nd columns, respectively. To understand the relationship between CGPA and GRE score, we need to draw a scatter plot.  Here, we can see a linear relationship between CGPA and GRE score in the scatter plot. This indicates that if the CGPA increases, the GRE scores also increase. Thus, it would also mean that a student with a high CGPA is likely to have a greater chance of getting a high GRE score. However, if a question arises like “If the CGPA of a student is 8.51, what will be the GRE score of the student?”. We need to find the relationship between these two variables to answer this question. This is the place where Regression plays its role. In a regression algorithm, we usually have one dependent variable and one or more than one independent variable where we try to regress the dependent variable "Y" (in this case, GRE score) using the independent variable "X" (in this case, CGPA). In layman's terms, we are trying to understand how the value of "Y" changes concerning the change in "X". Let us now understand the concept of dependent and independent variables. Dependent and Independent variables In data science, variables refer to the properties or characteristics of certain events or objects. There are mainly two types of variables while performing regression analysis which is as follows: Independent variables – These variables are manipulated or are altered by researchers whose effects are later measured and compared. They are also referred to as predictor variables. They are called predictor variables because they predict or forecast the values of dependent variables in a regression model. Dependent variables – These variables are the type of variable that measures the effect of the independent variables on the testing units. It is safer to say that dependent variables are completely dependent on them. They are also referred to as predicted variables. They are called because these are the predicted or assumed values by the independent or predictor variables. When an individual is looking for a relationship between two variables, he is trying to determine what factors make the dependent variable change. For example, consider a scenario where a student's score is a dependent variable. It could depend on many independent factors like the amount of study he did, how much sleep he had the night before the test, or even how hungry he was during the test.  In data models, independent variables can have different names such as “regressors”, “explanatory variable”, “input variable”, “controlled variable”, etc. 
On the other hand, dependent variables are called “regressand,” “response variable”, “measured variable,” “observed variable,” “responding variable,” “explained variable,” “outcome variable,” “experimental variable,” or “output variable.” Below are a few examples to understand the usage and significance of dependent and independent variables in a wider sense: Suppose you want to estimate the cost of living of a person using a regression model. In that case, you need to take independent variables as factors such as salary, age, marital status, etc. The cost of living of a person is highly dependent on these factors. Thus, it is designated as the dependent variable. Another scenario is in the case of a student's poor performance in an examination. The independent variable could be factors, for example, poor memory, inattentiveness in class, irregular attendance, etc. Since these factors will affect the student's score, the dependent variable, in this case, is the student's score.  Suppose you want to measure the effect of different quantities of nutrient intake on the growth of a newborn child. In that case, you need to consider the amount of nutrient intake as the independent variable. In contrast, the dependent variable will be the growth of the child, which can be calculated by factors such as height, weight, etc. Let us now understand the concept of a regression line. What is the difference between Regression and Classification?Regression and Classification both come under supervised learning methods, which indicate that they use labelled training datasets to train their models and make future predictions. Thus, these two methods are often classified under the same column in machine learning.However, the key difference between them is the output variable. In regression, the output tends to be numerical or continuous, whereas, in classification, the output is categorical or discrete in nature.  Regression and Classification have certain different ways to evaluate the predictions, which are as follows: Regression predictions can be interpreted using root mean squared error, whereas classification predictions cannot.  Classification predictions can be evaluated using accuracy, whereas, on the other hand, regression predictions cannot be evaluated using the same. Conclusively, we can use algorithms like decision trees and neural networks for regression and classification with small alterations. However, some other algorithms are more difficult to implement for both problem types, for example, linear regression for regressive predictive modeling and logistic regression for classification predictive modeling. What is a Regression Line?In the field of statistics, a regression line is a line that best describes the behaviour of a dataset, such that the overall distance from the line to the points (variable values) plotted on a graph is the smallest. In layman's words, it is a line that best fits the trend of a given set of data.  Regression lines are mainly used for forecasting procedures. The significance of the line is that it describes the interrelation of a dependent variable “Y” with one or more independent variables “X”. It is used to minimize the squared deviations of predictions.  If we take two variables, X and Y, there will be two regression lines: Regression line of Y on X: This gives the most probable Y values from the given values of X. Regression line of X on Y: This gives the most probable values of X from the given values of Y. 
The correlation between the variables X and Y depend on the distance between the two regression lines. The degree of correlation is higher if the regression lines are nearer to each other. In contrast, the degree of correlation will be lesser if the regression lines are farther from each other.  If the two regression lines coincide, i.e. only a single line exists, correlation tends to be either perfect positive or perfect negative. However, if the variables are independent, then the correlation is zero, and the lines of regression will be at right angles.  Regression lines are widely used in the financial sector and business procedures. Financial Analysts use linear regression techniques to predict prices of stocks, commodities and perform valuations, whereas businesses employ regressions for forecasting sales, inventories, and many other variables essential for business strategy and planning. What is the Regression Equation? In statistics, the Regression Equation is the algebraic expression of the regression lines. In simple terms, it is used to predict the values of the dependent variables from the given values of independent variables.  Let us consider one regression line, say Y on X and another line, say X on Y, then there will be one regression equation for each regression line: Regression Equation of Y on X: This equation depicts the variations in the dependent variable Y from the given changes in the independent variable X. The expression is as follows: Ye = a + bX Where,  Ye is the dependent variable, X is the independent variable, a and b are the two unknown constants that determine the position of the line. The parameter “a” indicates the distance of a line above or below the origin, i.e. the level of the fitted line, whereas parameter "b" indicates the change in the value of the independent variable Y for one unit of change in the dependent variable X. The parameters "a" and "b" can be calculated using the least square method. According to this method, the line needs to be drawn to connect all the plotted points. In mathematical terms, the sum of the squares of the vertical deviations of observed Y from the calculated values of Y is the least. In other words, the best-fitted line is obtained when ∑ (Y-Ye)2 is the minimum. To calculate the values of parameters “a” and “b”, we need to simultaneously solve the following algebraic equations: ∑ Y = Na + b ∑ X ∑ XY = a ∑ X + b ∑ X2 Regression Equation of X on Y: This equation depicts the variations in the independent variable Y from the given changes in the dependent variable X. The expression is as follows: Xe = a + bY  Where,  Xe is the dependent variable, Y is the independent variable, a and b are the two unknown constants that determine the position of the line. Again, in this equation, the parameter “a” indicates the distance of a line above or below the origin, i.e. the level of the fitted line, whereas parameter "b" indicates the slope, i.e. change in the value of the dependent variable X for a unit of change in the independent variable Y. To calculate the values of parameters “a” and “b” in this equation, we need to simultaneously solve the following two normal equations: ∑ X = Na + b ∑ Y ∑ XY = a ∑ Y + b ∑ Y2 Please note that the regression lines can be completely determined only if we obtain the constant values “a” and “b”. How does Linear Regression work?Linear Regression is a Machine Learning algorithm that allows an individual to map numeric inputs to numeric outputs, fitting a line into the data points. 
It is an approach to modeling the relationship between one or more variables. This allows the model to able to predict outputs. Let us understand the working of a Linear Regression model using an example. Consider a scenario where a group of tech enthusiasts has created a start-up named Orange Inc. Now, Orange has been booming since 2016. On the other hand, you are a wealthy investor, and you want to know whether you should invest your money in Orange in the next year or not. Let us assume that you do not want to risk a lot of money, so you buy a few shares. Firstly, you study the stock prices of Orange since 2016, and you see the following figure: It is indicative that Orange is growing at an amazing rate where their stock price has gone from 100 dollars to 500 dollars in only three years. Since you want your investment to boom along with the company's growth, you want to invest in Orange in the year 2021. You assume that the stock price will fall somewhere around $500 since the trend will likely not go through a sudden change. Based on the information available on the stock prices of the last couple of years, you were able to predict what the stock price is going to be like in 2021.  You just inferred your model in your head to predict the value of Y for a value of X that is not even in your knowledge. This mental method you undertook is not accurate anyway because you were not able to specify what exactly will be the stock price in the year 2021. You just have an idea that it will probably be above 500 dollars. This is where Regression plays its role. The task of Regression is to find the line that best fits the data points on the plot so that we can calculate where the stock price is likely to be in the year 2021.  Let us examine the Regression line (in red) by understanding its significance. By making some alterations, we obtained that the stock price of Orange is likely to be a little higher than 600 dollars by the year 2021. This example is quite oversimplified, so let us examine the process and how we got the red line on the next plot. Training the Regressor The example mentioned above is an example of Univariate Linear Regression since we are trying to understand the change in an independent variable X to one dependent variable, Y. Any regression line on a plot is based on the formula: f(X) = MX + B  Where, M is the slope of the line, B is the y-intercept that allows the vertical movement of the line, And X is the function’s input variable. In the field of Machine Learning, the formula is as follows: h(X) = W0 + W1X  Where, W0 and W1 are the weights, X is the input variable, h(X) is the label or the output variable. Regression works by finding the weights W0 and W1 that lead to the best-fitting line for the input variable X. The best-fitted line is obtained in terms of the lowest cost. Now, let us understand what does cost means here. The cost functionDepending upon the Machine Learning application, the cost could take different forms. However, in a generalized view, cost mainly refers to the loss or error that the regression model yields in its distance from the original training dataset. In a Regression model, the cost function is the Squared Error Cost: J(W0,W1) = (1/2n) Σ { (h(Xi) - Ti)2} for all i =1 until i = n    Where, J(W0, W1) is the total cost of the model with weights W0 and W1, h(Xi) is the model’s prediction of the independent variable Y at feature X with index  i, Ti is the actual y-value at index i, and n refers to the total number of data points in the dataset. 
The cost function is used to obtain the distance between the y-value the model predicted and the actual y-value in the data set. Then, the function squares this distance and divides it by the number of data points, resulting in the average cost. The 2 in the term ‘(1/2n)’ is merely to make the differentiation process in the cost function easier. Training the dataset Training a regression model uses a Learning Algorithm to find the weights W0 and W1 that will minimize the cost and plug them into the straight-line function to obtain the best-fitted line. The pseudo-code for the algorithm is as follows:
Repeat until convergence {
    temp0 := W0 - a.((d/dW0) J(W0,W1))
    temp1 := W1 - a.((d/dW1) J(W0,W1))
    W0 = temp0
    W1 = temp1
}
Here, (d/dW0) and (d/dW1) refer to the partial derivatives of J(W0, W1) with respect to W0 and W1, respectively. The gist of the partial differentiation is the following derivatives, evaluated across the data points: (d/dW0) J(W0,W1) = W0 + W1.X - T (d/dW1) J(W0,W1) = (W0 + W1.X - T).X Implementing the Gradient Descent learning algorithm will result in a model with minimum cost. The weights that led to the minimum cost are taken as the final values for the line function h(X) = W0 + W1X. Goodness-of-Fit in a Regression Model Assessing goodness-of-fit is a part of the linear regression technique, which fits an equation that minimizes the distance between the fitted line and all of the data points. Determining how well the model fits the data is crucial in a linear model. The general idea is that if the deviations between the observed values and the predicted values of the linear model are small and unbiased, the model fits the data well. In technical terms, “goodness-of-fit” is a mathematical measure of how well a model fits a set of observations, describing the differences between the observed and expected values. This measure can be used in statistical hypothesis testing. How do businesses use Regression Analysis? Regression Analysis is a statistical technique used to evaluate the relationship between a dependent variable and one or more independent variables. Organizations use regression analysis to understand the significance of their data points and use analytical techniques to make better decisions. Business Analysts and Data Professionals use this statistical tool to discard unwanted variables and select the significant ones. There are numerous ways that businesses use regression analysis. Let us discuss some of them below. 1. Decision-making Businesses need to make better decisions to run smoothly and efficiently, and it is also necessary to understand the effects of the decisions taken. They collect data on various factors such as sales, investments, expenditures, etc. and analyze them for further improvements. Organizations use regression analysis to make sense of the data and gather meaningful insights. Business analysts and data professionals use this method to make strategic business decisions. 2. Optimization of business The main role of regression analysis is to convert the collected data into actionable insights. Old-school techniques like guesswork and untested assumptions have been set aside by organizations, which are now focusing on adopting data-driven decision-making techniques that improve work performance. This analysis helps the management teams in an organization take practical and smart decisions. The huge volume of data can be interpreted and understood to gain efficient insights. 3.
Predictive Analysis Businesses make use of regression analysis to find patterns and trends. Business Analysts build predictions about future trends using historical data. Regression methods can also go beyond predicting the impact on immediate revenue. Using this method, you can forecast the number of customers willing to buy a service and use that data to estimate the workforce needed to run that service. Most insurance companies use regression analysis to calculate the credit health of their policyholders and the probable number of claims in a certain period. Predictive Analysis helps businesses to: Minimize costs Minimize the number of required tools Provide fast and efficient results Detect fraud Manage risk Optimize marketing campaigns 4. Correcting errors Regression Analysis is not only used for predicting trends; it is also useful for identifying errors in judgement. Let us consider a situation where the executive of an organization wants to increase the working hours of its employees and make them work extra time to increase profits. In such a case, regression analysis of all the variables may conclude that an increase in working hours beyond the existing schedule will also lead to an increase in operating expenses like utilities, accounting expenditures, etc., thus leading to an overall decrease in profit. Regression Analysis provides quantitative support for better decision-making and helps organizations minimize mistakes. 5. New Insights Organizations generate a large amount of cluttered data that can provide valuable insights. However, this vast data is useless without proper analysis. Regression analysis is responsible for finding relationships between variables by discovering patterns that were not previously considered. For example, analyzing data from sales systems and purchase accounts will reveal market patterns such as increased demand on certain days of the week or at certain times of the year. You can maintain optimal stock and personnel using this information before a demand spike arises. The guesswork gets eliminated by data-driven decisions. It allows companies to improve their business performance by concentrating on the significant areas with the highest impact on operations and revenue. Use cases of Regression Analysis Pharmaceutical companies Pharmaceutical organizations use regression analysis to analyze quantitative stability data for the retest period or to estimate shelf life. In this method, we find the nature of the relationship between an attribute and time, and we determine from the analyzed data whether it should be transformed for linear or non-linear regression analysis. Finance The simple linear regression technique is also called the Ordinary Least Squares or OLS method. This method provides a general explanation for placing the line of best fit among the data points. This particular tool is used for forecasting and financial analysis. You can also use it with the Capital Asset Pricing Model (CAPM), which depicts the relationship between the risk of investing and the expected return. Credit Card Credit card companies use regression analysis to analyze various factors such as a customer's risk of credit default, prediction of credit balance, expected consumer behaviour, and so on. With the help of the analyzed information, the companies offer specific EMI options and minimize defaults among risky customers. When Should I Use Regression Analysis?
Regression Analysis is mainly used to describe the relationships between a set of independent variables and the dependent variable. It generates a regression equation where the coefficients correspond to the relationship between each independent variable and the dependent variable. Analyze a wide variety of relationships You can use the method of regression analysis to do many things, for example: Model multiple independent variables. Include continuous and categorical variables. Use polynomial terms for curve fitting. Evaluate interaction terms to examine whether the effect of one independent variable depends on the value of another variable. Regression Analysis can untangle very critical problems where the variables are entwined. Consider yourself to be a researcher studying any of the following: What impact do socio-economic status and race have on educational achievement? Do education and IQ affect earnings? Do exercise habits and diet affect weight? Do drinking coffee and smoking cigarettes reduce the mortality rate? Does a particular exercise have an impact on bone density? These research questions generate a huge amount of data that entwines numerous independent and dependent variables and raises questions about their influence on each other. It is an important task to untangle this web of related variables and find out which variables are statistically essential and what the role of each of these variables is. Regression analysis comes to the rescue in all of these scenarios. Control the independent variables Regression analysis describes how the changes in each independent variable are related to the changes in the dependent variable, and it statistically controls for every other variable in the regression model. In the process of regression analysis, it is crucial to isolate the role of each variable. Consider a scenario where you participated in an exercise intervention study. You aimed to determine whether the intervention was responsible for increasing the subjects' bone mineral density. To achieve that, you need to isolate the role of the exercise intervention from other factors that can impact bone density, such as diet or other physical activity. To perform this task, you need to reduce the effect of these confounding variables. Regression analysis estimates the effect that a change in one independent variable has on the dependent variable while all other independent variables are held constant. This particular process allows you to understand each independent variable's role without considering the other variables in the regression model. Now, let us understand how regression can help control the other variables in the process. According to a recent study on the effect of coffee consumption on mortality, the initial results depicted that the higher the intake of coffee, the higher the risk of death. However, in their first model the researchers did not account for the fact that many coffee drinkers also smoke. After smoking was included in the model, the regression results were quite different from the initial results. They depicted that coffee intake lowers the risk of mortality while smoking increases it. This model isolates the role of each variable while holding the other variables constant. You can examine the effect of coffee intake while controlling for smoking. On the other hand, you can also look at smoking while controlling for coffee intake.
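To make the idea of “holding the other variables constant” concrete, here is a minimal Python sketch using the statsmodels formula interface. The dataset is synthetic and the relationships in it (smokers drink more coffee, and smoking rather than coffee raises mortality) are invented purely for this illustration:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic observational data, invented for this sketch
rng = np.random.default_rng(1)
smoking = rng.integers(0, 2, 500)                       # 0 = non-smoker, 1 = smoker
coffee = rng.poisson(1 + 2 * smoking)                   # smokers tend to drink more coffee
mortality = 5 + 3 * smoking - 0.2 * coffee + rng.normal(0, 1, 500)
df = pd.DataFrame({"mortality": mortality, "coffee": coffee, "smoking": smoking})

# Model 1 omits smoking, so its coffee coefficient also absorbs the effect of smoking
model_without = smf.ols("mortality ~ coffee", data=df).fit()

# Model 2 includes smoking, so the coffee coefficient is estimated while smoking is held constant
model_with = smf.ols("mortality ~ coffee + smoking", data=df).fit()

print(model_without.params["coffee"])   # misleadingly looks harmful (positive)
print(model_with.params["coffee"])      # close to the true, slightly negative coffee effect

Comparing the two coffee coefficients is the same before-and-after contrast described in the coffee-and-smoking study above.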
This particular example shows how omitting a significant variable can produce misleading results, because that variable is left uncontrolled. This warning is mainly applicable to observational studies, where the effects of omitted significant variables can be unbalanced. Omitted variable bias can be minimized through randomization, since true experiments tend to balance out the effects of these variables equally across groups. What are Residuals in Regression Analysis? Residuals identify the deviation of observed values from the expected values. They are also referred to as error or noise terms. A residual gives an insight into how good our model is against the actual value, while the underlying error terms have no real-life representation of their own. Calculating the real values of intercept, slope, and residual terms can be a complicated task. However, the Ordinary Least Squares (OLS) regression technique can help us estimate an efficient model. The technique minimizes the sum of the squared residuals. With the help of residual plots, you can check whether the observed error is consistent with stochastic error (the differences between the expected and observed values must be random and unpredictable). What are the Linear model assumptions in Regression Analysis? Regression Analysis is the first step in the process of predictive modeling. It is quite easy to implement, and its syntax and parameters do not create any kind of confusion. However, the purpose of regression analysis is not served just by running a single line of code. It is much more than that. The function plot(model_name) returns four plots in the R programming language. Each of these plots provides essential information about the dataset. Most beginners in the field are unable to interpret this information. But once you understand these plots, you can bring important improvements to your regression model. For significant improvements in your regression model, it is also crucial to understand the assumptions behind the model and how you can fix things if any assumption gets violated. The four assumptions that should be met before conducting linear regression are as follows: Linear Relationship: A linear relationship exists between the independent variable, x, and the dependent variable, y. Independence: The residuals in linear regression are independent. In other words, there is no correlation between consecutive residuals in time series data. Homoscedasticity: Residuals have constant variance at every level of X. Normality: The residuals of the model are normally distributed. Assumption 1: Linear Relationships Explanation The first assumption in linear regression is that there is a linear relationship between the independent variable X and the dependent variable Y. How to determine if this assumption is met The quickest and easiest way to check this assumption is by creating a scatter plot of X vs Y. By looking at the scatter plot, you can have a visual representation of the linear relationship between the two variables. If the points in the plot roughly fall along a straight line, then there exists some type of linear relationship between the variables, and this assumption is met. For example, consider this first plot below.
The points in the plot look like they fall roughly on a straight line, which indicates that there exists a linear relationship between X and Y. However, there doesn't appear to be a linear relationship between X and Y in this second plot below. And in this third plot, there appears to be a clear relationship between X and Y, but not a linear one. What to do if this assumption is violated If you create a scatter plot between X and Y and do not find any linear relationship between the two variables, then you can do two things: You can apply a non-linear transformation to the dependent or independent variables. Common examples include taking the log, the square root, or the reciprocal of the independent and dependent variable. You can add another independent variable to the regression model. If the plot of X vs Y has a parabolic shape, then adding X² as an additional independent variable in the linear regression model might make sense. Assumption 2: Independence Explanation The second assumption of linear regression is that the residuals should be independent. Its relevance is most visible while working with time-series data: ideally, there should be no pattern among consecutive residuals. For example, in a time series model, the residuals should not grow steadily along with time. How to determine if this assumption is met To determine if this assumption is met, plot the residuals against time and look at the residual time series plot. In an ideal plot, the residual autocorrelations should fall within the 95% confidence bands around zero, located at about ±2/√n, where n denotes the sample size. You can also perform the Durbin-Watson test to formally examine if this assumption is met. What to do if this assumption is violated If this assumption is violated, you can do three things, which are as follows: If there is a positive serial correlation, you can add lags of the independent variable or dependent variable to the regression model. If there is a negative serial correlation, check that none of your variables is over-differenced. If there is a seasonal correlation, consider adding a seasonal dummy variable to your regression model. Assumption 3: Homoscedasticity Explanation The third assumption of linear regression is that the residuals should have constant variance at every level of X. This property is called homoscedasticity. When homoscedasticity is not present, the residuals suffer from heteroscedasticity. The outcome of the regression analysis becomes hard to trust when heteroscedasticity is present in the model. It increases the variance of the regression coefficient estimates, but the model does not recognize this fact, which can make the model declare that a term is statistically significant when it is not. How to determine if this assumption is met To determine if this assumption is met, fit a regression line to the data set and then create a scatter plot of the fitted values vs the residuals. Below is a scatterplot showing a typical fitted value vs residual plot in which heteroscedasticity is present: You can observe how the residuals become much more spread out as the fitted values get larger.
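These residual checks can be reproduced with a few lines of Python. The sketch below is illustrative only: it fits an ordinary least squares model on synthetic data (invented here) with statsmodels, prints the Durbin-Watson statistic for the independence check, and draws the fitted-values-vs-residuals plot used for the homoscedasticity check:

import numpy as np
import matplotlib.pyplot as plt
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson

# Synthetic data, used only for illustration
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 100)
y = 3 + 2 * x + rng.normal(0, 1, 100)

X = sm.add_constant(x)                     # adds the intercept column
results = sm.OLS(y, X).fit()

# Independence: Durbin-Watson values near 2 suggest no serial correlation
print("Durbin-Watson statistic:", durbin_watson(results.resid))

# Homoscedasticity: look for a fan or cone shape in this plot
plt.scatter(results.fittedvalues, results.resid)
plt.axhline(0, color="red")
plt.xlabel("Fitted values")
plt.ylabel("Residuals")
plt.show()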
The “cone” shape is a classic sign of heteroscedasticity. What to do if this assumption is violated If this assumption is violated, you can do three things, which are as follows: Transform the dependent variable: The most common transformation is simply taking the log of the dependent variable. For example, if you are using population size as an independent variable to predict the number of flower shops in a city as the dependent variable, you can instead use population size to predict the log of the number of flower shops. This often causes the heteroscedasticity to go away. Redefine the dependent variable: One common way is to use a rate rather than the raw value. In the previous example, you would use population size to predict the number of flower shops per capita instead. This reduces the variability that naturally occurs among larger populations. Use weighted regression: The third way to fix heteroscedasticity is to use weighted regression. In this regression method, we assign a weight to each data point depending on the variance of its fitted value, giving small weights to data points having higher variances, which shrinks their squared residuals. When the proper weights are used, the problem of heteroscedasticity gets eradicated. Assumption 4: Normality Explanation The last assumption is that the residuals of the model should be normally distributed. How to determine if this assumption is met There are two common ways to check this: 1. Use Q-Q plots to examine the assumption visually. Also known as the quantile-quantile plot, it is used to determine whether or not the residuals of the regression model follow a normal distribution. The normality assumption is met if the points on the plot roughly form a straight diagonal line. However, if the Q-Q plot shows the residuals clearly deviating from a straight diagonal line, they do not follow a normal distribution. 2. Some other formal statistical tests to check the normality assumption are Shapiro-Wilk, Kolmogorov-Smirnov, Jarque-Bera, and D'Agostino-Pearson. These tests have a limitation, however: with large sample sizes they become very sensitive and often conclude that the residuals are not normal even for minor departures. Therefore, graphical techniques like Q-Q plots are easier to use for checking the normality assumption and are often preferable. What to do if this assumption is violated If this assumption is violated, you can do two things, which are as follows: Firstly, examine whether outliers are present; if they exist, make sure they are real values and not data errors, and verify that they are not having a large impact on the distribution. Secondly, you can apply a non-linear transformation to the independent and/or dependent variables. Common examples include taking the log, the square root, or the reciprocal of the independent and/or dependent variable. How to perform a simple linear regression? The formula for a simple linear regression is: Y = B0 + B1X + e Where, Y refers to the predicted value of the dependent variable Y for any given value of the independent variable X, B0 denotes the intercept, i.e. the predicted value of y when x is 0, B1 denotes the regression coefficient, i.e. how much we expect the value of y to change as the value of x increases, X refers to the independent variable (the variable we expect to be influencing y), and e denotes the error of the estimate, i.e. how much variation exists in our estimate of the regression coefficient.
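Before the R walkthrough that follows, here is a minimal Python sketch of fitting this simple formula with SciPy; the small dataset is made up for illustration:

from scipy import stats

# Made-up (X, Y) observations
x = [1, 2, 3, 4, 5, 6]
y = [2.2, 2.8, 4.1, 4.9, 6.2, 6.8]

result = stats.linregress(x, y)
print("intercept B0:", result.intercept)
print("slope B1:", result.slope)
print("p-value for B1:", result.pvalue)
print("standard error of B1:", result.stderr)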
The Linear regression model's task is to find the best-fitted line through the data by looking for the regression coefficient B1 that minimizes the total error e of the model. Simple linear regression in R R is a free, powerful statistical programming language that is widely used by data professionals. Let us consider a dataset of income and happiness that we will use to perform regression analysis. The first task is to load the income.data dataset into the R environment, and then generate a linear model describing the relationship between income and happiness with a command of the form: income.happiness.lm <- lm(happiness ~ income, data = income.data) You can then inspect the fitted model with summary(income.happiness.lm). In the output, the Pr(>|t|) column displays the p-value, which tells us how likely we would be to see the estimated effect of income on happiness if the null hypothesis of no effect were true. We can reject the null hypothesis since the p-value is very low (p < 0.001), and we can conclude that income has a statistically significant effect on happiness. The most important thing here in the linear regression model is the p-value. In this example, it is highly significant (p < 0.001), which shows that this model is a good fit for the observed data. Presenting the results While presenting your results, you should include the regression coefficient, the standard error of the estimate, and the p-value. You should also interpret your numbers so that readers can have a clear understanding of the regression coefficient: A significant relationship (p < 0.001) has been found between income and happiness (R² = 0.71 ± 0.018), with a 0.71-unit increase in reported happiness for every $10,000 increase in income. For a simple linear regression, you can simply plot the observations on the x and y-axis of a scatter plot and then include the regression line and regression function. What is multiple regression analysis? Multiple Regression is an extension of simple linear regression and is used to estimate the relationship between two or more independent variables and one dependent variable. You can perform multiple regression analysis to know: The strength of the relationship between one or more independent variables and one dependent variable. For example, you can use it to understand whether exam performance can be predicted based on revision time, test anxiety, lecture attendance, and gender. The overall fit, i.e. the variance explained by the model, and the relative contribution of each of the predictors to the total variance explained. For example, you might want to know how much of the variation in a student's exam performance can be explained by revision time, test anxiety, lecture attendance, and gender, and the relative impact of each independent variable in explaining the variance. How to perform multiple linear regression? The formula for multiple linear regression is: Y = B0 + B1X1 + … + BnXn + e Where, Y refers to the predicted value of the dependent variable Y for any given values of the independent variables, B0 denotes the intercept, i.e. the predicted value of Y when all the X values are 0, B1X1 denotes the regression coefficient (B1) of the first independent variable X1, i.e. how much we expect the value of Y to change as the value of X1 increases, … does the same for all the other independent variables we want to test, BnXn refers to the regression coefficient of the last independent variable, and e denotes the error estimate of the model, i.e. how much variation exists in our estimate of the regression coefficient.
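As a rough Python counterpart to the exam-performance example above, the sketch below fits such a model with the statsmodels formula interface. The dataset, the column names, and the coefficients used to generate it are all hypothetical, invented for this illustration:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic student data, invented for this sketch
rng = np.random.default_rng(2)
n = 200
revision_time = rng.uniform(0, 20, n)
test_anxiety = rng.uniform(1, 10, n)
attendance = rng.uniform(50, 100, n)
gender = rng.choice(["male", "female"], n)
exam_score = 20 + 2.5 * revision_time - 1.5 * test_anxiety + 0.3 * attendance + rng.normal(0, 5, n)
students = pd.DataFrame({"exam_score": exam_score, "revision_time": revision_time,
                         "test_anxiety": test_anxiety, "attendance": attendance, "gender": gender})

model = smf.ols("exam_score ~ revision_time + test_anxiety + attendance + gender",
                data=students).fit()

print(model.params)      # intercept B0 plus one coefficient per predictor
print(model.rsquared)    # share of the variance in exam_score explained by the model
print(model.summary())   # t-statistics and p-values for each coefficient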
It is the task of the Multiple Linear regression model to find the best-fitted line through the data by calculating the following three things: The regression coefficients that lead to the least error in the overall multiple regression model. The t-statistic of the overall regression model. The associated p-value. The multiple regression model also calculates the t-statistic and p-value for each regression coefficient. Multiple linear regression in R Let us consider a dataset of heart disease rates and other factors that affect the heart, which we will use to perform a multiple regression analysis. The first task is to load the heart.data dataset into the R environment, and then generate a linear model describing the relationship between heart disease, biking to work, and smoking with a command of the form: heart.disease.lm <- lm(heart.disease ~ biking + smoking, data = heart.data) You can then inspect the fitted model with summary(heart.disease.lm). In the output, the Pr(>|t|) column displays the p-value, which tells us how likely we would be to see the estimated effects of biking and smoking on heart disease if the null hypothesis of no effect were true. We can reject the null hypothesis since the p-value is very low (p < 0.001), and we can conclude that both biking to work and smoking have a significant influence on rates of heart disease. The most important thing here in the linear regression model is the p-value. In this example, it is highly significant (p < 0.001), which shows that this model is a good fit for the observed data. Presenting the results While presenting your results, you should include the regression coefficient, the standard error of the estimate, and the p-value. You should also interpret your numbers in the proper context so that readers can have a clear understanding of the regression coefficients: In our survey of 500 towns, we found significant relationships between the frequency of biking to work and the frequency of heart disease and between the frequency of smoking and heart disease (p < 0.001 for each). Specifically, we found a 0.2% decrease (± 0.0014) in the frequency of heart disease for every 1% increase in biking and a 0.178% increase (± 0.0035) in the frequency of heart disease for every 1% increase in smoking. For multiple linear regression, you can simply plot the observations on the X and Y-axis of a scatter plot and then include the regression line and regression function: In this example, we have calculated the predicted values of the dependent variable heart disease across the observed values for the percentage of people biking to work. However, to include the effect of smoking on the dependent variable heart disease, we had to calculate the predicted values while holding the variable smoking constant at the minimum, mean, and maximum observed smoking rates. What is R-squared in Regression Analysis? In data science, R-squared (R²) is the coefficient of determination, or the coefficient of multiple determination in the case of multiple regression. In the linear regression model, R-squared acts as an evaluation metric that describes the scatter of the data points around the fitted regression line, i.e. the percentage of variation in the dependent variable that the model accounts for. R-squared and the Goodness-of-fit R-squared is the proportion of variance in the dependent variable that the independent variables can explain. The value of R-squared stays between 0 and 100%: 0% corresponds to a model that does not explain any of the variability of the response data around its mean; in that case, the mean of the dependent variable predicts the dependent variable as well as the regression model does.
On the other hand, 100% corresponds to a model that explains all the variability of the response variable around its mean. If your value of R² is large, you have a better chance of your regression model fitting the observations. Although this statistical measure provides essential insights about the regression model, you should not depend on it for a complete assessment of the model. It lacks information about the relationship between the dependent and the independent variables, and it does not, on its own, establish the quality of the regression model. Hence, as a user, you should always analyze R² together with other measures and then draw conclusions about the regression model. Visual Representation of R-squared You can demonstrate R-squared visually by plotting fitted values against observed values; such plots illustrate how R-squared values represent the scatter around the regression line. As observed in the pictures above, the value of R-squared for the regression model on the left side is 17%, and for the model on the right it is 83%. When the explained variance is high in a regression model, the data points tend to fall closer to the fitted regression line. However, a regression model with an R² of 100% is an ideal scenario that is practically impossible. In such a case, the predicted values would equal the observed values, leading all the data points to fall exactly on the regression line. Interpretation of R-squared The simplest interpretation of R-squared is how well the regression model fits the observed data values. Let us look at an example to understand this. Consider a model where the R² value is 70%. This would mean that the model explains 70% of the variation in the dependent variable. Usually, when the R² value is high, it suggests a better fit for the model. The usefulness of this statistical measure does not depend on R² alone; it can also depend on several other factors, like the nature of the variables, the units in which the variables are measured, etc. So, a high R-squared value is not always attainable for a regression model and can sometimes indicate problems too. A low R-squared value is a negative indicator for a model in general. However, if we consider the other factors, a low R² value can still result in a good predictive model. Calculation of R-squared R-squared can be evaluated using the following formula: R² = SSregression / SStotal Where: SSregression – the explained sum of squares due to the regression model. SStotal – the total sum of squares. The sum of squares due to regression assesses how well the model represents the fitted data, while the total sum of squares measures the variability in the data used in the regression model. Now let us come back to the earlier situation where we have two factors, the number of hours of study per day and the score in a particular exam, to understand the calculation of R-squared more effectively. Here, the target variable is represented by the score and the independent variable by the number of study hours per day. In this case, we will need a simple linear regression model, and the equation of the model will be as follows: ŷ = w1x1 + b The parameters w1 and b can be calculated by reducing the squared error over all the data points. The following expression is called the least square function: minimize ∑(yi − w1x1i − b)² Now, R-squared calculates the amount of variance of the target variable explained by the model, i.e. by the function of the independent variable.
However, to achieve that, we need to calculate two things: the variance of the target variable around its mean, var(avg) = ∑(yi − ȳ)², and the variance of the target variable around the best-fit line, var(model) = ∑(yi − ŷ)². Finally, we can calculate R-squared as follows: R² = 1 − [var(model)/var(avg)] = 1 − [∑(yi − ŷ)²/∑(yi − ȳ)²] What are the different types of regression analysis? Other than simple linear regression and multiple linear regression, there are mainly 5 types of regression techniques. Let us discuss them one by one. Polynomial Regression In the polynomial regression technique, the power of the independent variable has to be more than 1. The expression below shows a polynomial equation: y = a + bx² In this regression technique, the best-fitted line is not a straight line but a curve that fits the data points. An important point to keep in mind while performing polynomial regression is that trying to fit a polynomial of a higher degree to get a lower error can result in overfitting. You should always plot the relationships to see the fit and make sure that the curve fits the nature of the problem; plotting is the easiest way to check this. Logistic Regression The logistic regression technique is used when the dependent variable is discrete in nature, for example 0 or 1, true or false, etc. The target variable in this regression can have only two values, and the relation between the target variable and the independent variables is denoted by a sigmoid curve. To measure the relationship between the target variable and the independent variables, the logit function is used. The expression below shows a logistic equation: logit(p) = ln(p/(1−p)) = b0 + b1X1 + b2X2 + b3X3 + … + bkXk Where, p denotes the probability of occurrence of the feature. Ridge Regression The Ridge Regression technique is usually used when there is a high correlation between the independent variables. With multicollinear data, the least square estimates are unbiased, but their variances become very large. By introducing a bias term into the equation, Ridge Regression reduces these standard errors. This regression method is quite powerful, and the resulting model is less susceptible to overfitting. The expression below shows the ridge regression estimate: β = (XᵀX + λI)⁻¹Xᵀy The lambda (λ) in the equation addresses the issue of multicollinearity. Lasso Regression Lasso Regression is one of the types of regression in machine learning that performs both regularization and feature selection. It restricts the absolute size of the regression coefficients, due to which the coefficient values get nearer to zero. The feature selection property of Lasso Regression allows a subset of features from the dataset to be selected to build the model. Only the required features are used in this regression, while the others are shrunk to zero, which helps in avoiding overfitting in the model. If the independent variables are highly collinear, then this regression technique keeps only one of them and shrinks the other variables to zero. The expression below shows the general form of a lasso objective: N⁻¹ ∑ f(xᵢ, yᵢ, α, β), summed over the N data points. Bayesian Regression In the Bayesian Regression method, the Bayes theorem is used to determine the value of the regression coefficients. In this linear regression technique, rather than finding the least-squares estimates, the posterior distribution of the model parameters is evaluated.
Bayesian Linear Regression is closely related to both Linear Regression and Ridge Regression but is more stable than simple Linear Regression. What are the terminologies used in Regression Analysis? When trying to understand the outcome of a regression analysis, it is important to understand the key terminologies used to describe the output. A comprehensive list of regression analysis terms is given below: Estimator: An estimator is an algorithm for generating estimates of parameters given a relevant dataset. Bias: An estimate is said to be unbiased when its expectation is the same as the value of the parameter that is being estimated. On the other hand, if the expectation is not the same as the value of the estimated parameter, it is said to be biased. Consistency: An estimator is consistent if the estimates it produces converge on the value of the true parameter as the sample size increases without limit. For example, suppose an estimator produces estimates θ̂ of a parameter θ. If the estimator is consistent, then for any small number ε we can make the probability that θ̂ lies within ε of θ as close to 1.0 as we like by drawing a sufficiently large sample. Efficiency: An estimator “A” is said to be more efficient than an estimator “B” when “A” has a smaller sampling variance, i.e. if the specific values of “A” are more tightly clustered around their expectation. Standard error of the Regression (SER): It is an estimate of the standard deviation of the error term in a regression model. Standard error of a regression coefficient: It is an estimate of the standard deviation of the sampling distribution for a particular coefficient. P-value: The p-value is the probability, when the null hypothesis is considered true, of drawing sample data that are as adverse to the null as the data actually drawn, or more so. When the p-value is small, there are two possibilities: either a low-probability, unrepresentative sample was drawn, or the null hypothesis is false. Significance level: For a hypothesis test, the significance level is the p-value threshold below which the null hypothesis is rejected. If the significance level is 1%, the null is rejected if and only if the p-value for the test is less than 0.01. The significance level can also be defined as the probability of making a type 1 error, i.e. rejecting a true null hypothesis. Multicollinearity: It is a situation where there is a high degree of correlation among the independent variables in a regression model; in other words, a situation where some of the X values are close to being linear combinations of other X values. Multicollinearity leads to large standard errors, so the regression model cannot produce precise parameter estimates. The problem is most serious when the aim is to estimate causal influences. T-test: The t-test is a common test of the null hypothesis that a particular regression parameter Bi has some specific value. F-test: The F-test is a method for jointly testing a set of linear restrictions on a regression model. Omitted variable bias: Omitted variable bias is a bias in the estimates of regression parameters. It generally occurs when a relevant independent variable is omitted from a model and the omitted variable is correlated with one or more of the included variables. Log variables: This is a transformation that allows a non-linear model to be estimated using the OLS method, by exchanging the natural log of a variable for the level of that variable.
It is performed for the dependent variable and/or one or more independent variables. Quadratic terms: This is another common transformation method, where both xi and xi² are included as regressors. The estimated effect of xi on y is calculated by finding the derivative of the regression equation with respect to xi. Interaction terms: These are the pairwise products of the “original” independent variables. The interaction terms allow for the possibility that the degree to which xi affects y depends on the value of some other variable xj. For example, the effect of experience xi on wages might depend on the gender xj of the worker. What are the tips to avoid common problems working with regression analysis? Regression is a very powerful statistical analysis that offers high flexibility but presents a variety of potential pitfalls. Let us see some tips to overcome the most common problems whilst working with regression analysis. Tip 1: Research Before Starting Before you start working with regression analysis, review the literature to understand the relevant variables, the relationships they have, and the expected coefficient signs and effect magnitudes. It will help you collect the correct data and allow you to specify the best regression equation. Tip 2: Always Prefer Simple Models Start with a simple model and then make it more complicated only when needed. When you have several models with similar predictive abilities, always prefer the simplest model, because it is likely to be the best model. Another significant benefit of simpler models is that they are easier to understand and explain to others. Tip 3: Correlation Does Not Imply Causation Always remember that correlation doesn't imply causation. Causation is a completely different thing as compared to correlation. In general, to establish causation, you need to perform a designed experiment with randomization. However, if you're using regression analysis on data that was not collected through a designed experiment, causation remains uncertain. Tip 4: Include Graphs, Confidence, and Prediction Intervals in the Results The presentation of your results can influence the way people interpret them. For instance, confidence intervals and statistical significance provide consistent information. According to a study, statistical reports that refer only to statistical significance lead to correct interpretations about 40% of the time; when the results also include confidence intervals, the percentage rises to about 95%. Tip 5: Check the Residual Plots Residual plots are the quickest and easiest method to examine the problems in a regression model and allow you to make adjustments. For instance, residual plots help display patterns when you cannot model curvature present in your data. Regression Analysis and The Real World Let us summarize what we have covered in this article so far: Regression Analysis and its importance. Difference between regression and classification. Regression Line and Regression Equation. How companies use regression analysis. When to use regression analysis. Assumptions in Regression Analysis. Simple and Multiple linear regression. R-squared: Representation, Interpretation, Calculation. Types of Regression. Terminologies used in Regression. How to avoid problems in regression. Regression Analysis is an interesting machine learning technique utilized extensively by enterprises to transform data into useful information.
It continues to be a significant asset to many leading sectors, from finance and education to banking, retail, medicine, and media.

Top Job Roles With Their Salary Data in the World of Data Science for 2020–2022

Data Science requires the expertise of professionals who possess the skill of collecting, structuring, storing, handling and analyzing data, allowing individuals and organizations to make decisions based on insights generated from the data. Data science is woven into the fabric of our daily lives in myriad ways that we may not even be aware of; starting from the online purchases we make, our social media feeds, the music we listen to or even the movie recommendations that we are shown online.  For several years in a row, the job of a data scientist has been hailed as the “hottest job of the 21st century”. Data scientists are among the highest paid resources in the IT industry. According to Glassdoor, the average data scientist’s salary is $113,436. With the growth of data, the demand for data science job roles in companies has been rising at an accelerated pace.  How Data Science is a powerful career choice The landscape of a data science job is promising and full of opportunities spanning different industries. The nature of the job allows an individual to take on flexible remote jobs and also to be self-employed.  The field of data science has grown exponentially in a very short time, as companies have come to realize the importance of gathering huge volumes of data from websites, devices, social media platforms and other sources, and using them for business benefits. Once the data is made available, data scientists use their analytical skills, evaluate data and extract valuable information that allows organizations to enhance their innovations. A data scientist is responsible for collecting, cleansing, modifying and analyzing data into meaningful insights. In the first phase of their career, a data scientist generally works as a statistician or data analyst. Over many years of experience, they evolve to be data scientists.  The ambit of data has been increasing rapidly which has urged companies to actively recruit data scientists to harness and leverage insights from the huge quantities of valuable data available, enabling efficiency in processes and operations and driving sales and growth.  In the future, data may also emerge as the turning point of the world economy. So, pursuing a career in data science would be very useful for a computer enthusiast, not only because it pays well but also since it is the new trend in IT. According to the Bureau of Labor Statistics (BLS), jobs for computer and information research scientists, as well as data scientists are expected to grow by 15 percent by the year 2028. Who is a Data Scientist & What Do They Do? Data Scientists are people with integral analytical data expertise together with complex problem-solving skills, besides the curiosity to explore a wide range of emerging issues.  They are considered to be the best of both the sectors – IT and business, which makes them extremely skilled individuals whose job roles straddle the worlds of computer science, statistics, and trend analysis. Because of this surging demand for data identification and analysis in various tech fields like AI, Machine Learning, and Data Science, the salary of a data scientist is one of the highest in the world. Requisite skills for a data scientist Before we see the different types of jobs in the data analytics field, we must be aware of the prerequisite skills that make up the foundation of a data scientist: Understanding of data – As the name suggests, Data Science is all about data. 
You need to understand the language of data and the most important question you must ask yourself is whether you love working with data and crunching numbers. And if your answer is “yes”, then you’re on the right track. Understanding of algorithms or logic – Algorithms are a set of instructions that are given to a computer to perform a particular task. All Machine Learning models are based on algorithms, so it is quite an essential prerequisite for a would-be data scientist to understand the logic behind it.  Understanding of programming – To be an expert in data science, you do not need to be an expert coder. However, you should have the foundational programming knowledge which includes variables, constants, data types, conditional statements, IO functions, client/server, Database, API, hosting, etc. If you feel comfortable working with these and you have your coding skills sorted, then you’re good to go. Understanding of Statistics – Statistics is one of the most significant areas in the field of Data Science. You should be well aware of terminologies such as mean, median, mode, standard deviation, distribution, probability, Bayes’ theorem, and different Statistical tests like hypothesis testing, chi-square, ANOVA, etc. Understanding of Business domain – If you do not have an in-depth working knowledge of the business domain, it will not really prove to be an obstacle in your journey of being a data scientist. However, if you have the primitive understanding of the specific business area you are working for, it will be an added advantage that can take you ahead. Apart from all the above factors, you need to have good communication skills which will help the entire team to get on the same page and work well together.Data Science Job Roles  Data science experts are in demand in almost every job sector, and are not confined to the IT industry alone.  Let us look at some major job roles, associated responsibilities , and the salary range: 1. Data ScientistsA Data Scientist’s job is as exciting as it is rewarding. With the help of Machine Learning, they handle raw data and analyze it with various algorithms such as regression, clustering, classification, and so on. They are able to arrive at insights that are essential for predicting and addressing complex business problems.  Responsibilities of Data Scientists The responsibilities of Data Scientists are outlined below: Collecting huge amounts of organized and unorganized data and converting them into useful insights. Using analytical skills like text analytics, machine learning, and deep learning to identify potential solutions which will help in the growth of organizations. Following a data-driven approach to solve complex problems.  Enhancing data accuracy and efficiency by cleansing and validating data. Using data visualization to communicate significant observations to the organization’s stakeholders. Data Scientists Salary Range According to Glassdoor, the average Data Scientist salary is $113,436 per annum. The median salary of an entry-level professional can be around $95,000 per annum. However, early level data scientists with 1 to 4 years' experience can get around $128,750 per annum while the median salary for those with more experience ranging around 5 to 9 years  can rise to an average of $165,000 per annum. 2. Data Engineers  A Data Engineer is the one who is responsible for building a specific software infrastructure for data scientists to work. 
They need to have in-depth knowledge of Big Data technologies like Hadoop, MapReduce, Hive, and SQL. Half of the work of Data Engineers is data wrangling, and it is advantageous if they have a software engineering background. Responsibilities of Data Engineers The responsibilities of Data Engineers are described below: Collecting data from different sources and then consolidating and cleansing it. Developing essential software for extracting, transforming, and loading data using SQL, AWS, and Big Data. Building data pipelines using machine learning algorithms and statistical techniques. Developing innovative ways to enhance data efficiency and quality. Developing, testing and maintaining data architecture. Required Skills for Data Engineers There are certain skill sets that data engineers need to have: Strong skills in analytics to manage and work with massive unorganized datasets. Powerful programming skills in trending languages like Python, Java, C++, Ruby, etc. Strong knowledge of database software like SQL and experience with relational databases. Managerial and organizational skills along with fluency in various databases. Data Engineers' Salary Range According to Glassdoor, the average salary of a Data Engineer is $102,864 in the USA. Reputed companies like Amazon, Airbnb, Spotify, Netflix and IBM value and pay high salaries to data engineers. Entry-level and mid-range data engineers get an average salary between $110,000 and $137,770 per annum. However, with experience, a data engineer can get up to $155,000 in a year. 3. Data Analyst As the name suggests, the job of a Data Analyst is to analyze data. A data analyst collects, processes, and executes statistical data analyses which help business users to develop meaningful insights. This process requires creating systems using programming languages like Python, R or SAS. Companies across IT, healthcare, automobile, finance and insurance employ Data Analysts to run their businesses efficiently. Responsibilities of Data Analysts The responsibilities of Data Analysts are described below: Identifying correlations and gathering valuable patterns through data mining and analyzing data. Working with customer-centric algorithms and modifying them to suit individual customer demands. Solving certain business problems by mapping data from numerous sources and tracing them. Creating customized models for customer-centric market strategies, customer tastes, and preferences. Conducting consumer data research and analytics by deploying statistical analysis. Data Analyst Salary Range According to Glassdoor, the national average salary of a Data Analyst is $62,453 in the United States. The salaries of an entry-level data analyst start at $34,500 per year, or about $2,875 per month. Glassdoor states that a junior data analyst earns around $70,000 per year, and experienced senior data analysts can expect to be paid around $107,000 per year, which is roughly $8,916 per month. Key Reasons to Become a Data Scientist Becoming a Data Scientist is a dream for many data enthusiasts. There are some basic reasons for this: 1. Highly in-demand field The job of Data Science is hailed as one of the most sought-after jobs for 2020, and according to an estimate, this field is predicted to generate around 11.5 million jobs by the year 2026. The demand for expertise in data science is increasing while the supply is too low. This shortage of qualified data scientists has escalated their demand in the market.
A survey by the MIT Sloan Management Review indicates that 43 percent of companies report that a major challenge to their growth has been a lack of data analytics skills. 2. Highly Paid & Diverse Roles Since data analytics forms the central part of decision-making, companies are willing to hire larger numbers of data scientists who can help them make the right decisions to boost business growth. Since it is a less saturated area with a mid-level supply of talent, various opportunities have emerged that require diverse skill sets. According to Glassdoor, in the year 2016 data science was the highest-paid field across industries. 3. Evolving workplace environments With the arrival of technologies like Artificial Intelligence and Robotics, which fall under the umbrella of data science, a vast majority of manual tasks have been replaced with automation. Machine Learning has made it possible to train machines to perform repetitive tasks, freeing up humans to focus on critical problems that need their attention. Many new and exciting technologies have emerged within this field, such as Blockchain, Edge Computing, Serverless Computing, and others. 4. Improving product standards The rigorous use of Machine Learning algorithms for regression, classification and recommendation problems – decision trees, random forests, neural networks, naive Bayes, etc. – has boosted the customer experiences that companies are able to offer. One of the best examples of such development is the e-commerce sites that use intelligent Recommendation Systems to suggest products and provide customer-centric insights based on past purchases. Data Scientists serve as trusted advisers to such companies by identifying the preferred target audience and shaping marketing strategies. 5. Helping the world In today's world, almost everything revolves around data. Data Scientists extract hidden information from massive lumps of data, which helps in decision making across industries ranging from finance and healthcare to manufacturing, pharma, and engineering. Organizations are equipped with data-driven insights that boost productivity and enhance growth, even as they optimize resources and mitigate potential risks. Data Science catalyzes innovation and research, bringing positive changes across the world we live in. Factors Affecting a Data Scientist's Salary The salaries of Data Scientists can depend upon several factors. Let us study them one by one and understand their significance: Data Scientist Salary by Location The number of job opportunities and the national data scientist salary for data innovators was the highest in Switzerland in the year 2020, followed by the Netherlands and the United Kingdom. However, since Silicon Valley in the United States is the hub of new technological innovations, it is considered to generate the most jobs for startups in the world, followed by Bangalore in India. A data scientist's salary in Silicon Valley or Bangalore is likely to be higher than in other regions. Below are the highest paying countries for data scientist roles along with their average annual data science salary:
Switzerland – $115,475
Netherlands – $68,880
Germany – $64,024
United Kingdom – $59,781
Spain – $30,050
Italy – $37,785
Data Scientist Salary by Experience A career in the field of data science is very appealing to young IT professionals. Starting salaries are very lucrative, and there is incremental growth in salary with experience.
Salaries of a data scientist depend on expertise as well as years of experience: Entry-level data scientist salary – The median entry-level salary for a data scientist is around $95,000 per year, which is quite high. Mid-level data scientist salary – The median salary for a mid-level data scientist with around 1 - 4 years of experience is $128,750 per year. If the data scientist is in a managerial position, the average salary rises up to $185,000 per year. Experienced data scientist salary – The median salary for an experienced data scientist with around 5 - 9 years of experience is $128,750 per year, whereas the median salary of an experienced manager is much higher, around $250,000 per year. Data Scientist Salary by Skills There are some core competencies that will help you to shine in your career as a Data Scientist, and if you want to get the edge over your peers you should consider polishing up these skills: Python is the most crucial and coveted skill which data scientists must be familiar with, followed by R. The average salary in the US for Python programmers is $120,365 per annum. If you are well versed in both Data Science and Big Data, instead of just one among them, your salary is likely to increase by at least 25 percent. Users of the Statistical Analysis System (SAS) get a salary of around $77,842, whereas users of statistical analysis software like SPSS have a pay scale of $61,452 per year. Machine Learning Engineers on average earn around $111,855 per year. However, with more experience in Machine Learning along with knowledge of Python, you can earn around $146,085 per annum. A Data Scientist with domain knowledge of Artificial Intelligence can earn an annual salary between $100,000 and $150,000. Extra skills in programming and innovative technologies have always been a value-add that can enhance your employability. Pick skills that are in demand to see your career graph soar. Data Scientist Salary by Companies Some of the highest paying companies in the field of Data Science are tech giants like Facebook, Amazon and Apple, and service companies like McGuireWoods, Netflix or Airbnb. Below is a list of top companies with the highest paying salaries:
McGuireWoods – $165,114
Amazon – $164,114
Airbnb – $154,879
Netflix – $147,617
Apple – $144,490
Twitter – $144,341
Walmart – $144,198
Facebook – $143,189
eBay – $143,005
Salaries of Other Related Roles Various other job roles associated with Data Science are also equally exciting and rewarding. Let us look at some of them and their salaries:
Machine Learning Engineer – $114,826
Machine Learning Scientist – $114,121
Applications Architect – $113,757
Enterprise Architect – $110,663
Data Architect – $108,278
Infrastructure Architect – $107,309
Business Intelligence Developer – $81,514
Statistician – $76,884
Conclusion Let us look at what we have learned in this article so far: What is Data Science? The job of a Data Scientist Pre-requisite skills for a Data Scientist Different job roles Key reasons for becoming a Data Scientist Salary depending upon different factors Salary of other related roles The field of Data Science is ripe in terms of opportunities for Data Scientists, Data Engineers, and Data Analysts. The figures mentioned in this article are not set in stone and may vary depending upon the skills you possess, the experience you have and various other factors. With more experience and skills, your salary is bound to increase by a certain percentage every year.
Data science is a field that will revolutionize the world in the coming years, and you can have a share of this very lucrative pie with the right educational qualifications, skills, experience and training.

How To Become A Data Analyst In 2022?

In 2022, Data Analysis has become one of the core functions in any organization. This is a highly sought-after role that has evolved immensely in the past few years. But what is Data Analysis? What do Data Analysts do? How do you become a Data Analyst in 2022? What skills does one need to be a Data Analyst? Many such questions come to mind when we talk about this profession. Let's walk through the answers to all of them to ensure we have a clear picture.

What is Data Analytics?
Through Data Analysis, information collected from different sources is analyzed for specific goals and used to make informed decisions for the organization. Data Analysis is not only used for research; it helps organizations learn more about their customers, develop marketing strategies and optimize product development, to name just a few areas where it makes an impact. To be precise, there are four types of Data Analytics:

Descriptive Analytics: Analysts examine past data like monthly sales, monthly revenue, website traffic and more to find the trend, and then draft a description or summary of the performance of the firm or website. This type of analytics uses arithmetic operations, mean, median, max, percentages and other statistical summaries (a small illustrative sketch follows this list).
Diagnostic Analytics: As the name suggests, here we diagnose the data and find out the reasons behind a particular trend, issue or scenario. If a company is faced with negative results, this type of analysis helps find the main causes of the decline in performance, against which decisions and actions can be taken.
Predictive Analytics: This type of analytics helps predict future outcomes by analyzing past data and trends, so that companies can take proactive action for better results. Predictive analysis also helps forecast sales, demand, fraud and failures, and set budgets and other resources accordingly.
Prescriptive Analytics: This type of analytics helps determine what action the company should take next in response to the situation, to keep the business going and growing.
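To make the descriptive flavour of analytics more concrete, here is a minimal sketch in Python of the kind of statistical summaries mentioned above (mean, median, max and percentage change), using pandas. The table, column names and figures are made up purely for illustration and are not taken from any real dataset.

import pandas as pd

# Made-up monthly performance data, purely for illustration
sales = pd.DataFrame({
    "month": ["Jan", "Feb", "Mar", "Apr", "May", "Jun"],
    "revenue": [12000, 15000, 11000, 18000, 21000, 17500],
    "website_visits": [34000, 41000, 30000, 52000, 61000, 49000],
})

# Descriptive analytics: summarize past performance
print("Mean revenue:  ", sales["revenue"].mean())
print("Median revenue:", sales["revenue"].median())
print("Max revenue:   ", sales["revenue"].max())

# Month-over-month percentage change in revenue
sales["revenue_pct_change"] = sales["revenue"].pct_change() * 100
print(sales[["month", "revenue", "revenue_pct_change"]])

# describe() gives count, mean, std, min, quartiles and max in one call
print(sales[["revenue", "website_visits"]].describe())

A real descriptive report would of course pull such figures from the organization's own data sources rather than a hand-typed table, but the summaries themselves are exactly these kinds of simple aggregates.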
Why do we need Data Analysts?
Organizations across different fields and sectors rely on data analysis to take important decisions: to develop a new product, to forecast sales for the near future, or to identify new markets or new customers to target. Data analysis is also used to assess business performance based on present data and find inefficiencies in the organization. It is not only industries and businesses that use data analysis; political parties and other groups also use it to find out about opportunities as well as challenges.

What does a data analyst do?
There are several functions which an analyst performs, though some of them depend on the type of business and organization. Generally, a data analyst has the following responsibilities:

Collecting data from primary and secondary sources and arranging it in a proper sequence.
Cleaning and processing the data as required; a data analyst may need to treat missing values, clean invalid or wrong data and remove unwanted information (a small illustrative sketch of this step follows the list).
Using different kinds of statistical tools like R, Python, SPSS or SAS to interpret the data collected.
Adjusting the data for upcoming trends or changes, such as seasonal trends, and then making interpretations.
Preparing a data analysis report.
Identifying opportunities and threats from the analyzed data and apprising the organization of the same.
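As an illustration of the cleaning responsibilities listed above, here is a minimal sketch assuming a small made-up customer table with missing values and invalid entries. The column names, values and cleaning rules are hypothetical; real cleaning logic depends entirely on the dataset at hand.

import pandas as pd
import numpy as np

# Made-up raw data with typical problems: missing values and invalid entries
raw = pd.DataFrame({
    "customer_id": [101, 102, 103, 104, 105],
    "age": [34, -5, 29, np.nan, 41],          # -5 is clearly invalid
    "monthly_spend": [250.0, 300.0, np.nan, 120.0, 410.0],
    "region": ["North", "South", None, "East", "north"],
})

clean = raw.copy()

# Treat invalid ages as missing, then fill missing ages with the median
clean.loc[clean["age"] < 0, "age"] = np.nan
clean["age"] = clean["age"].fillna(clean["age"].median())

# Fill missing spend with the column mean
clean["monthly_spend"] = clean["monthly_spend"].fillna(clean["monthly_spend"].mean())

# Standardize text categories and drop rows where the region is still unknown
clean["region"] = clean["region"].str.title()
clean = clean.dropna(subset=["region"])

print(clean)

Whether to fill, drop or flag problem records is a judgment call the analyst makes based on the business context; the point of the sketch is only to show the kind of routine work the "cleaning and processing" step involves.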
Now that you know what areas a Data Analyst works on, let us move to the skills and knowledge you would require to get started in this field.

What are the skills necessary to be a Data Analyst?
Broadly, a data analyst needs two types of skills:

Technical skills: Knowledge of languages and tools like R, SQL, Microsoft Excel and Tableau, along with mathematical, statistical and data visualization skills. These technical skills help an analyst actually work with the data and present the final outcome in a form that is useful for the firm, such as tables, graphs and charts.
Decision making: This is essential for presenting the outcome and taking executives through the various changes, trends, demands and downturns. Deep analysis is required to take logical, factual and beneficial decisions for the firm. Data analysts must be able to think strategically and get a 360-degree view of the situation before suggesting the way forward.

After acquiring the above-mentioned skills, it is essential to keep yourself updated with the latest trends in the industry, so a mindset of continuous learning is a must.

How to become a data analyst in 2022?
The pandemic years changed all the definitions of a business and its processes. COVID-19 put companies across the world in a tailspin, forcing them to rethink their business strategies in order to cope with the evolving challenges thrown up by the pandemic. Some companies that were market leaders in their domain were unable to cope, and many even had to close down. The question therefore arises: in such an uncertain scenario, with challenges around every corner, is it even prudent to consider stepping into the role of a Data Analyst at this juncture?

The answer is "YES". This is the best time to be a data analyst, because organizations everywhere are looking for expert analysts who can guide them in making the right decisions and help the organization survive through the pandemic and beyond. Data analysts can perform detailed sales forecasting or carry out a complete market analysis to make the right predictions for future growth, and companies need smart sales and marketing strategies to survive and thrive in the long run.

If you want to shape your career in data analytics:

You must have a degree in Mathematics, Economics, Engineering, Statistics or another field which emphasizes statistical and analytical skills.
You must know some of the data analytics tools or skills mentioned above, like R, SQL, Tableau, Data Warehousing, Data Visualization, Data Mining and advanced Microsoft Excel.
You should consider some good certifications in the above-mentioned skills.
You may also consider a master's degree in the field of data analytics.

Let us now take you through the scope of Data Analysis in 2022.

What is the scope of data analytics in 2022?
The world is witnessing a surge in demand for data analytics services. According to a report, there are expected to be 250,000 new openings in the Data Analytics field in 2022, almost 60% higher than the demand in 2019-20. To stay ahead of the competition, organizations are employing Data Analysts, and the demand for experts in the field is only set to rise. According to another report published in 2019, there were 150,000 vacant jobs in the Data Analytics sector because of a lack of available talent. This is a lucrative field, and professionals who have expertise and experience can easily climb to the top in a short time. A report by IBM predicts that by 2022, Data Science and Analytics jobs would grow to nearly 350,000.

What are the sectors in which Data Science jobs are expected to grow in India in 2022?
Though the need for data analytics is growing across every sector, a few sectors are more in demand than others. These include:

Aviation sector: uses data analysis for pricing and route optimization.
Agriculture sector: analyses data to forecast output and pricing.
Cyber security: global companies are adopting data engineering and data analysis for anomaly and intrusion detection.
Genomics: data analytics is used to study the sequence of genomes, and is heavily used to diagnose abnormalities and identify diseases.

Conclusion
If you would like to enter the field of Data Analytics, there's no time like now! Data is useless without the right professional to analyze it. Leading companies leverage the power of analytics to improve their decision making and fuel business growth, and are always looking to employ bright and talented professionals with the capabilities they need. Opportunities are plentiful and the rewards are immense, so take the first step and start honing the skills that can help you fulfil your dream!