Search

Machine learning Filter

Boosting and AdaBoost in Machine Learning

Ensemble learning is a strategy in which a group of models are used to find a solution to a challenging problem, by using a strategy and combining diverse machine learning models into one single predictive model.In general, ensemble methods are mainly used for improving the overall performance accuracy of a model and combine several different models, also known as the base learners, to predict the results, instead of using a single model.In one of the articles related to ensemble learning, we have already discussed about the popular ensemble method, Bootstrap Aggregation. Bagging tries to implement similar learners on small sample populations and then takes a mean of all the predictions. It combines Bootstrapping and Aggregation to form one ensemble model. It basically reduces the variance error and helps to avoid overfitting. In this article we will look into the limitations of bagging and how a boosting algorithm can be used to overcome those limitations. We will also learn about various types of boosting algorithms and implement one of them in Python. Let’s get started.What are the limitations of Bagging?Let us recall the concept of bagging and consider a binary classification problem. We are either classifying an observation as 0 or as 1.In bagging, T bootstrap samples are selected, a classifier is fitted on each of these samples, and the models are trained in parallel. In a Random Forest, decision trees are trained in parallel. Then the results of all classifiers are averaged into a bagging classifier:Formula for a Bagging ClassifierLet us consider 3 classifiers and the result for the classification can either be right or wrong. If we plot the results of the 3 classifiers, there are regions in which the classifiers will be wrong. These regions are represented in red in the figure below.Example case in which Bagging works wellThe above example works pretty well as when one classifier is wrong, the two others are correct. By voting classifier, you can achieve a better accuracy. However, there are cases where Bagging does not work properly, when all classifiers are mistaken to be in the same region.Due to this reason, the intuition behind the discovery of Boosting was the following :instead of training parallel models, one should train models sequentiallyeach model should focus on where the performance of the previous classifier was poorWith this intuition, Boosting algorithm was introduced. Let us understand what Boosting is all about.What is Boosting?Boosting is an ensemble modeling technique which attempts to build a strong classifier from the number of weak classifiers. It is done by building a model using weak models in series. First, a model is built from the training data. Then the second model is built which tries to correct the errors present in the first model. This procedure is continued and models are added until either the complete training data set is predicted correctly or the maximum number of models are added.Boosting being a sequential process, each subsequent model attempts to correct the errors of the previous model. It is focused on reducing the bias unlike bagging. It makes the boosting algorithms prone to overfitting. To avoid overfitting, parameter tuning plays an important role in boosting algorithms, which will be discussed in the later part of this article. Some examples of boosting are XGBoost, GBM, ADABOOST etc..How can boosting identify weak learners?To find weak learners, we apply base learning (ML) algorithms with a different distribution. As each time base learning algorithm is applied, it generates a new weak prediction rule. This is an iterative process. After many iterations, the boosting algorithm combines these weak rules into a single strong prediction rule.How do we choose a different distribution for each round?Step 1: The base learner takes all the distributions and assigns equal weight or attention to each observation.Step 2: If there is any prediction error caused by first base learning algorithm, then we pay higher attention to observations having prediction error. Then, we apply the next base learning algorithm.Step 3: Iterate Step 2 till the limit of base learning algorithm is reached or higher accuracy is achieved.Finally, it combines the outputs from weak learner and creates a strong learner which eventually improves the prediction power of the model. Boosting gives higher focus to examples which are mis-classified or have higher errors by preceding weak rules.How would you classify an email as SPAM or not?Our initial approach would be to identify ‘SPAM’ and ‘NOT SPAM’ emails using the following criteria. If: Email has only one image file (promotional image), It’s a SPAM.Email has only link(s), It’s a SPAM.Email body consists of sentences like “You won a prize money of $ xxxxxx”, It’s a SPAM.Email from our official domain “www.knowledgehut.com” , Not a SPAM.Email from known source, Not a SPAM.Individually, these rules are not powerful enough to classify an email into ‘SPAM’ or ‘NOT SPAM’. Therefore, these rules are called as weak learner.To convert weak learner to strong learner, we’ll combine the prediction of each weak learner using methods like:Using average/ weighted averageConsidering prediction has higher voteExample: Above, we have defined 5 weak learners. Out of these 5, 3 are voted as ‘SPAM’ and 2 are voted as ‘Not a SPAM’. In this case, by default, we’ll consider an email as SPAM because we have higher(3) vote for ‘SPAM’Boosting helps in training a series of low performing algorithms, called weak learners, simply by adjusting the error metric over time. Weak learners are considered to be those algorithms whose error rate is slightly under 50% as illustrated below:Weighted errorsLet us consider data points on a 2D plot. Some of the data points will be well classified, others won’t. The weight attributed to each error when computing the error rate is 1/n where n is the number of data points to classify.Now if we apply some weight to the errors :You might now notice that we give more weight to the data points that are not well classified. An illustration of the weighting process is mentioned below:Example of weighting processIn the end, we want to build a strong classifier that may look like the figure mentioned below:Strong ClassifierTree stumpsThere might be a question in your mind about how many classifiers should one implement in order to ensure it works well. And how is each classifier chosen at each step?Well, Tree stumps defines a 1-level decision tree. At each step, we need to find the best stump, i.e the best data split, which will minimize the overall error. You can see a stump as a test, in which the assumption is that everything that lies on one side belongs to class 1, and everything that lies on the other side belongs to class 0.Many such combinations are possible for a tree stump. Let us look into an example to understand how many combinations we face.3 data points to splitWell there are 12 possible combinations. Let us check how.12 StumpsThere are 12 possible “tests” we could make. The “2” on the side of each separating line simply represents the fact that all points on one side could be points that belong to class 0, or to class 1. Therefore, there are 2 tests embedded in it.At each iteration t, we will choose ht the weak classifier that splits best the data, by reducing the overall error rate the most. Recall that the error rate is a modified error rate version that takes into account what has been introduced before.Finding the best splitThe best split is found by identifying at each iteration t, the best weak classifier ht, generally a decision tree with 1 node and 2 leaves (a stump). Let us consider an example of credit defaulter, i.e whether a person who borrowed money will return or not.Identifying the best splitIn this case, the best split at time t is to stump on the Payment history, since the weighted error resulting from this split is minimum.Simply note that decision tree classifiers like these ones can in practice be deeper than a simple stump. This will be considered as a hyper-parameter.Combining classifiersIn the next step we combine the classifiers into a Sign classifier, and depending on which side of the frontier a point will stand, it is classified as 0 or 1. It can be achieved by:Combining classifiersYou can improve the classifier by adding weights on each classifier, to avoid giving the same importance to the different classifiers.AdaBoostPseudo-codePseudo-codeThe key elements to keep in mind are:Z is a constant whose role is to normalize the weights so that they add up to 1αt is a weight that we apply to each classifierThis algorithm is called AdaBoost or Adaptive Boosting. This is one of the most important algorithms among all boosting methods.ComputationBoosting algorithms are generally fast to train, although we consider every stump possible and compute exponentials recursively.Well, if we choose αt and Z properly, the weights that are supposed to change at each step simplify to:Weights after choice of α and ZTypes of Boosting AlgorithmsUnderlying engine used for boosting algorithms can be anything.  It can be decision stamp, margin-maximizing classification algorithm etc. There are many boosting algorithms which use other types of engines such as: AdaBoost (Adaptive Boosting)Gradient Tree BoostingXGBoostIn this article, we will focus on AdaBoost and Gradient Boosting followed by their respective Python codes and a little bit about XGBoost.Where are Boosted algorithms required?Boosted algorithms are mainly used when there is plenty of data to make a prediction and high predictive power is expected. It is used to reduce bias and variance in supervised learning. It combines multiple weak predictors to build strong predictor.The underlying engine used for boosting algorithms can be anything. For instance, AdaBoost is a boosting done on Decision stump. There are many other boosting algorithms which use other types of engine such as:GentleBoostGradient BoostingLPBoostBrownBoostAdaptive BoostingAdaptive Boosting, or most commonly known AdaBoost, is a Boosting algorithm. This algorithm uses the method to correct its predecessor. It pays more attention to under fitted training instances by the previous model. Thus, at every new predictor the focus is more on the complicated cases more than the others.It fits a sequence of weak learners on different weighted training data. It starts by predicting the original data set and gives equal weight to each observation. If prediction is incorrect using the first learner, then it gives higher weight to observation which have been predicted incorrectly. Being an iterative process, it continues to add learner(s) until a limit is reached in the number of models or accuracy.Mostly, AdaBoost uses decision stamps. But, we can use any machine learning algorithm as base learner if it accepts weight on training data set. We can use AdaBoost algorithms for both classification and regression problems.Let us consider the example of the image mentioned above. In order to build an AdaBoost classifier, consider that as a first base classifier a Decision Tree algorithm is trained to make predictions on our training data. Applying the following methodology of AdaBoost, the weight of the misclassified training instances is increased. Then the second classifier is trained and the updated weights are acknowledged. It repeats the procedure over and over again.At the end of every model prediction we end up boosting the weights of the misclassified instances so that the next model does a better job on them, and so on.This sequential learning technique might sound similar to Gradient Descent, except that instead of tweaking a single predictor’s parameter to minimize the cost function, AdaBoost adds predictors to the ensemble, gradually making it better.One disadvantage of this algorithm is that the model cannot be parallelized since each predictor can only be trained after the previous one has been trained and evaluated.Below are the steps for performing the AdaBoost algorithm:Initially, all observations are given equal weights.A model is built on a subset of data.Using this model, predictions are made on the whole dataset.Errors are calculated by comparing the predictions and actual values.While creating the next model, higher weights are given to the data points which were predicted incorrectly.Weights can be determined using the error value. For instance,the higher the error the more is the weight assigned to the observation.This process is repeated until the error function does not change, or the maximum limit of the number of estimators is reached.Hyperparametersbase_estimators: specify the base type estimator, i.e. the algorithm to be used as base learner.n_estimators: It defines the number of base estimators, where the default is 10 but you can increase it in order to obtain a better performance.learning_rate: same impact as in gradient descent algorithmmax_depth: Maximum depth of the individual estimatorn_jobs: indicates to the system how many processors it is allowed to use. Value of ‘-1’ means there is no limit;random_state: makes the model’s output replicable. It will always produce the same results when you give it a fixed value as well as the same parameters and training data.Now, let us take a quick look at how to use AdaBoost in Python using a simple example on handwritten digit recognition.import pandas as pd import numpy as np import matplotlib.pyplot as plt from sklearn.ensemble import AdaBoostClassifier from sklearn.tree import DecisionTreeClassifier from sklearn.metrics import accuracy_score from sklearn.model_selection import cross_val_score from sklearn.model_selection import cross_val_predict from sklearn.model_selection import train_test_split from sklearn.model_selection import learning_curve from sklearn.datasets import load_digitsLet us load the data :dataset = load_digits() X = dataset['data'] y = dataset['target']X contains arrays of length 64 which are simply flattened 8x8 images. The aim of this dataset is to recognize handwritten digits. Let’s take a look at a given handwritten digit:plt.imshow(X[4].reshape(8,8))If we stick to a Decision Tree Classifier of depth 1 (a stump), here’s how to implement AdaBoost classifier:reg_ada = AdaBoostClassifier(DecisionTreeClassifier(max_depth=1)) scores_ada = cross_val_score(reg_ada, X, y, cv=6) scores_ada.mean()0.2636257855582272And it should head a result of around 26%, which can largely be improved. One of the key parameters is the depth of the sequential decision tree classifiers. How does accuracy improve with depth of the decision trees?score = [] for depth in [1,2,10] : reg_ada = AdaBoostClassifier(DecisionTreeClassifier(max_depth=depth)) scores_ada = cross_val_score(reg_ada, X, y, cv=6) score.append(scores_ada.mean()) score[0.2636257855582272, 0.5902852679072207, 0.9527524912410157]And the maximal score is reached for a depth of 10 in this simple example, with an accuracy of 95.3%.Gradient BoostingThis is another very popular Boosting algorithm which works pretty similar to what we’ve seen for AdaBoost. Gradient Boosting works by sequentially adding the previous predictors underfitted predictions to the ensemble, ensuring the errors made previously are corrected.The difference lies in what it does with the underfitted values of its predecessor. Contrary to AdaBoost, which tweaks the instance weights at every interaction, this method tries to fit the new predictor to the residual errors made by the previous predictor.So that you can understand Gradient Boosting it is important to understand Gradient Descent first.Below are the steps for performing the Gradient Boosting algorithm:A model is built on a subset of data.Using this model, predictions are made on the whole dataset.Errors are calculated by comparing the predictions and actual values.A new model is created using the errors calculated as target variable. Our objective is to find the best split to minimize the error.The predictions made by this new model are combined with the predictions of the previous.New errors are calculated using this predicted value and actual value.This process is repeated until the error function does not change, or the maximum limit of the number of estimators is reached.Hyperparametersn_estimators: It controls the number of weak learners.Learning_rate: Controls the contribution of weak learners in the final combination. There is a trade-off between learning_rate and n_estimators.min_samples_split: Minimum number of observation which is required in a node to be considered for splitting. It is used to control overfitting.min_samples_leaf: Minimum samples required in a terminal or leaf node. Lower values should be chosen for imbalanced class problems since the regions in which the minority class will be in the majority will be very small.min_weight_fraction_leaf: similar to the previous but defines a fraction of the total number of observations instead of an integer.max_depth : maximum depth of a tree. Used to control overfitting.max_lead_nodes : maximum number of terminal leaves in a tree. If this is defined max_depth is ignored.max_features : number of features it should consider while searching for the best split.You can tune loss function for better performance.Implementation in PythonYou can find Gradient Boosting function in Scikit-Learn’s library.# for regression from sklearn.ensemble import GradientBoostingRegressor model = GradientBoostingRegressor(n_estimators=3,learning_rate=1) model.fit(X,Y) # for classification from sklearn.ensemble import GradientBoostingClassifier model = GradientBoostingClassifier() model.fit(X,Y)XGBoostXG Boost or Extreme Gradient Boosting is an advanced implementation of the Gradient Boosting. This algorithm has high predictive power and is ten times faster than any other gradient boosting techniques. Moreover, it includes a variety of regularization which reduces overfitting and improves overall performance.AdvantagesIt implements regularization which helps in reducing overfit (Gradient Boosting does not have);It implements parallel processing which is much faster than Gradient Boosting;Allows users to define custom optimization objectives and evaluation criteria adding a whole new dimension to the model;XGBoost has an in-built routine to handle missing values;XGBoost makes splits up to the max_depth specified and then starts pruning the tree backwards and removes splits beyond which there is no positive gain;XGBoost allows a user to run a cross-validation at each iteration of the boosting process and thus it is easy to get the exact optimum number of boosting iterations in a single run.Boosting algorithms represent a different machine learning perspective which is turning a weak model to a stronger one to fix its weaknesses. I hope this article helped you understand how boosting works.We have covered most of the topics related to algorithms in our series of machine learning blogs, click here. If you are inspired by the opportunities provided by machine learning, enroll in our  Data Science and Machine Learning Courses for more lucrative career options in this landscape.Your one-stop-shop for Machine Learning is just a click away. Access our live online training and find easy solutions to all your queries here.
Rated 4.5/5 based on 12 customer reviews

Boosting and AdaBoost in Machine Learning

8266
Boosting and AdaBoost in Machine Learning

Ensemble learning is a strategy in which a group of models are used to find a solution to a challenging problem, by using a strategy and combining diverse machine learning models into one single predictive model.

In general, ensemble methods are mainly used for improving the overall performance accuracy of a model and combine several different models, also known as the base learners, to predict the results, instead of using a single model.

In one of the articles related to ensemble learning, we have already discussed about the popular ensemble method, Bootstrap Aggregation. Bagging tries to implement similar learners on small sample populations and then takes a mean of all the predictions. It combines Bootstrapping and Aggregation to form one ensemble model. It basically reduces the variance error and helps to avoid overfitting. In this article we will look into the limitations of bagging and how a boosting algorithm can be used to overcome those limitations. We will also learn about various types of boosting algorithms and implement one of them in Python. Let’s get started.

What are the limitations of Bagging?

Let us recall the concept of bagging and consider a binary classification problem. We are either classifying an observation as 0 or as 1.

In bagging, T bootstrap samples are selected, a classifier is fitted on each of these samples, and the models are trained in parallel. In a Random Forest, decision trees are trained in parallel. Then the results of all classifiers are averaged into a bagging classifier:

What are the limitations of BaggingFormula for a Bagging ClassifierLet us consider 3 classifiers and the result for the classification can either be right or wrong. If we plot the results of the 3 classifiers, there are regions in which the classifiers will be wrong. These regions are represented in red in the figure below.
Example case in which Bagging works wellExample case in which Bagging works wellThe above example works pretty well as when one classifier is wrong, the two others are correct. By voting classifier, you can achieve a better accuracy. However, there are cases where Bagging does not work properly, when all classifiers are mistaken to be in the same region.

Bagging limitationsDue to this reason, the intuition behind the discovery of Boosting was the following :

  • instead of training parallel models, one should train models sequentially
  • each model should focus on where the performance of the previous classifier was poor

With this intuition, Boosting algorithm was introduced. Let us understand what Boosting is all about.

What is Boosting?

Boosting is an ensemble modeling technique which attempts to build a strong classifier from the number of weak classifiers. It is done by building a model using weak models in series. First, a model is built from the training data. Then the second model is built which tries to correct the errors present in the first model. This procedure is continued and models are added until either the complete training data set is predicted correctly or the maximum number of models are added.

Boosting being a sequential process, each subsequent model attempts to correct the errors of the previous model. It is focused on reducing the bias unlike bagging. It makes the boosting algorithms prone to overfitting. To avoid overfitting, parameter tuning plays an important role in boosting algorithms, which will be discussed in the later part of this article. Some examples of boosting are XGBoost, GBM, ADABOOST etc..

How can boosting identify weak learners?

To find weak learners, we apply base learning (ML) algorithms with a different distribution. As each time base learning algorithm is applied, it generates a new weak prediction rule. This is an iterative process. After many iterations, the boosting algorithm combines these weak rules into a single strong prediction rule.

How do we choose a different distribution for each round?

Step 1: The base learner takes all the distributions and assigns equal weight or attention to each observation.
Step 2: If there is any prediction error caused by first base learning algorithm, then we pay higher attention to observations having prediction error. Then, we apply the next base learning algorithm.
Step 3: Iterate Step 2 till the limit of base learning algorithm is reached or higher accuracy is achieved.

Finally, it combines the outputs from weak learner and creates a strong learner which eventually improves the prediction power of the model. Boosting gives higher focus to examples which are mis-classified or have higher errors by preceding weak rules.

How would you classify an email as SPAM or not?

Our initial approach would be to identify ‘SPAM’ and ‘NOT SPAM’ emails using the following criteria. If: 

  1. Email has only one image file (promotional image), It’s a SPAM.
  2. Email has only link(s), It’s a SPAM.
  3. Email body consists of sentences like “You won a prize money of $ xxxxxx”, It’s a SPAM.
  4. Email from our official domain “www.knowledgehut.com” , Not a SPAM.
  5. Email from known source, Not a SPAM.

Individually, these rules are not powerful enough to classify an email into ‘SPAM’ or ‘NOT SPAM’. Therefore, these rules are called as weak learner.

To convert weak learner to strong learner, we’ll combine the prediction of each weak learner using methods like:

  • Using average/ weighted average
  • Considering prediction has higher vote

Example: Above, we have defined 5 weak learners. Out of these 5, 3 are voted as ‘SPAM’ and 2 are voted as ‘Not a SPAM’. In this case, by default, we’ll consider an email as SPAM because we have higher(3) vote for ‘SPAM’

Boosting helps in training a series of low performing algorithms, called weak learners, simply by adjusting the error metric over time. Weak learners are considered to be those algorithms whose error rate is slightly under 50% as illustrated below:

Classifier error rate

Weighted errors

Let us consider data points on a 2D plot. Some of the data points will be well classified, others won’t. The weight attributed to each error when computing the error rate is 1/n where n is the number of data points to classify.

Now if we apply some weight to the errors :

Weighted errors

You might now notice that we give more weight to the data points that are not well classified. An illustration of the weighting process is mentioned below:

Example of weighting process

Example of weighting process
In the end, we want to build a strong classifier that may look like the figure mentioned below:

Strong Classifier
Strong Classifier
Tree stumps

There might be a question in your mind about how many classifiers should one implement in order to ensure it works well. And how is each classifier chosen at each step?

Well, Tree stumps defines a 1-level decision tree. At each step, we need to find the best stump, i.e the best data split, which will minimize the overall error. You can see a stump as a test, in which the assumption is that everything that lies on one side belongs to class 1, and everything that lies on the other side belongs to class 0.

Many such combinations are possible for a tree stump. Let us look into an example to understand how many combinations we face.

Tree stumps

3 data points to split
Well there are 12 possible combinations. Let us check how.

Tree Stumps

12 Stumps
There are 12 possible “tests” we could make. The “2” on the side of each separating line simply represents the fact that all points on one side could be points that belong to class 0, or to class 1. Therefore, there are 2 tests embedded in it.

At each iteration t, we will choose ht the weak classifier that splits best the data, by reducing the overall error rate the most. Recall that the error rate is a modified error rate version that takes into account what has been introduced before.

Finding the best split

The best split is found by identifying at each iteration t, the best weak classifier ht, generally a decision tree with 1 node and 2 leaves (a stump). Let us consider an example of credit defaulter, i.e whether a person who borrowed money will return or not.

Finding the best split

Identifying the best split
In this case, the best split at time t is to stump on the Payment history, since the weighted error resulting from this split is minimum.

Simply note that decision tree classifiers like these ones can in practice be deeper than a simple stump. This will be considered as a hyper-parameter.

Combining classifiers

In the next step we combine the classifiers into a Sign classifier, and depending on which side of the frontier a point will stand, it is classified as 0 or 1. It can be achieved by:

Combining classifiers

Combining classifiers
You can improve the classifier by adding weights on each classifier, to avoid giving the same importance to the different classifiers.

AdaBoost

AdaBoost
Pseudo-codePseudo-code
Pseudo-code
The key elements to keep in mind are:

  • Z is a constant whose role is to normalize the weights so that they add up to 1
  • αt is a weight that we apply to each classifier

This algorithm is called AdaBoost or Adaptive Boosting. This is one of the most important algorithms among all boosting methods.

Computation

Boosting algorithms are generally fast to train, although we consider every stump possible and compute exponentials recursively.

Well, if we choose αt and Z properly, the weights that are supposed to change at each step simplify to:

Weights after choice of α and Z

Weights after choice of α and Z

Types of Boosting Algorithms

Underlying engine used for boosting algorithms can be anything.  It can be decision stamp, margin-maximizing classification algorithm etc. There are many boosting algorithms which use other types of engines such as: 

  1. AdaBoost (Adaptive Boosting)
  2. Gradient Tree Boosting
  3. XGBoost

In this article, we will focus on AdaBoost and Gradient Boosting followed by their respective Python codes and a little bit about XGBoost.

Where are Boosted algorithms required?

Boosted algorithms are mainly used when there is plenty of data to make a prediction and high predictive power is expected. It is used to reduce bias and variance in supervised learning. It combines multiple weak predictors to build strong predictor.

The underlying engine used for boosting algorithms can be anything. For instance, AdaBoost is a boosting done on Decision stump. There are many other boosting algorithms which use other types of engine such as:

  1. GentleBoost
  2. Gradient Boosting
  3. LPBoost
  4. BrownBoost

Adaptive Boosting

Adaptive Boosting, or most commonly known AdaBoost, is a Boosting algorithm. This algorithm uses the method to correct its predecessor. It pays more attention to under fitted training instances by the previous model. Thus, at every new predictor the focus is more on the complicated cases more than the others.

It fits a sequence of weak learners on different weighted training data. It starts by predicting the original data set and gives equal weight to each observation. If prediction is incorrect using the first learner, then it gives higher weight to observation which have been predicted incorrectly. Being an iterative process, it continues to add learner(s) until a limit is reached in the number of models or accuracy.

Mostly, AdaBoost uses decision stamps. But, we can use any machine learning algorithm as base learner if it accepts weight on training data set. We can use AdaBoost algorithms for both classification and regression problems.

Let us consider the example of the image mentioned above. In order to build an AdaBoost classifier, consider that as a first base classifier a Decision Tree algorithm is trained to make predictions on our training data. Applying the following methodology of AdaBoost, the weight of the misclassified training instances is increased. Then the second classifier is trained and the updated weights are acknowledged. It repeats the procedure over and over again.

At the end of every model prediction we end up boosting the weights of the misclassified instances so that the next model does a better job on them, and so on.

This sequential learning technique might sound similar to Gradient Descent, except that instead of tweaking a single predictor’s parameter to minimize the cost function, AdaBoost adds predictors to the ensemble, gradually making it better.

One disadvantage of this algorithm is that the model cannot be parallelized since each predictor can only be trained after the previous one has been trained and evaluated.

Below are the steps for performing the AdaBoost algorithm:

  1. Initially, all observations are given equal weights.
  2. A model is built on a subset of data.
  3. Using this model, predictions are made on the whole dataset.
  4. Errors are calculated by comparing the predictions and actual values.
  5. While creating the next model, higher weights are given to the data points which were predicted incorrectly.
  6. Weights can be determined using the error value. For instance,the higher the error the more is the weight assigned to the observation.
  7. This process is repeated until the error function does not change, or the maximum limit of the number of estimators is reached.

Hyperparameters

base_estimators: specify the base type estimator, i.e. the algorithm to be used as base learner.

n_estimators: It defines the number of base estimators, where the default is 10 but you can increase it in order to obtain a better performance.

learning_rate: same impact as in gradient descent algorithm

max_depth: Maximum depth of the individual estimator

n_jobsindicates to the system how many processors it is allowed to use. Value of ‘-1’ means there is no limit;

random_state: makes the model’s output replicable. It will always produce the same results when you give it a fixed value as well as the same parameters and training data.

Now, let us take a quick look at how to use AdaBoost in Python using a simple example on handwritten digit recognition.

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import cross_val_predict
from sklearn.model_selection import train_test_split
from sklearn.model_selection import learning_curve
from sklearn.datasets import load_digits

Let us load the data :

dataset = load_digits()
X = dataset['data']
y = dataset['target']

X contains arrays of length 64 which are simply flattened 8x8 images. The aim of this dataset is to recognize handwritten digits. Let’s take a look at a given handwritten digit:

plt.imshow(X[4].reshape(8,8))

Hyperparameters

If we stick to a Decision Tree Classifier of depth 1 (a stump), here’s how to implement AdaBoost classifier:

reg_ada = AdaBoostClassifier(DecisionTreeClassifier(max_depth=1))
scores_ada = cross_val_score(reg_ada, X, y, cv=6)
scores_ada.mean()
0.2636257855582272

And it should head a result of around 26%, which can largely be improved. One of the key parameters is the depth of the sequential decision tree classifiers. How does accuracy improve with depth of the decision trees?

score = []
for depth in [1,2,10] :
reg_ada = AdaBoostClassifier(DecisionTreeClassifier(max_depth=depth))
scores_ada = cross_val_score(reg_ada, X, y, cv=6)
score.append(scores_ada.mean())
score
[0.2636257855582272, 0.5902852679072207, 0.9527524912410157]

And the maximal score is reached for a depth of 10 in this simple example, with an accuracy of 95.3%.

Gradient Boosting

This is another very popular Boosting algorithm which works pretty similar to what we’ve seen for AdaBoost. Gradient Boosting works by sequentially adding the previous predictors underfitted predictions to the ensemble, ensuring the errors made previously are corrected.

The difference lies in what it does with the underfitted values of its predecessor. Contrary to AdaBoost, which tweaks the instance weights at every interaction, this method tries to fit the new predictor to the residual errors made by the previous predictor.

So that you can understand Gradient Boosting it is important to understand Gradient Descent first.

Below are the steps for performing the Gradient Boosting algorithm:

  1. A model is built on a subset of data.
  2. Using this model, predictions are made on the whole dataset.
  3. Errors are calculated by comparing the predictions and actual values.
  4. A new model is created using the errors calculated as target variable. Our objective is to find the best split to minimize the error.
  5. The predictions made by this new model are combined with the predictions of the previous.
  6. New errors are calculated using this predicted value and actual value.
  7. This process is repeated until the error function does not change, or the maximum limit of the number of estimators is reached.

Hyperparameters

n_estimators: It controls the number of weak learners.
Learning_rate: Controls the contribution of weak learners in the final combination. There is a trade-off between learning_rate and n_estimators.
min_samples_split: Minimum number of observation which is required in a node to be considered for splitting. It is used to control overfitting.
min_samples_leaf: Minimum samples required in a terminal or leaf node. Lower values should be chosen for imbalanced class problems since the regions in which the minority class will be in the majority will be very small.
min_weight_fraction_leaf: similar to the previous but defines a fraction of the total number of observations instead of an integer.
max_depth : maximum depth of a tree. Used to control overfitting.
max_lead_nodes : maximum number of terminal leaves in a tree. If this is defined max_depth is ignored.
max_features : number of features it should consider while searching for the best split.

You can tune loss function for better performance.

Implementation in Python

You can find Gradient Boosting function in Scikit-Learn’s library.

# for regression
from sklearn.ensemble import GradientBoostingRegressor
model = GradientBoostingRegressor(n_estimators=3,learning_rate=1)
model.fit(X,Y)
# for classification
from sklearn.ensemble import GradientBoostingClassifier
model = GradientBoostingClassifier()
model.fit(X,Y)

XGBoost

XG Boost or Extreme Gradient Boosting is an advanced implementation of the Gradient Boosting. This algorithm has high predictive power and is ten times faster than any other gradient boosting techniques. Moreover, it includes a variety of regularization which reduces overfitting and improves overall performance.

Advantages

  • It implements regularization which helps in reducing overfit (Gradient Boosting does not have);
  • It implements parallel processing which is much faster than Gradient Boosting;
  • Allows users to define custom optimization objectives and evaluation criteria adding a whole new dimension to the model;
  • XGBoost has an in-built routine to handle missing values;
  • XGBoost makes splits up to the max_depth specified and then starts pruning the tree backwards and removes splits beyond which there is no positive gain;
  • XGBoost allows a user to run a cross-validation at each iteration of the boosting process and thus it is easy to get the exact optimum number of boosting iterations in a single run.

Boosting algorithms represent a different machine learning perspective which is turning a weak model to a stronger one to fix its weaknesses. I hope this article helped you understand how boosting works.

We have covered most of the topics related to algorithms in our series of machine learning blogs, click here. If you are inspired by the opportunities provided by machine learning, enroll in our  Data Science and Machine Learning Courses for more lucrative career options in this landscape.


Your one-stop-shop for Machine Learning is just a click away. Access our live online training and find easy solutions to all your queries here.

Priyankur

Priyankur Sarkar

Data Science Enthusiast

Priyankur Sarkar loves to play with data and get insightful results out of it, then turn those data insights and results in business growth. He is an electronics engineer with a versatile experience as an individual contributor and leading teams, and has actively worked towards building Machine Learning capabilities for organizations.

Join the Discussion

Your email address will not be published. Required fields are marked *

Suggested Blogs

Data Science: Correlation vs Regression in Statistics

In this article, we will understand the key differences between correlation and regression, and their significance. Correlation and regression are two different types of analyses that are performed on multi-variate distributions of data. They are mathematical concepts that help in understanding the extent of the relation between two variables: and the nature of the relationship between the two variables respectively. Correlation Correlation, as the name suggests is a word formed by combining ‘co’ and ‘relation’. It refers to the analysis of the relationship that is established between two variables in a given dataset. It helps in understanding (or measuring) the linear relationship between two variables.  Two variables are said to be correlated when a change in the value of one variable results in a corresponding change in the value of the other variable. This could be a direct or an indirect change in the value of variables. This indicates a relationship between both the variables.  Correlation is a statistical measure that deals with the strength of the relation between the two variables in question.  Correlation can be a positive or negative value. Positive Correlation Two variables are considered to be positively correlated when the value of one variable increases or decreases following an increase or decrease in the value of the other variable respectively.  Let us understand this better with the help of an example: Suppose you start saving your money in a bank, and they offer some amount of interest on the amount you save in the bank. The more the amount you store in the bank, the more interest you get on your money. This way, the money stored in a bank and the interest obtained on it are positively correlated. Let us take another example: While investing in stocks, it is usually said that higher the risk while investing in a stock, higher is the rate of returns on such stocks.  This shows a direct inverse relationship between the two variables since both of them increase/decrease when the other variable increases/decreases respectively. Negative Correlation Two variables are considered to be negatively correlated when the value of one variable increases following a decrease in the value of the other variable. Let us understand this with an example: Suppose a person is looking to lose weight. The one basic idea behind weight loss is reducing the number of calorie intake. When fewer calories are consumed and a significant number of calories are burnt, the rate of weight loss is quicker. This means when the amount of junk food eaten is decreased, weight loss increases. Let us take another example: Suppose a popular non-essential product that is being sold faces an increase in the price. When this happens, the number of people who purchase it will reduce and the demand would also reduce. This means, when the popularity and price of the product increases, the demand for the product reduces. An inverse proportion relationship is observed between the two variables since one value increases and the other value decreases or one value decreases and the other value increases.  Zero Correlation This indicates that there is no relationship between two variables. It is also known as a zero correlation. This is when a change in one variable doesn't affect the other variable in any way. Let us understand this with the help of an example: When the increase in height of our friend/neighbour doesn’t affect our height, since our height is independent of our friend’s height.  Correlation is used when there is a requirement to see if the two variables that are being worked upon are related to each other, and if they are, what the extent of this relationship is, and whether the values are positively or negatively correlated.  Pearson’s correlation coefficient is a popular measure to understand the correlation between two values.  Regression Regression is the type of analysis that helps in the prediction of a dependant value when the value of the independent variable is given. For example, given a dataset that contains two variables (or columns, if visualized as a table), a few rows of values for both the variables would be given. One or more of one of the variables (or column) would be missing, that needs to be found out. One of the variables would depend on the other, thereby forming an equation that relevantly represents the relationship between the two variables. Regression helps in predicting the missing value. Note: The idea behind any regression technique is to ensure that the difference between the predicted and the actual value is minimal, thereby reducing the error that occurs during the prediction of the dependent variable with the help of the independent variable. There are different types of regression and some of them have been listed below: Linear Regression This is one of the basic kinds of regression, which usually involves two variables, where one variable is known as the ‘dependent’ variable and the other one is known as an ‘independent’ variable. Given a dataset, a pattern has to be formed (linear equation) with the help of these two variables and this equation has to be used to fit the given data to a straight line. This straight-line needs to be used to predict the value for a given variable. The predicted values are usually continuous. Logistic Regression There are different types of logistic regression:  Binary logistic regression is a regression technique wherein there are only two types or categories of input that are possible, i.e 0 or 1, yes or no, true or false and so on. Multinomial logistic regression helps predict output wherein the outcome would belong to one of the more than two classes or categories. In other words, this algorithm is used to predict a nominal dependent variable. Ordinal logistic regression deals with dependant variables that need to be ranked while predicting it with the help of independent variables.  Ridge Regression It is also known as L2 regularization. It is a regression technique that helps in finding the best coefficients for a linear regression model with the help of an estimator that is known as ridge estimator. It is used in contrast to the popular ordinary least square method since the former has low variance and hence it calculates better coefficients. It doesn’t eliminate coefficients thereby not producing sparse, simple models.  Lasso Regression LASSO is an acronym that stands for ‘Least Absolute Shrinkage and Selection Operator’. It is a type of linear regression that uses the concept of ‘shrinkage’. Shrinkage is a process with the help of which values in a data set are reduced/shrunk to a certain base point (this could be mean, median, etc). It helps in creating simple, easy to understand, sparse models, i.e the models that have fewer parameters to deal with, thereby being simple.  Lasso regression is highly suited for models that have high collinearity levels, i.e a model where certain processes (such as model selection or parameter selection or variable selection) is automated.  It is used to perform L1 and L2 regularization. L1 regularization is a technique that adds a penalty to the given values of coefficients in the equation. This also results in simple, easy to use, sparse models that would contain lesser coefficients. Some of these coefficients can also be estimated off to 0 and hence eliminated from the model altogether. This way, the model becomes simple.  It is said that Lasso regression is easier to work with and understand in comparison to ridge regression.  There are significant differences between both these statistical concepts.  Difference between Correlation and Regression Let us summarize the difference between correlation and regression with the help of a table: CorrelationRegressionThere are two variables, and their relationship is understood and measured.Two variables are represented as 'dependent' and 'independent' variables, and the dependent variable is predicted.The relationship between the two variables is analysed.This concept tells about how one variable affects the other and tries to predict the dependant variable.The relationship between two variables (say ‘x’ and ‘y’) is the same if it is expressed as ‘x is related to y’ or ‘y is related to x’.There is a significant difference when we say ‘x depends on y’ and ‘y depends on x’. This is because the independent and dependent variables change.Correlation between two variables can be expressed through a single point on a graph, visually.A line or a curve is fitted to the given data, and the line or the curve is extrapolated to predict the data and make sure the line or the curve fits the data on the graph.It is a numerical value that tells about the strength of the relation between two variables.It predicts one variable based on the independent variables. (this predicted value can be continuous or discrete, depending on the type of regression) by fitting a straight line to the data.Conclusion In this article, we understood the significant differences between two statistical techniques, namely- correlation and regression with the help of examples. Correlation establishes a relationship between two variables whereas regression deals with the prediction of values and curve fitting. 
Rated 4.0/5 based on 14 customer reviews
9825
Data Science: Correlation vs Regression in Statist...

In this article, we will understand the key differ... Read More

A Peak Into the World of Data Science

Touted as the sexiest job in the 21st century, back in 2012 by Harvard Business Review, the data science world has since received a lot of attention across the entire world, cutting across industries and fields. Many people wonder what the fuss is all about. At the same time, others have been venturing into this field and have found their calling.  Eight years later, the chatter about data science and data scientists continues to garner headlines and conversations. Especially with the current pandemic, suddenly data science is on everyone’s mind. But what does data science encompass? With the current advent of technology, there are terabytes upon terabytes of data that organizations collect daily. From tracking the websites we visit - how long, how often - to what we purchase and where we go - our digital footprint is an immense source of data for a lot of businesses. Between our laptops, smartphones and our tablets - almost everything we do translates into some form of data.  On its own, this raw data will be of no use to anyone. Data science is the process that repackages the data to generate insights and answer business questions for the organization. Using domain understanding, programming and analytical skills coupled together with business sense and know-how, existing data is converted to provide actionable insights for an organization to drive business growth. The processed data is what is worth its weight in gold. By using data science, we can uncover existing insights and behavioural patterns or even predict future trends.  Here is where our highly-sought-after data scientists come in.  A data scientist is a multifaceted role in an organization. They have a wide range of knowledge as they need to marry a plethora of methods, processes and algorithms with computer science, statistics and mathematics to process the data in a format that answers the critical business questions meaningfully and with actionable insights for the organization. With these actionable data, the company can make plans that will be the most profitable to drive their business goals.  To churn out the insights and knowledge that everyone needs these days, data science has become more of a craft than a science despite its name. The data scientists need to be trained in mathematics yet have some creative and business sense to find the answers they are looking in the giant haystack of raw data. They are the ones responsible for helping to shape future business plans and goals.  It sounds like a mighty hefty job, doesn’t it? It is also why it is one of the most sought after jobs these days. The field is rapidly evolving, and keeping up with the latest developments takes a lot of dedication and time, in order to produce actionable data that the organizations can use.  The only constant through this realm of change is the data science project lifecycle. We will discuss briefly below on the critical areas of the project lifecycle. The natural tendency is to envision that it is a circular process immediately - but there will be a lot of working back and forth within some phases to ensure that the project runs smoothly.  Stage One: Business Understanding  As a child, were you one of those children that always asked why? Even when the adults would give you an answer, you followed up with a “why”? Those children will have probably grown up to be data scientists as it seems, their favourite question is: Why? By asking the why - they will get to know the problem that needs to be solved and the critical question will emerge. Once there is a clear understanding of the business problem and question, then the work can begin. Data scientists want to ensure that the insights that come from this question are supported by data and will allow the business to achieve the desired results. Therefore, the foundation stone to any data science project is in understanding the business.  Stage Two: Data Understanding  Once the problem and question have been confirmed, you need to start laying out the objectives of this project by determining the required variables to be predicted. You must know what you need from the data and what the data should address. You must collate all the information and data, which can be reasonably difficult. An agreement over the sources and the requirements of the data characteristics needs to be reached before moving forward.  Through this process, an efficient and insightful understanding is required of how the data can and will be used for the project. This operational management of the data is vital, as the data that is sourced at this stage will define the project and how effective the solutions will be in the end.  Stage Three: Data Preparation  It has been said quite often that a bulk of a data scientist’s time is spent in preparing the data for use. In this report from CrowdFlower in 2016, the percentage of time spent on cleaning and organizing data is pegged at 60%. That is more than half their day!  Since data comes in various forms, and from a multitude of sources, there will be no standardization or consistency throughout the data. Raw data needs to be managed and prepared - with all the incomplete values and attributes fixed, and all deconflicting values in the data eliminated. This process requires human intervention as you must be able to discern which data values are required to reach your end goal. If the data is not prepared according to the business understanding, the final result might not be suitable to address the issue.  Stage Four: Modeling Once the tedious process of preparation is over, it is time to get the results that will be required for this project lifecycle. There are various types of techniques that can be used, ranging from decision-tree building to neural network generation. You must decide which would be the best technique based on the question that needs to be answered. If required, multiple modeling techniques can be used; where each task must be performed individually. Generally, modeling techniques are applied more than once (per process), and there will be more than one technique used per project.  With each technique, parameters must be set based on specific criteria. You, as the data scientist, must apply your knowledge to judge the success of the modeling and rank the models used based on the results; according to pre-set criteria. Stage Five: Evaluation Once the results are churned out and extracted, we then need to refer back to the business query that we talked about in Stage One and decide if it answers the question raised; and if the model and data meet the objectives that the data science project has set out to address. The evaluation also can unveil other results that are not related to the business question but are good points for future direction or challenges that the organization might face. These results should be tabled for discussion and used for new data science projects. Final Stage: Deployment  This is almost the finishing line!  Now with the evaluated results, the team would need to sit down and have an in-depth discussion on what the data shows and what the business needs to do based on the data. The project team should come up with a suitable plan for deployment to address the issue. The deployment will still need to be monitored and assessed along the way to ensure that the project will be a successful one; backed by data.  The assessment would normally restart the project lifecycle; bringing you full circle.  Data is everywhere  In this day and age, we are surrounded by a multitude of data science applications as it crosses all industries. We will focus on these five industries, where data science is making waves. Banking & Finance  Financial institutions were the earliest adopters of data analytics, and they are all about data! From using data for fraud or anomaly detection in their banking transactions to risk analytics and algorithmic trading - one will find data plays a key role in all levels of a financial institution.  Risk analytics is one of the key areas where data science is used; as financial institutions depend on it to make strategic decisions for the financial health of the business. They need to assess each risk to manage and optimize their cost.  Logistics & Transportation  The world of logistics is a complex one. In a production line, raw materials sometimes come from all over the world to create a single product. A delay of any of the parts will affect the production line, and the output of stock will be affected drastically. If logistical delays can be predicted, the company can adjust quickly to another alternative to ensure that there will be no gap in the supply chain, ensuring that the production line will function at optimum efficiency.  Healthcare  2020 has been an interesting one. It has been a battle of a lifetime for many of us. Months have passed, and yet the virus still rages on to wreak havoc on lives and economies. Many countries have turned to data science applications to help with their fight against COVID-19. With so much data generated daily, people and governments need to know various things such as:  Epidemiological clusters so people can be quarantined to stop the spread of the virus tracking of symptoms over thousands of patients to understand how the virus transmits and mutates to find vaccines and  solutions to mitigate transmission. Manufacturing  In this field, millions can be on the line each day as there are so many moving parts that can cause delays, production issues, etc. Data science is primarily used to boost production rates, reduce cost (workforce or energy), predict maintenance and reduce risks on the production floor.  This allows the manufacturer to make plans to ensure that the production line is always operating at the optimum level, providing the best output at any given time.  Retail (Brick & Mortar, Online)  Have you ever wondered why some products in a shop are placed next to each other or how discounts on items work? All those are based on data science.  The retailers track people’s shopping routes, purchases and basket matching to work out details like where products should be placed; or what should go on sale and when to drive up the sales for an item. And that is just for the instore purchases.  Online data tracks what you are buying and suggests what you might want to buy next based on past purchase histories; or even tells you what you might want to add to your cart. That’s how your online supermarket suggests you buy bread if you have a jar of peanut butter already in your cart.  As a data scientist, you must always remember that the power in the data. You need to understand how the data can be used to find the desired results for your organization. The right questions must be asked, and it has become more of an art than a science. 
Rated 4.0/5 based on 10 customer reviews
9825
A Peak Into the World of Data Science

Touted as the sexiest job in the 21st century, ba... Read More

Future Proof Your Career With Data Skills

Data is everywhere, and we have all seen exponential growth in the data that is generated daily. Information must be extracted from this data to make sense of it, and we must gain insights from this information that will help us to understand repeating patterns. Analysing these patterns will help us to know more about consumers and their behaviour, hence provide services and manufacture products that will benefit both the organization as well as the consumers. This is where Data Science comes into the picture. The art of analysing the data, extracting patterns, applying algorithms, tweaking the data to suit our requirements, and more – are all parts of data science. The field has seen massive growth in the last few years and this growth may not stop for the next 5 years at least. Some people are apprehensive about the future, but the opportunities in the field of data science are improving day by day, creating new paths for people to get in and contribute their expertise/experiences. What is Data Science? As mentioned previously, data is generated in large amounts daily. This data generation happens everywhere, ranging from a small organization to multi-national companies. This kind of data is also known as 'big data', given their specific characteristics in terms of volume, type of data and speed at which the data gets generated. It is important to make use of this big data by processing it into something useful so that the organizations can use advanced analytics and insights to their advantage (generating better profits, more customer-reach, and so on). Who is a Data Scientist? A data scientist is a person who is trained and experienced in working with data, i.e. data gathering, data cleaning, data preparation, data transformation, and data analysis. These steps will help understand the data, extract hidden patterns and put forward insights about the data. All these processes are done with the help of algorithms which are specially designed to perform a specific task. Many analyses have revealed that Data Scientist, Machine Learning Engineer, Artificial Intelligence Engineer are some of the most sought-after jobs. Not to forget the high pay that comes with it. Data science is an intricate combination of mathematics, statistics, analytics, and computer science. Mathematics & statistics are required to understand the ideas behind the algorithms and their working. On the other hand, analytics is associated with many data cleaning, transformation, preparation and analytics operations that are performed on the data with the help of computer science (programming languages). All these skills (which a data scientist possesses) will help the businesses to thrive. Data scientists are usually those who are able to find out why things work the way they do, why they don’t work as expected, what has gone wrong in the business and how it can be fixed. All these are different processes in the world of data analytics. They would also have to interact with potential stakeholders, discuss business challenges and help improve it.  What would a day in the life of a Data Scientist look like? If the general idea of stand-up meetings and sprint meetings is not taken into consideration, a day in the life of a data scientist would revolve around gathering data, understanding it, talking to relevant people about the data, asking questions about it, reiterating the requirement and the end product, and working on how it can be achieved. It looks like this: Data collection This part deals with the collection of raw data from various resources. These resources include websites, various social media platforms, people’s profiles, and so on. All this data needs to be collected and stored in a place which is easy to access while working with the data. Data cleaning This is considered as one of the most important steps in data science. This is because good data yields great results, whereas noisy, unclean, missing and redundant data yields unsatisfactory results. Once raw data has been collected, it needs to be accessed and cleaned by various methods. Redundant rows or columns have to be deleted, missing data either needs to be filled or deleted, irrelevant columns have to be eliminated, and so on. Data transformations In this step, the data (which is usually in the form of row x column) is converted into a format that is required by the algorithm to process upon. For example- a text analysis task may require data in the form of text whereas a prediction or regression problem may require data in the form of a table, i.e. rows and columns. Based on the requirement and the end product, data has to be transformed into the respective formats. Using statistics, machine learning algorithms to solve the problem and extract insights The basics of statistics are considered to be a foundation while working as a data scientist. Understanding distributions, priors probabilities, posteriors probabilities, Bayesian theorem serve as a foundation while working with the data. The data needs to be interpreted and mangled with the help of statistics. It helps understand and solve problems by helping the data scientist extract meaningful and relevant insights/patterns from data.  Up-to-date sector knowledge It is important to stay up-to-date, know the new trends, packages, frameworks, new releases and changes that occur frequently if not on a daily basis. It is important to adapt and use whatever revolutionary technology comes our way and seems to be helpful in the specific scenario. It is essential to stay on top by knowing new algorithms, techniques, data mining algorithms, and so on.  It is important to keep learning, revising your career plan and update the skills that are necessary for the current world. Updating knowledge is vital to pursue future opportunities and make sure that your career path is aligned with your personal interests.  What about the salary? Salaries for Data Scientists are on the higher end of the spectrum, with a mean salary of about £60,000. With experience and constantly upgrading the skills, the salary can go up to £100,000 too. This also depends on the organization. If it is a start-up or a new company, they might not pay as much, but as and when the company grows, the pay-out increases. On similar lines, experience increases along with skills, which would make a data scientist more valuable to the organization.  Analytics has revealed that the number of data science-related jobs will see a surge and it has also been labelled as the 'sexiest job of the 21st century’. The demand is growing steadily. Almost every organization wishes to have a machine learning wing where data scientists would be much needed.  What are the pre-requisites to becoming a Data Scientist? If it is one of the companies from FAANG – Facebook, Amazon, Apple, Netflix, Google, it is not really a requirement to have a bachelor's degree or anything. There might be certain positions that require specific qualifications, but entry-level positions don't usually have any specific requirement of a degree. These companies certainly expect the data scientists to be hands-on in one or two programming languages (object-oriented such as C++ or Java, and Python). They might also require knowing specific frameworks (TensorFlow, Keras), deep learning algorithms (Neural networks, convolutional neural networks, recurrent neural networks), NumPy, Pandas and so on.  Machine learning is a concept which data scientists will have to be familiar with. This doesn’t mean just the definitions. It involves understanding the algorithms, the mathematical working behind it, the kind of results it would provide, the kind of cases where certain algorithms can be used and how the output can be improved by tweaking certain parameters present in the algorithm.  It is also essential to understand where Machine Learning can be used, and how it plays an important role in understanding the data, and prediction as well.  It never harms to get a bachelor’s degree, a master’s degree and a PhD. All these degrees add formidable value to the knowledge already gained.  How do I start on my Data Science Journey? Any job requires a resume where the relevant skills are mentioned in the right way and format. It is important to present yourself in the right way, and also exhibit the enthusiasm of learning and being updated. An entry-level data scientist job will require the basics of object-oriented programming, Python, scientific computing packages, basics of machine learning, statistics, analytics and hands-on programming abilities. Taking up certain foundational courses, working hands-on in projects, internships and group projects also help in providing a considerable amount of experience around working with data. It is also important to have a niche for data, be able to play around with data, extract patterns, have an eye out for insights, packages that could be used, the approach and so on.  Conclusion Technology will create new jobs, but Data Science and Artificial Intelligence will be a major part of our life in the upcoming years. This means some jobs may be lost too (because many processes which seem too trivial would be automated) but think about the new jobs that would be created! Instead of worrying about the jobs that would be lost due to AI replacing their work, it is essential to foresee and adapt to it. We have seen technologies revolutionizing the current world, and all this has happened because of ‘change’, because of how people have foreseen the circumstances and have adapted to it. For example, AI will not replace a doctor. But AI can replace a doctor without AI knowledge. We need to be up-to-date with the current trends, the technologies and the new and ever-changing requirements of the real world. The focus should be on learning to work with machines, not outwork them. 
Rated 4.0/5 based on 10 customer reviews
10983
Future Proof Your Career With Data Skills

Data is everywhere, and we have all seen exponenti... Read More