Bagging and Random Forest in Machine Learning

In today’s world, innovations happen on a daily basis, quickly rendering previous versions of a product, service or skill-set outdated. In such a dynamic and chaotic space, how can we make an informed decision without getting carried away by plain hype? To make the right decision, we should follow a set of processes: investigate the current scenario, chart down our expectations, collect reviews from others, explore the options, weigh the pros and cons of each, select the best solution, and then take the requisite action. 

For example, if you are looking to purchase a computer, will you simply walk up to the store and pick any laptop or notebook? It’s highly unlikely that you would do so. You would probably search on Amazon, browse a few web portals where people have posted their reviews and compare different models, checking for their features, specifications and prices. You will also probably ask your friends and colleagues for their opinion. In short, you would not directly jump to a conclusion, but will instead make a decision considering the opinions and reviews of other people as well. 


Ensemble models in machine learning operate in a similar manner. They combine the decisions from multiple models to improve the overall performance. The objective of this article is to introduce the concept of ensemble learning and to understand algorithms like bagging and random forest, which use this technique. 

What is Ensemble Learning? 

Ensemble methods aim at improving the predictive performance of a given statistical learning or model fitting technique. The general principle of ensemble methods is to construct a linear combination of some model fitting method, instead of using a single fit of the method. 

An ensemble is itself a supervised learning algorithm, because it can be trained and then used to make predictions. Ensemble methods combine several decision tree classifiers to produce better predictive performance than a single decision tree classifier. The main principle behind an ensemble model is that a group of weak learners come together to form a strong learner, thus increasing the accuracy of the model.

When we try to predict the target variable using any machine learning technique, the main causes of the difference between actual and predicted values are noise, variance, and bias. Ensembling helps to reduce these factors (except noise, which is the irreducible error). The error due to noise comes from noise in the training data and cannot be removed; however, the errors due to bias and variance can be reduced.
The total error can be expressed as follows: 

Total Error = Bias + Variance + Irreducible Error 

A measure such as mean square error (MSE) captures all of these errors for a continuous target variable and can be represented as follows: 

MSE = E[(Y − f̂(x))²]

where E stands for the expectation, Y represents the actual target values and f̂(x) represents the predicted values of the target variable. The MSE can be broken down into its components, namely bias, variance and noise, as shown in the following formula: 

E[(Y − f̂(x))²] = Bias² + Variance + Irreducible Error

Using techniques like Bagging and Boosting helps to decrease the variance and increase the robustness of the model. Combinations of multiple classifiers decrease variance, especially in the case of unstable classifiers, and may produce a more reliable classification than a single classifier. 

Ensemble Algorithm 

The goal of ensemble algorithms is to combine the predictions of several base estimators built with a given learning algorithm in order to improve generalizability / robustness over a single estimator. 


There are two families of ensemble methods which are usually distinguished: 

  1. Averaging methods. The driving principle is to build several estimators independently and then to average their predictions. On average, the combined estimator is usually better than any single base estimator because its variance is reduced.
    Examples: Bagging methods, Forests of randomized trees. 
  2. Boosting methods. Base estimators are built sequentially and one tries to reduce the bias of the combined estimator. The motivation is to combine several weak models to produce a powerful ensemble.
    Examples: AdaBoost, Gradient Tree Boosting.

Advantages of Ensemble Algorithm 

  • Ensemble is a proven method for improving the accuracy of the model and works in most cases. 
  • Ensemble makes the model more robust and stable, ensuring decent performance on test cases in most scenarios. 
  • You can use an ensemble to capture linear and simple as well as nonlinear and complex relationships in the data. This can be done by using two different models and forming an ensemble of the two. 

Disadvantages of Ensemble Algorithm 

  • Ensemble reduces the model's interpretability and makes it very difficult to draw any crucial business insights at the end. 
  • It is time-consuming and thus might not be the best idea for real-time applications. 
  • The selection of models for creating an ensemble is an art which is really hard to master. 

Basic Ensemble Techniques 

  • Max Voting: Max voting is one of the simplest ways of combining predictions from multiple machine learning algorithms. Each base model makes a prediction and votes for each sample, and the sample class with the highest number of votes becomes the final prediction. It is mainly used for classification problems (a short sketch follows this list). 
  • Averaging: Averaging can be used while estimating probabilities in classification tasks, but it is usually used for regression problems. Predictions are extracted from multiple models and the average of the predictions is used to make the final prediction. 
  • Weighted Average: Like averaging, weighted averaging is also used for regression tasks. Alternatively, it can be used while estimating probabilities in classification problems. Base learners are assigned different weights, which represent the importance of each model in the prediction. 
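
Below is a minimal sketch of max voting using scikit-learn's VotingClassifier; the toy dataset and the three base models are arbitrary choices for illustration, not part of this article's examples.

from sklearn.datasets import make_classification 
from sklearn.ensemble import VotingClassifier 
from sklearn.linear_model import LogisticRegression 
from sklearn.neighbors import KNeighborsClassifier 
from sklearn.tree import DecisionTreeClassifier 

# toy data for illustration 
X, y = make_classification(n_samples=200, random_state=1) 

# max voting: each base model votes and the majority class wins 
voter = VotingClassifier( 
    estimators=[('lr', LogisticRegression(max_iter=1000)), 
                ('dt', DecisionTreeClassifier(random_state=1)), 
                ('knn', KNeighborsClassifier())], 
    voting='hard') 
voter.fit(X, y) 
print(voter.predict(X[:5])) 

Switching voting='hard' to voting='soft' averages the predicted class probabilities instead, which corresponds to the averaging technique described above.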

Ensemble Methods 

Ensemble methods became popular as a relatively simple device to improve the predictive performance of a base procedure. There are different reasons for this: the bagging procedure turns out to be a variance reduction scheme, at least for some base procedures. Boosting methods, on the other hand, primarily reduce the (model) bias of the base procedure. This already indicates that bagging and boosting are very different ensemble methods. From the perspective of prediction, random forests are about as good as boosting, and often better than bagging.  

Bootstrap Aggregation, or Bagging, trains similar learners on small bootstrapped sample populations and then takes the mean of all the predictions. 

  • It combines Bootstrapping and Aggregation to form one ensemble model 
  • Reduces the variance error and helps to avoid overfitting 

Bagging algorithms include: 

  • Bagging meta-estimator 
  • Random forest 

Boosting refers to a family of algorithms which convert weak learners into strong learners. Boosting is a sequential process, where each subsequent model attempts to correct the errors of the previous model. Because boosting is focused on reducing bias, boosting algorithms can be prone to overfitting. To avoid overfitting, parameter tuning plays an important role in boosting algorithms. Some examples of boosting algorithms are mentioned below: 

  • AdaBoost 
  • GBM 
  • XGBM 
  • Light GBM 
  • CatBoost 

Why use ensemble models? 

Ensemble models help in improving algorithm accuracy as well as the robustness of a model. Both Bagging and Boosting should be known by data scientists and machine learning engineers and especially people who are planning to attend data science/machine learning interviews. 

Ensemble learning uses hundreds to thousands of models of the same algorithm, which then work hand in hand to find the correct classification. You may also consider the fable of the blind men and the elephant to understand ensemble learning: each blind man found one feature of the elephant and each thought it was something different. Had they worked together and discussed it among themselves, they might have figured out what it was. 

Using techniques like bagging and boosting leads to increased robustness of statistical models and decreased variance. Now the question becomes: between these two “B” words, which one is better? 

Which is better, Bagging or Boosting? 

There is no perfectly correct answer to that. It depends on the data, the simulation and the circumstances. 

Bagging and Boosting decrease the variance of your single estimate as they combine several estimates from different models, so the result may be a model with higher stability.

If the problem is that the single model gets a very low performance, Bagging will rarely get a better bias. However, Boosting could generate a combined model with lower errors as it optimizes the advantages and reduces pitfalls of the single model. 

By contrast, if the difficulty of the single model is overfitting, then Bagging is the better option. Boosting, for its part, doesn’t help to avoid overfitting; in fact, this technique faces this problem itself. For this reason, Bagging is effective more often than Boosting. In this article we will discuss Bagging; we will cover Boosting in the next post. But first, let us look into the very important concept of bootstrapping. 

Bootstrap Sampling 

Sampling is the process of selecting a subset of observations from the population with the purpose of estimating some parameters about the whole population. Resampling methods, on the other hand, are used to improve the estimates of the population parameters. 

[Figure 1: Bootstrap sampling — each bootstrap sample contains a different mix of observations from the original data set]

In machine learning, the bootstrap method refers to random sampling with replacement; such a sample is referred to as a resample. It allows the model or algorithm to get a better understanding of the various biases, variances and features that exist in the data. Taking a sample of the data allows the resample to contain different characteristics than the data set might have as a whole. This is demonstrated in figure 1, where each sample population has different pieces and none are identical. This affects the overall mean, standard deviation and other descriptive metrics of the data set, and in turn it can lead to more robust models. 

Bootstrapping is also great for small data sets that can have a tendency to overfit. In fact, we recommended this to one company who was concerned because their data sets were far from “Big Data”. Bootstrapping can be a solution in this case because algorithms that utilize bootstrapping can be more robust and can handle new data sets, depending on the methodology chosen (boosting or bagging). 

The reason for using the bootstrap method is that it can test the stability of a solution. Using multiple sample data sets and then testing multiple models increases robustness. Perhaps one sample data set has a larger mean than another, or a different standard deviation. This might break a model that was overfit and never tested on data sets with different variations. 

One of the many reasons bootstrapping has become very common is the increase in computing power, which allows many more permutations to be done with different resamples than was previously possible. Bootstrapping is used in both Bagging and Boosting. 

Let us assume we have a sample of ‘n’ values (x) and we’d like to get an estimate of the mean of the sample. 

mean(x) = 1/n * sum(x) 

Consider a sample of 100 values (x) from which we would like to estimate the mean. We can calculate the mean directly from the sample as: 

mean(x) = 1/100 * sum(x)

We know that our sample is small and that the mean has an error in it. We can improve the estimate of our mean using the bootstrap procedure: 

  1. Create many (e.g. 1000) random sub-samples of the data set with replacement (meaning we can select the same value multiple times). 
  2. Calculate the mean of each sub-sample 
  3. Calculate the average of all of our collected means and use that as our estimated mean for the data 

Example: Suppose we used 3 resamples and got the mean values 2.3, 4.5 and 3.3. Taking the average of these, we would estimate the mean of the data to be 3.367. This process can be used to estimate other quantities like the standard deviation and even quantities used in machine learning algorithms, like learned coefficients. 
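
As a minimal sketch of the three-step procedure above (the data here is a made-up sample, not from the article), the bootstrap estimate of the mean can be computed with NumPy:

import numpy as np 

rng = np.random.default_rng(1) 
# hypothetical sample of 100 values 
data = rng.normal(loc=3.5, scale=1.0, size=100) 

# 1. draw many sub-samples with replacement 
# 2. calculate the mean of each sub-sample 
# 3. average the collected means to get the bootstrap estimate 
boot_means = [rng.choice(data, size=len(data), replace=True).mean() for _ in range(1000)] 
print('Bootstrap estimate of the mean:', np.mean(boot_means)) 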

While using Python, we do not have to implement the bootstrap method manually. The scikit-learn library provides an implementation that creates a single bootstrap sample of a dataset. 

The resample() scikit-learn function can be used for sampling. It takes as arguments the data array, whether or not to sample with replacement, the size of the sample, and the seed for the pseudorandom number generator used prior to the sampling. 

For example, let us create a bootstrap that creates a sample with replacement with 4 observations and uses a value of 1 for the pseudorandom number generator. 

boot = resample(data, replace=True, n_samples=4, random_state=1)

As the bootstrap API does not make it easy to gather the out-of-bag observations that could be used as a test set to evaluate a fitted model, in the univariate case we can gather the out-of-bag observations using a simple Python list comprehension. 

# out of bag observations 
oob = [x for x in data if x not in boot]

Let us look at a small example and execute it.

# scikit-learn bootstrap 
from sklearn.utils import resample 
# data sample 
data = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6] 
# prepare bootstrap sample 
boot = resample(data, replace=True, n_samples=4, random_state=1) 
print('Bootstrap Sample: %s' % boot) 
# out of bag observations 
oob = [x for x in data if x not in boot] 
print('OOB Sample: %s' % oob) 

The output will include the observations in the bootstrap sample and those observations in the out-of-bag sample.

Bootstrap Sample: [0.6, 0.4, 0.5, 0.1] 
OOB Sample: [0.2, 0.3]
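
Note that the list comprehension above compares values, so it only behaves correctly when all observations are unique. A more general approach (a sketch, not part of the original example) is to resample row indices and recover the out-of-bag rows from the indices that were never drawn:

import numpy as np 
from sklearn.utils import resample 

data = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6]) 
# resample indices instead of values 
idx = resample(np.arange(len(data)), replace=True, n_samples=4, random_state=1) 
# out-of-bag rows are the indices that were never drawn 
oob_idx = np.setdiff1d(np.arange(len(data)), idx) 
print('Bootstrap Sample:', data[idx]) 
print('OOB Sample:', data[oob_idx]) 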

Bagging 

Bootstrap Aggregation, also known as Bagging, is a powerful ensemble method that was proposed by Leo Breiman in 1994 to prevent overfitting. The concept behind bagging is to combine the predictions of several base learners to create a more accurate output. Bagging is the application of the Bootstrap procedure to a high-variance machine learning algorithm, typically decision trees. 

  1. Suppose there are N observations and M features. A sample of observations is selected randomly with replacement (bootstrapping). 
  2. A subset of features is selected to create a model with the sampled observations and the subset of features. 
  3. The feature from the subset which gives the best split on the training data is used to split the node. 
  4. This is repeated to create many models, and every model is trained in parallel. 
  5. The final prediction is based on the aggregation of the predictions from all the models. 

This approach can be used with machine learning algorithms that have a high variance, such as decision trees. A separate model is trained on each bootstrap sample of data and the average output of those models used to make predictions. This technique is called bootstrap aggregation or bagging for short. 

Variance means that an algorithm’s performance is sensitive to the training data, with high variance suggesting that the more the training data is changed, the more the performance of the algorithm will vary. 

The performance of high variance machine learning algorithms like unpruned decision trees can be improved by training many trees and taking the average of their predictions. Results are often better than a single decision tree. 
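
The following is a minimal sketch of this idea using scikit-learn's BaggingRegressor on a synthetic dataset; the data and parameter values are illustrative assumptions, not taken from this article. Each unpruned tree is fit on a bootstrap sample and the predictions are averaged.

from sklearn.datasets import make_regression 
from sklearn.ensemble import BaggingRegressor 
from sklearn.model_selection import cross_val_score 
from sklearn.tree import DecisionTreeRegressor 

X, y = make_regression(n_samples=300, n_features=4, noise=10.0, random_state=0) 

single_tree = DecisionTreeRegressor(random_state=0) 
# 50 unpruned trees, each trained on a bootstrap sample, predictions averaged 
bagged_trees = BaggingRegressor(DecisionTreeRegressor(), n_estimators=50, random_state=0) 

print('Single tree R^2 :', cross_val_score(single_tree, X, y, cv=5).mean()) 
print('Bagged trees R^2:', cross_val_score(bagged_trees, X, y, cv=5).mean()) 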

What Bagging does is help reduce variance from models that might be very accurate, but only on the data they were trained on. This is also known as overfitting. 

Overfitting is when a function fits the data too well. Typically this is because the actual equation is much too complicated to take into account each data point and outlier. 

[Figure: Overfitting in machine learning]

Bagging gets around this by creating its own variance in the data, sampling with replacement while it tests multiple hypotheses (models). In turn, this reduces the noise by utilizing multiple samples that would most likely be made up of data with various attributes (median, average, etc.). 

Once each model has developed a hypothesis, the models use voting for classification or averaging for regression. This is where the “Aggregating” in “Bootstrap Aggregating” comes into play. Each hypothesis has the same weight as all the others. When we later discuss boosting, this is one of the places where the two methodologies differ. 

[Figure: Bagging in machine learning]

Essentially, all these models run at the same time, and vote on which hypothesis is the most accurate. 

This helps to decrease variance i.e. reduce the overfit. 

Advantages 

  • Bagging takes advantage of ensemble learning wherein multiple weak learners outperform a single strong learner.  
  • It helps reduce variance and thus helps us avoid overfitting. 

Disadvantages 

  • There is loss of interpretability of the model. 
  • There can possibly be a problem of high bias if not modeled properly. 
  • While bagging gives us more accuracy, it is computationally expensive and may not be desirable depending on the use case. 

There are many bagging algorithms of which perhaps the most prominent would be Random Forest.  

Decision Trees 

Decision trees are simple but intuitive models. Using a top-down approach, a root node creates binary splits until a particular criterion is fulfilled. This binary splitting of nodes results in a predicted value on the basis of the interior nodes leading to the terminal (final) nodes. For a classification problem, a decision tree outputs a predicted target class for each terminal node produced. We have covered the decision tree algorithm in detail for both classification and regression in another article. 
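
As a quick, hedged illustration of the base learner that the rest of this article builds on, here is a small decision tree fitted on scikit-learn's bundled iris data (an arbitrary example dataset) and printed as its top-down binary splits:

from sklearn.datasets import load_iris 
from sklearn.tree import DecisionTreeClassifier, export_text 

X, y = load_iris(return_X_y=True) 
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y) 
# print the learned splits, from the root node down to the terminal nodes 
print(export_text(tree)) 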

Limitations to Decision Trees 

Decision trees tend to have high variance when they utilize different training and test sets of the same data, since they tend to overfit on training data. This leads to poor performance when new and unseen data is added. This limits the usage of decision trees in predictive modeling. However, using ensemble methods, models that utilize decision trees can be created as a foundation for producing powerful results. 

Bootstrap Aggregating Trees 

As we have already discussed bootstrap aggregating (or bagging), we can create an ensemble (forest) of trees in which multiple training sets are generated with replacement, meaning individual data instances can appear more than once. Once the training sets are created, a CART model can be trained on each subsample. 

Features of Bagged Trees 

  • Reduces variance by averaging the ensemble's results. 
  • The resulting model uses the entire feature space when considering node splits. 
  • Bagged trees are grown deep without pruning; each individual tree has high variance but lower bias, and averaging such trees can help improve predictive power. 

Limitations to Bagging Trees 

The main limitation of bagging trees is that the entire feature space is used when creating splits in the trees. If a few variables within the feature space dominate the predictions, most trees will use them for their top splits, producing a forest of highly correlated trees; averaging correlated trees does little to reduce variance, which defeats the purpose of bagging. 

Why is a Forest better than One Tree?

The main objective of a machine learning model is to generalize properly to new and unseen data. When we have a flexible model, overfitting takes place. A flexible model is said to have high variance because the learned parameters (such as the structure of the decision tree) will vary with the training data. 

On the other hand, an inflexible model is said to have high bias as it makes assumptions about the training data. An inflexible model may not have the capacity to fit even the training data, and in both cases (high variance and high bias) the model is not able to generalize to new and unseen data properly. 

You can go through our article on one of the foundational concepts in machine learning, the bias-variance tradeoff, which will help you understand the balance between creating a model so flexible that it memorizes the training data and a model so inflexible that it cannot learn the training data.  

The main reason a decision tree is prone to overfitting when we do not limit the maximum depth is its unlimited flexibility: it keeps growing until there is one leaf node for every single observation. 

Instead of limiting the depth of the tree, which reduces variance at the cost of an increase in bias, we can combine many decision trees into a single ensemble model known as the random forest.

What is Random Forest algorithm? 

Random forest is like a bootstrapping algorithm combined with the decision tree (CART) model. Suppose we have 1,000 observations in the complete population with 10 variables. Random forest will try to build multiple CART models with different samples and different initial variables. For instance, it will take a random sample of 100 observations and 5 randomly chosen initial variables to build a CART model, then repeat the process, say, 10 times, and finally make a prediction for each observation. The final prediction is a function of the individual predictions; it can simply be their mean. 

The random forest is a model made up of many decision trees. Rather than just simply averaging the predictions of the trees (which we could call a “forest”), this model uses two key concepts that give it the name random (a brief sketch follows the list below):

  1. Random sampling of training data points when building trees 
  2. Random subsets of features considered when splitting nodes 
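
A minimal sketch of the second concept (the synthetic dataset and parameter values are illustrative assumptions): in scikit-learn, the max_features parameter controls how many randomly chosen features are considered at each split.

from sklearn.datasets import make_regression 
from sklearn.ensemble import RandomForestRegressor 
from sklearn.model_selection import cross_val_score 

X, y = make_regression(n_samples=300, n_features=10, noise=10.0, random_state=0) 

# consider roughly a third of the features at every split, which decorrelates the trees 
forest = RandomForestRegressor(n_estimators=100, max_features=0.33, random_state=0) 
print('Random forest R^2:', cross_val_score(forest, X, y, cv=5).mean()) 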

How the Random Forest Algorithm Works 

The basic steps involved in performing the random forest algorithm are mentioned below: 

  1. Pick N random records from the dataset. 
  2. Build a decision tree based on these N records. 
  3. Choose the number of trees you want in your algorithm and repeat steps 1 and 2. 
  4. In case of a regression problem, for a new record, each tree in the forest predicts a value for Y (output). The final value can be calculated by taking the average of all the values predicted by all the trees in the forest. Or, in the case of a classification problem, each tree in the forest predicts the category to which the new record belongs. Finally, the new record is assigned to the category that wins the majority vote. 

Using Random Forest for Regression 

Here we have a problem where we have to predict the gas consumption (in millions of gallons) in 48 US states based on petrol tax (in cents), per capita income (dollars), paved highways (in miles) and the proportion of population with the driving license. We will use the random forest algorithm via the Scikit-Learn Python library to solve this regression problem. 

First we import the necessary libraries and our dataset. 

import pandas as pd 
import numpy as np 
dataset = pd.read_csv('/content/petrol_consumption.csv') 
dataset.head() 

   Petrol_tax  Average_income  paved_Highways  Population_Driver_licence(%)  Petrol_Consumption
0         9.0            3571            1976                         0.525                 541
1         9.0            4092            1250                         0.572                 524
2         9.0            3865            1586                         0.580                 561
3         7.5            4870            2351                         0.529                 414
4         8.0            4399             431                         0.544                 410

You will notice that the values in our dataset are not very well scaled. Let us scale them down before training the algorithm. 

Preparing Data For Training 

We will perform two tasks in order to prepare the data. Firstly we will divide the data into ‘attributes’ and ‘label’ sets. The resultant will then be divided into training and test sets. 

X = dataset.iloc[:, 0:4].values 
y = dataset.iloc[:, 4].values

Now let us divide the data into training and testing sets:

from sklearn.model_selection import train_test_split 
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

Feature Scaling 

The dataset is not yet scaled: the Average_Income field has values in the range of thousands while Petrol_tax has values in the range of tens. It will be better if we scale our data. We will use Scikit-Learn's StandardScaler class to do the same. 

# Feature Scaling 
from sklearn.preprocessing import StandardScaler 
sc = StandardScaler() 
X_train = sc.fit_transform(X_train) 
X_test = sc.transform(X_test)

Training the Algorithm 

Now that we have scaled our dataset, let us train the random forest algorithm to solve this regression problem. 

from sklearn.ensemble import RandomForestRegressor 
regressor = RandomForestRegressor(n_estimators=20, random_state=0) 
regressor.fit(X_train, y_train) 
y_pred = regressor.predict(X_test)

The RandomForestRegressor class is used to solve regression problems via random forest. Its most important parameter is n_estimators, which defines the number of trees in the random forest. Here we start with n_estimators=20 and check the performance of the algorithm. You can find details for all of the parameters of RandomForestRegressor here. 

Evaluating the Algorithm 

Let us evaluate the performance of the algorithm. For regression problems the metrics used to evaluate an algorithm are mean absolute error, mean squared error, and root mean squared error.  

from sklearn import metrics 
print('Mean Absolute Error:', metrics.mean_absolute_error(y_test, y_pred)) 
print('Mean Squared Error:', metrics.mean_squared_error(y_test, y_pred)) 
print('Root Mean Squared Error:',
      np.sqrt(metrics.mean_squared_error(y_test, y_pred)))

Output:

Mean Absolute Error: 51.76500000000001 
Mean Squared Error: 4216.166749999999 
Root Mean Squared Error: 64.93201637097064 

With 20 trees, the root mean squared error is 64.93, which is greater than 10 percent of the average petrol consumption (576.77). This may indicate, among other things, that we have not used enough estimators (trees). 

Let us now change the number of estimators to 200 and re-evaluate the model. 
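The change amounts to re-instantiating the regressor with n_estimators=200 and repeating the fit, prediction and metric steps shown above (a sketch, reusing the objects already defined):

regressor = RandomForestRegressor(n_estimators=200, random_state=0) 
regressor.fit(X_train, y_train) 
y_pred = regressor.predict(X_test) 
print('Mean Absolute Error:', metrics.mean_absolute_error(y_test, y_pred)) 
print('Mean Squared Error:', metrics.mean_squared_error(y_test, y_pred)) 
print('Root Mean Squared Error:',
      np.sqrt(metrics.mean_squared_error(y_test, y_pred)))

With 200 trees, the results are as follows: 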

Mean Absolute Error: 48.33899999999999 
Mean Squared Error: 3494.2330150000003 
Root Mean Squared Error: 59.112037818028234 

The graph below shows the decrease in the root mean squared error (RMSE) as the number of estimators increases.  

[Figure: RMSE vs. number of estimators]

You will notice that the error values decrease as the number of estimators increases. You may consider 200 a good value for n_estimators, as the rate of decrease in error diminishes beyond that point. You may also try playing around with other parameters to obtain a better result; a sweep over n_estimators is sketched below. 
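A curve like the one above can be produced with a simple sweep over n_estimators, reusing the training and test splits defined earlier. The list of values below is purely illustrative:

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error

# Train a forest for several values of n_estimators and record the test RMSE.
rmse_by_n = {}
for n in [10, 20, 50, 100, 200, 300]:
    model = RandomForestRegressor(n_estimators=n, random_state=0)
    model.fit(X_train, y_train)
    rmse_by_n[n] = np.sqrt(mean_squared_error(y_test, model.predict(X_test)))
print(rmse_by_n)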

Using Random Forest for Classification

Now let us consider a classification problem: predicting whether a bank currency note is authentic or not based on four attributes, namely the variance, skewness, kurtosis and entropy of the wavelet-transformed image of the note. We will use RandomForestClassifier to solve this binary classification problem. Let’s get started. 

import pandas as pd 
import numpy as np 
dataset = pd.read_csv('/content/bill_authentication.csv') 
dataset.head()

   Variance  Skewness  Kurtosis   Entropy  Class
0   3.62160    8.6661   -2.8073  -0.44699      0
1   4.54590    8.1674   -2.4586  -1.46210      0
2   3.86600   -2.6383    1.9242   0.10645      0
3   3.45660    9.5228   -4.0112  -3.59440      0
4   0.32924   -4.4552    4.5718  -0.98880      0

Similar to the data we used previously for the regression problem, this data is not scaled. Let us prepare the data for training. 

Preparing Data For Training 

The following code divides data into attributes and labels: 

X = dataset.iloc[:, 0:4].values 
y = dataset.iloc[:, 4].values 

The following code divides data into training and testing sets:

from sklearn.model_selection import train_test_split 
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0) 

Feature Scaling 

We will do the same thing as we did for the previous problem. 

# Feature Scaling 
from sklearn.preprocessing import StandardScaler 
sc = StandardScaler() 
X_train = sc.fit_transform(X_train) 
X_test = sc.transform(X_test)

Training the Algorithm 

Now that we have scaled our dataset, let us train the random forest algorithm to solve this classification problem. 

from sklearn.ensemble import RandomForestClassifier 
classifier = RandomForestClassifier(n_estimators=20, random_state=0) 
classifier.fit(X_train, y_train) 
y_pred = classifier.predict(X_test)

For classification, we have used the RandomForestClassifier class of the sklearn.ensemble library. It takes n_estimators as a parameter, which defines the number of trees in our random forest. As in the regression problem, we have started with 20 trees here. You can find details of all the parameters of RandomForestClassifier in the Scikit-Learn documentation.

Evaluating the Algorithm 

For evaluating classification problems, the metrics used are accuracy, the confusion matrix, precision, recall and the F1 score. 

from sklearn.metrics import classification_report, confusion_matrix, accuracy_score 
print(confusion_matrix(y_test,y_pred)) 
print(classification_report(y_test,y_pred)) 
print(accuracy_score(y_test, y_pred)) 

The output will look something like this: 

[[155   2]
 [  1 117]]
              precision    recall  f1-score   support

           0       0.99      0.99      0.99       157
           1       0.98      0.99      0.99       118

    accuracy                           0.99       275
   macro avg       0.99      0.99      0.99       275
weighted avg       0.99      0.99      0.99       275

0.9890909090909091

The accuracy achieved by our random forest classifier with 20 trees is 98.90%. Let us change the number of trees to 200.

from sklearn.ensemble import RandomForestClassifier 
classifier = RandomForestClassifier(n_estimators=200, random_state=0) 
classifier.fit(X_train, y_train) 
y_pred = classifier.predict(X_test) 
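The evaluation calls are the same as before; they are repeated here for completeness so that the output below can be reproduced:

# Already imported above; repeated here so the snippet is self-contained.
from sklearn.metrics import classification_report, confusion_matrix, accuracy_score
print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred))
print(accuracy_score(y_test, y_pred))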

Output:

[[155   2]
 [  1 117]]
              precision    recall  f1-score   support

           0       0.99      0.99      0.99       157
           1       0.98      0.99      0.99       118

    accuracy                           0.99       275
   macro avg       0.99      0.99      0.99       275
weighted avg       0.99      0.99      0.99       275

0.9890909090909091

Unlike the regression problem, changing the number of estimators for this problem did not make any difference in the results.

[Figure: Random forest classification accuracy]

An accuracy of 98.9% is pretty good. In this case, we have seen that there is not much improvement when the number of trees is increased. You may try playing around with other parameters of the RandomForestClassifier class and see if you can improve on these results; a hypothetical grid search is sketched below. 
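As an example of what such tuning might look like, the following snippet runs a small grid search over a few RandomForestClassifier parameters. The parameter values are illustrative assumptions, not recommendations:

from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# Illustrative parameter grid; adjust the values for your own experiments.
param_grid = {
    'n_estimators': [20, 100, 200],
    'max_depth': [None, 5, 10],
    'max_features': ['sqrt', 'log2'],
}
search = GridSearchCV(RandomForestClassifier(random_state=0),
                      param_grid, cv=5, scoring='accuracy')
search.fit(X_train, y_train)
print(search.best_params_, search.best_score_)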

Advantages and Disadvantages of using Random Forest 

As with any algorithm, there are advantages and disadvantages to using it. Let us look into the pros and cons of using Random Forest for classification and regression. 

Advantages 

  • Because the forest averages the predictions of many trees, each trained on a different random subset of the data, it is less dependent on any single tree's bias and less prone to overfitting. 
  • The random forest algorithm is very stable. Adding new data to the dataset has little effect on the overall model, since the new data may influence one tree but is unlikely to influence all of the trees. 
  • The random forest algorithm works well when you have both categorical and numerical features. 
  • The random forest algorithm performs relatively well even when the dataset has missing values. 

Disadvantages 

  • A major disadvantage of random forests lies in their complexity: because a large number of decision trees are combined, they require considerably more computational resources. 
  • Due to this complexity, training takes more time than it does for many other algorithms. 

Summary 

In this article we covered what ensemble learning is and discussed the basic ensemble techniques. We also looked at bootstrap sampling, which involves repeatedly resampling a dataset with replacement and allows the model or algorithm to get a better understanding of the various features. We then moved on to bagging, followed by random forest. Finally, we implemented random forest in Python for both regression and classification and concluded that increasing the number of trees (estimators) does not always make a difference in a classification problem, whereas in regression it can have a noticeable impact.  

We have covered most of the topics related to algorithms in our series of machine learning blogs. If you are inspired by the opportunities provided by machine learning, enrol in our Data Science and Machine Learning courses for more lucrative career options in this landscape.




Priyankur Sarkar

Data Science Enthusiast

Priyankur Sarkar loves to play with data and get insightful results out of it, and then turn those insights into business growth. He is an electronics engineer with versatile experience both as an individual contributor and in leading teams, and he has actively worked towards building Machine Learning capabilities for organizations.
