How to Interpret R Squared and Goodness of Fit in Regression Analysis

Regression Analysis is a set of statistical processes that are at the core of data science. Among statistical modelling techniques, it includes some of the most well-understood models and helps in interpreting machine learning algorithms. Its real-life applications can be seen in a wide range of domains, ranging from advertising and medical research to agricultural science and even different sports. 

In linear regression models, R-squared is a goodness-of-fit measure. It indicates the strength of the relationship between the model and the dependent variable, and it is measured on a convenient scale of 0–100%. 

Once you have fit a linear regression model, there are a few considerations that you need to address: 

  • How well does the model fit the data? 
  • How well does it explain the changes in the dependent variable? 

In this article, we will learn about R-squared (R²), its interpretation, its limitations, and a few miscellaneous insights about it. 

Let us first understand the fundamentals of Regression Analysis and its necessity. 

What is Regression Analysis? 

Regression Analysis is a well-known statistical learning technique that allows you to examine the relationship between the independent variables (or explanatory variables) and the dependent variables (or response variables). It requires you to formulate a mathematical model that can be used to determine an estimated value as close as possible to the actual value. 

Two terms are essential to understanding Regression Analysis: 

  • Dependent variables - The factors that you want to understand or predict. 
  • Independent variables - The factors that influence the dependent variable. 

Consider a situation where you are given data about a group of students on certain factors: number of hours of study per day, attendance, and scores in a particular exam. The Regression technique allows you to identify the most essential factors, the factors that can be ignored and the dependence of one factor on others.  

There are mainly two objectives of a Regression Analysis technique: 

  • Explanatory analysis - This analysis identifies and measures the influence of the explanatory variable on the response variable according to a certain model. 
  • Predictive analysis - This analysis is used to predict the value assumed by the dependent variable.  

Why use Regression Analysis? 

The technique generates a regression equation in which the relationship between the explanatory variable and the response variable is represented by the parameters of the equation. 

You can use the Regression Analysis to perform the following: 

  • To model multiple independent variables. 
  • To add continuous and categorical variables having numerous distinct groups based on a characteristic. 
  • To model the curvature using polynomial terms. 
  • To determine the effect of a certain independent variable on another variable by assessing the interaction terms.  

What are Residuals? 

Residuals identify the deviation of the observed values from the expected values. They are also referred to as error or noise terms. A residual gives an insight into how far the model is from the actual value, although residuals themselves have no real-life representation. 

Figure: Regression line and residual plots (Source: hatarilabs.com)

The calculation of the real values of the intercept, slope, and residual terms can be a complicated task. However, the Ordinary Least Squares (OLS) regression technique can help us estimate an efficient model. The technique minimizes the sum of the squared residuals. With the help of residual plots, you can check whether the observed error is consistent with the stochastic error (the differences between the expected and observed values must be random and unpredictable).  
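To make this concrete, below is a minimal sketch (not part of the original article) of fitting a one-variable OLS model and computing its residuals. It assumes NumPy is available and uses synthetic data; the names w1 and b match the model equation used later in this article.

import numpy as np

# Synthetic data: e.g. hours of study per day (x) vs. exam score (y)
rng = np.random.default_rng(42)
x = rng.uniform(0, 10, size=50)
y = 35 + 5 * x + rng.normal(0, 6, size=50)   # a true line plus random noise

# Closed-form OLS estimates of the slope (w1) and intercept (b);
# these are the values that minimize the sum of squared residuals
w1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b = y.mean() - w1 * x.mean()

fitted = w1 * x + b
residuals = y - fitted                       # observed minus fitted values

print(f"slope = {w1:.2f}, intercept = {b:.2f}")
print(f"sum of squared residuals = {np.sum(residuals ** 2):.2f}")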

What is Goodness-of-Fit?  

Linear regression is one form of Regression Analysis. It estimates an equation that minimizes the distance between the fitted line and all of the data points. Determining how well the model fits the data is crucial in a linear model. 

A general idea is that if the deviations between the observed values and the predicted values of the linear model are small and unbiased, the model fits the data well.  

In technical terms, “goodness-of-fit” is a mathematical measure that describes how well a model fits a set of observations, i.e. how small the differences between the observed values and the expected values are. This measure can be used in statistical hypothesis testing. 

How to assess Goodness-of-fit in a regression model? 

According to statisticians, if the differences between the observations and the predicted values tend to be small and unbiased, we can say that the model fits the data well. Unbiased in this context means that the fitted values are not systematically too high or too low anywhere in the observation space. 

As we have seen earlier, a linear regression model gives you the equation that produces the minimal difference between the observed values and the predicted values. In simpler terms, we can say that linear regression identifies the smallest sum of squared residuals possible for the dataset. 

Examining the residual plots is a crucial part of building a regression model, and it should be done before evaluating numerical measures of goodness-of-fit such as R-squared. Residual plots help you recognize a biased model by revealing problematic patterns.  

However, if you have a biased model, you cannot depend on the results. If the residual plots look good, you can assess the value of R-squared and other numerical outputs. 
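As a rough illustration (synthetic data; NumPy and matplotlib assumed available), a residual plot is simply the residuals plotted against the fitted values. For a well-specified model it should look like a random, patternless band around zero.

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 80)
y = 2 + 3 * x + rng.normal(0, 2, 80)

# Fit a straight line, then compute fitted values and residuals
slope, intercept = np.polyfit(x, y, deg=1)
fitted = slope * x + intercept
residuals = y - fitted

plt.scatter(fitted, residuals, alpha=0.7)
plt.axhline(0, color="red", linestyle="--")
plt.xlabel("Fitted values")
plt.ylabel("Residuals")
plt.title("Residuals vs fitted values (should look random)")
plt.show()

If the points fan out, curve, or cluster, the model is likely biased and its R-squared should not be trusted on its own.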

What is R-squared? 

In data science, R-squared (R²) is referred to as the coefficient of determination, or the coefficient of multiple determination in the case of multiple regression.  

In the linear regression model, R-squared acts as an evaluation metric for the scatter of the data points around the fitted regression line. It represents the percentage of variation in the dependent variable explained by the model.  

R-squared and the Goodness-of-fit 

R-squared is the proportion of variance in the dependent variable that can be explained by the independent variable.


The value of R-squared stays between 0 and 100%: 

  • 0% corresponds to a model that does not explain any of the variability of the response data around its mean. In this case, the mean of the dependent variable predicts the dependent variable just as well as the regression model does. 
  • On the other hand, 100% corresponds to a model that explains all of the variability of the response variable around its mean. 

If your R² value is large, your regression model has a better chance of fitting the observations. 

Although you can get essential insights about the regression model from this statistical measure, you should not depend on it for the complete assessment of the model. It does not give information about the nature of the relationship between the dependent and the independent variables.  

It also does not indicate the overall quality of the regression model. Hence, as a user, you should always analyze R² along with other measures and residual plots before drawing conclusions about the regression model. 
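For a quick sense of how R² is obtained in practice, the sketch below (assuming scikit-learn is installed; the data is synthetic) fits a linear model and reports R² through the model's score method.

import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
X = rng.uniform(0, 10, size=(100, 1))          # one independent variable
y = 4 + 2.5 * X[:, 0] + rng.normal(0, 3, 100)  # dependent variable with noise

model = LinearRegression().fit(X, y)
r2 = model.score(X, y)          # R² of the model on the data it was fit to
print(f"R-squared: {r2:.2%}")   # printed as a percentage between 0% and 100%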

Visual Representation of R-squared 

Plots of fitted values against observed values give a visual demonstration of R-squared: they illustrate how different R-squared values correspond to different amounts of scatter around the regression line.

[Figure: two fitted line plots, R² = 17% (left) and R² = 83% (right)]

As observed in the pictures above, the value of R-squared for the regression model on the left side is 17%, and for the model on the right it is 83%. The more variance a regression model accounts for, the closer the data points fall to the fitted regression line.  

However, a regression model with an R² of 100% is an ideal scenario that rarely, if ever, occurs in practice. In such a case, the predicted values equal the observed values, and all the data points fall exactly on the regression line.  

Interpretation of R-squared 

The simplest interpretation of R-squared is how well the regression model fits the observed data values. Let us take an example to understand this. 

Consider a model where the R² value is 70%. This means that the model explains 70% of the variation in the response data. Usually, when the R² value is high, it suggests a better fit for the model.  

The correctness of the statistical measure does not depend on R² alone but also on several other factors, such as the nature of the variables, the units in which the variables are measured, and so on. So, a high R-squared value is not always achievable for a regression model, and a high value can sometimes indicate problems too. 

A low R-squared value is generally a negative indicator for a model. However, if we consider the other factors, a model with a low R² value can still be a good predictive model. 

Calculation of R-squared 

R-squared can be evaluated using the following formula:

R² = SSregression / SStotal

Where: 

  • SSregression – Explained sum of squares due to the regression model. 
  • SStotal – The total sum of squares. 

The sum of squares due to regression assesses how much of the variability the fitted model explains, and the total sum of squares measures the overall variability in the data used in the regression model. 

Now, to understand the calculation of R-squared more concretely, let us come back to the earlier situation with two factors: the number of hours of study per day and the score in a particular exam. Here, the target variable is represented by the score, and the independent variable by the number of hours of study per day.  

In this case, we will need a simple linear regression model and the equation of the model will be as follows:  

ŷ = w₁x₁ + b

The parameters w₁ and b can be calculated by minimizing the squared error over all the data points. The following expression is called the least squares function:

minimize ∑(yᵢ – w₁x₁ᵢ – b)²
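This minimization can also be carried out directly. The toy sketch below (assuming SciPy is available; the data is synthetic) minimizes the same sum of squared errors numerically and recovers essentially the closed-form OLS estimates.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)
x = rng.uniform(0, 10, 60)              # hours of study per day
y = 30 + 6 * x + rng.normal(0, 5, 60)   # exam score with noise

def sum_squared_error(params):
    w1, b = params
    return np.sum((y - (w1 * x + b)) ** 2)   # ∑(yᵢ − w₁x₁ᵢ − b)²

result = minimize(sum_squared_error, x0=[0.0, 0.0])
w1_hat, b_hat = result.x
print(f"w1 ≈ {w1_hat:.2f}, b ≈ {b_hat:.2f}")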

How To Interpret R squared and Goodness of Fit in Regression Analysis  

Now, to calculate the goodness-of-fit, we need to calculate the variance:

var(u) = (1/n) ∑(uᵢ – ū)²

where n represents the number of data points. 

Now, R-squared measures the amount of variance in the target variable explained by the model, i.e. by the function of the independent variable. 

However, in order to achieve that, we need to calculate two things: 

  • Variance of the target variable: 

var(avg) = ∑(yᵢ – ȳ)²

  • Variance of the target variable around the best-fit line:

var(model) = ∑(yᵢ – ŷᵢ)²


Finally, we can calculate R-squared as follows (the 1/n factors cancel in the ratio, so they are omitted from the two expressions above):

R² = 1 – [var(model)/var(avg)] = 1 – [∑(yᵢ – ŷᵢ)² / ∑(yᵢ – ȳ)²] 
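Putting the pieces together, the following sketch (synthetic data; scikit-learn is used only as a cross-check) computes R² exactly as in the formula above and compares the result with sklearn.metrics.r2_score.

import numpy as np
from sklearn.metrics import r2_score

rng = np.random.default_rng(3)
x = rng.uniform(0, 10, 100)             # hours of study per day
y = 30 + 6 * x + rng.normal(0, 8, 100)  # exam score with noise

# Fit the simple linear model ŷ = w₁x₁ + b by ordinary least squares
w1, b = np.polyfit(x, y, deg=1)
y_hat = w1 * x + b

var_model = np.sum((y - y_hat) ** 2)    # variation around the best-fit line
var_avg = np.sum((y - y.mean()) ** 2)   # variation around the mean

r2_manual = 1 - var_model / var_avg
print(f"manual R-squared : {r2_manual:.4f}")
print(f"sklearn r2_score : {r2_score(y, y_hat):.4f}")   # the two should match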

Limitations of R-squared 

Some of the limitations of R-squared are: 

  • R-squared cannot be used to check if the coefficient estimates and predictions are biased or not. 
  • R-squared does not inform if the regression model has an adequate fit or not. 

To determine the biasedness of the model, you need to assess the residual plots. A good model can have a low R-squared value, whereas a model that does not have a proper goodness-of-fit can still have a high R-squared value.  

Low R-squared and High R-squared values 

Regression models with a low R² do not always pose a problem. There are some areas where you are bound to have low R² values. One such case is the study of human behavior: such models tend to have R² values of less than 50%. The reason is that predicting people is a more difficult task than predicting a physical process. 

You can still draw essential conclusions from a model with a low R² value when the independent variables of the model are statistically significant. Their coefficients represent the mean change in the dependent variable when the independent variable shifts by one unit. 

However, if you are working on a model to generate precise predictions, low R-squared values can cause problems. 
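To illustrate this point with a hedged sketch (toy, noisy data; statsmodels assumed available): the slope estimate below remains statistically significant and interpretable even though R² is modest, while the large residual scatter makes precise individual predictions difficult.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
x = rng.uniform(0, 10, 300)
y = 2 + 0.8 * x + rng.normal(0, 6, 300)   # weak signal, lots of noise

X = sm.add_constant(x)                    # add the intercept term
fit = sm.OLS(y, X).fit()

print(f"R-squared      : {fit.rsquared:.2f}")    # typically well below 0.5 here
print(f"slope estimate : {fit.params[1]:.2f}")   # close to the true value of 0.8
print(f"slope p-value  : {fit.pvalues[1]:.4f}")  # still highly significant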

Now, let us look at the other side of the coin. A regression model with a high R² value can lead to what statisticians call specification bias. This type of situation arises when the linear model is underspecified because it is missing important independent variables, polynomial terms, or interaction terms.  

To overcome this situation, you can produce random residuals by adding the appropriate terms or by fitting a non-linear model. 

Model overfitting and data mining techniques can also inflate the value of R². The models they generate might provide an excellent fit to the data at hand, but in reality the results tend to be completely deceptive. 
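A hedged illustration of that inflation (synthetic data; scikit-learn assumed available): a high-degree polynomial achieves a near-perfect R² on the data it was fit to, but a much worse R² on held-out data generated by the same simple process.

import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(9)
X = rng.uniform(0, 10, size=(60, 1))
y = 3 + 2 * X[:, 0] + rng.normal(0, 4, 60)   # a truly linear relationship plus noise

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

# An overly flexible model: a degree-12 polynomial fit by least squares
model = make_pipeline(PolynomialFeatures(degree=12), LinearRegression())
model.fit(X_tr, y_tr)

print(f"R-squared on training data: {model.score(X_tr, y_tr):.2f}")  # inflated
print(f"R-squared on test data    : {model.score(X_te, y_te):.2f}")  # much lower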

Conclusion 

Let us summarize what we have covered in this article so far: 

  • Regression Analysis and its importance 
  • Residuals and Goodness-of-fit 
  • R-squared: Representation, Interpretation, Calculation, Limitations 
  • Low and High R² values 

Although R-squared is a very intuitive measure of how well a regression model fits a dataset, it does not tell the complete story. If you want the full picture, you need to consider R² along with other statistical measures and residual plots. 

To learn more about the limitations of R-squared, you can look into Adjusted R-squared and Predicted R-squared, which provide different insights for assessing a model’s goodness-of-fit. You can also take a look at a different type of goodness-of-fit measure, i.e. the Standard Error of the Regression. 

