What is Regression Analysis? Types, Techniques, Examples


As a Data Science enthusiast, you might already know that a majority of business decisions these days are data-driven. However, it is essential to understand how to parse through the many kinds of data involved. One of the most important types of data analysis in this field is Regression Analysis. 

Regression Analysis is a form of predictive modeling technique mainly used in statistics. The term "regression" in this context was first coined by Sir Francis Galton, a cousin of Charles Darwin. The earliest form of regression, the method of least squares, was developed by Adrien-Marie Legendre and Carl Friedrich Gauss. 

Before getting into the what and how of regression analysis, let us first understand why regression analysis is essential. 

Why is regression analysis important? 

Regression Analysis is a statistical technique for evaluating the relationship between two or more variables.  

Regression Analysis helps enterprises to understand what their data points represent, and to use them wisely in coordination with different business analytical techniques in order to make better decisions. 

Regression Analysis helps an individual to understand how the typical value of the dependent variable changes when one of the independent variables is varied, while the other independent variables remain unchanged.  Therefore, this powerful statistical tool is used by Business Analysts and other data professionals for removing the unwanted variables and choosing only the important ones. 

The benefit of regression analysis is that it turns data crunching into better business decisions. A greater understanding of these variables can shape a business's success over the coming weeks, months, and years.  


The regression method of forecasting, as the name implies, is used for forecasting and for finding the causal relationship between variables. From a business point of view, the regression method of forecasting can be helpful for an individual working with data in the following ways: 

  • Predicting sales in the near and long term. 
  • Understanding demand and supply. 
  • Understanding inventory levels. 
  • Reviewing and understanding how other variables impact all of these factors. 

In addition, businesses can use regression methods to answer questions such as the following: 

  • Why did the customer service calls drop in the past months? 
  • What will sales look like in the next six months? 
  • Which ‘marketing promotion’ method to choose? 
  • Whether to expand the business or to create and market a new product. 

The ultimate benefit of regression analysis is to determine which independent variables have the most effect on a dependent variable. It also helps to determine which factors can be ignored and those that should be emphasized. 

Let us now understand what regression analysis is and its associated variables. 


What is regression analysis?

According to the renowned American mathematician John Tukey, “An approximate answer to the right problem is worth a good deal more than an exact answer to an approximate problem". This is precisely what regression analysis strives to achieve.  

Regression analysis is basically a set of statistical processes which investigates the relationship between a dependent (or target) variable and an independent (or predictor) variable. It helps assess the strength of the relationship between the variables and can also model the future relationship between the variables. 

Regression analysis is widely used for prediction and forecasting, which overlaps with Machine Learning. On the other hand, it is also used for time series modeling and finding causal effect relationships between variables. For example, the relationship between rash driving and the number of road accidents by a driver can be best analyzed using regression.  

Let us now understand regression with an example. 

Meaning of Regression

Let us understand the concept of regression with an example. 

Consider a situation where you conduct a case study on several college students. We will understand if students with high CGPA also get a high GRE score. 

Our first job is to collect the details of the GRE scores and CGPAs of all the students of a college in a tabular form. The GRE scores and the CGPAs are listed in the 1st and 2nd columns, respectively. 

To understand the relationship between CGPA and GRE score, we need to draw a scatter plot.  

Here, we can see a linear relationship between CGPA and GRE score in the scatter plot. This indicates that if the CGPA increases, the GRE scores also increase. Thus, it would also mean that a student with a high CGPA is likely to have a greater chance of getting a high GRE score. 

However, suppose a question arises such as "If the CGPA of a student is 8.51, what will the GRE score of that student be?" To answer it, we need to find the relationship between these two variables. This is the place where Regression plays its role. 

In a regression algorithm, we usually have one dependent variable and one or more independent variables, and we try to regress the dependent variable "Y" (in this case, GRE score) on the independent variable "X" (in this case, CGPA). In layman's terms, we are trying to understand how the value of "Y" changes with respect to a change in "X". 

Let us now understand the concept of dependent and independent variables. 

Dependent and Independent variables 

In data science, variables refer to the properties or characteristics of certain events or objects. 

There are mainly two types of variables in regression analysis, which are as follows: 

  • Independent variables – These variables are manipulated or are altered by researchers whose effects are later measured and compared. They are also referred to as predictor variables. They are called predictor variables because they predict or forecast the values of dependent variables in a regression model. 
  • Dependent variables – These variables measure the effect of the independent variables on the testing units. It is safe to say that dependent variables depend completely on the independent variables. They are also referred to as predicted variables, because their values are predicted or estimated from the independent or predictor variables. 

When an individual is looking for a relationship between two variables, he is trying to determine what factors make the dependent variable change. For example, consider a scenario where a student's score is a dependent variable. It could depend on many independent factors like the amount of study he did, how much sleep he had the night before the test, or even how hungry he was during the test.  

In data models, independent variables can have different names such as “regressors”, “explanatory variable”, “input variable”, “controlled variable”, etc. On the other hand, dependent variables are called “regressand,” “response variable”, “measured variable,” “observed variable,” “responding variable,” “explained variable,” “outcome variable,” “experimental variable,” or “output variable.” 

Below are a few examples to understand the usage and significance of dependent and independent variables in a wider sense: 

  • Suppose you want to estimate the cost of living of a person using a regression model. In that case, you need to take independent variables as factors such as salary, age, marital status, etc. The cost of living of a person is highly dependent on these factors. Thus, it is designated as the dependent variable. 
  • Another scenario is in the case of a student's poor performance in an examination. The independent variable could be factors, for example, poor memory, inattentiveness in class, irregular attendance, etc. Since these factors will affect the student's score, the dependent variable, in this case, is the student's score.  
  • Suppose you want to measure the effect of different quantities of nutrient intake on the growth of a newborn child. In that case, you need to consider the amount of nutrient intake as the independent variable. In contrast, the dependent variable will be the growth of the child, which can be calculated by factors such as height, weight, etc. 

Before moving on to regression lines, let us first look at how regression differs from classification. 

What is the difference between Regression and Classification?

Regression and Classification both come under supervised learning methods, which means that they use labelled training datasets to train their models and make future predictions. Thus, these two methods are often grouped under the same umbrella in machine learning.

However, the key difference between them is the output variable. In regression, the output tends to be numerical or continuous, whereas, in classification, the output is categorical or discrete in nature.  

Regression and Classification have certain different ways to evaluate the predictions, which are as follows: 

  • Regression predictions can be interpreted using root mean squared error, whereas classification predictions cannot. 
  • Classification predictions can be evaluated using accuracy, whereas regression predictions cannot be evaluated using the same. 

In conclusion, some algorithms, such as decision trees and neural networks, can be used for both regression and classification with small alterations. Other algorithms, however, are specific to one problem type: for example, linear regression for regression predictive modeling and logistic regression for classification predictive modeling. 

What is a Regression Line?

In the field of statistics, a regression line is a line that best describes the behaviour of a dataset, such that the overall distance from the line to the points (variable values) plotted on a graph is the smallest. In layman's words, it is a line that best fits the trend of a given set of data.  

Regression lines are mainly used for forecasting procedures. The significance of the line is that it describes the interrelation of a dependent variable “Y” with one or more independent variables “X”. It is used to minimize the squared deviations of predictions.  

If we take two variables, X and Y, there will be two regression lines: 

  • Regression line of Y on X: This gives the most probable Y values from the given values of X. 
  • Regression line of X on Y: This gives the most probable values of X from the given values of Y. 

The correlation between the variables X and Y depends on the distance between the two regression lines. The degree of correlation is higher if the regression lines are nearer to each other. In contrast, the degree of correlation will be lower if the regression lines are farther from each other.  

If the two regression lines coincide, i.e. only a single line exists, correlation tends to be either perfect positive or perfect negative. However, if the variables are independent, then the correlation is zero, and the lines of regression will be at right angles.  

Regression lines are widely used in the financial sector and business procedures. Financial Analysts use linear regression techniques to predict prices of stocks, commodities and perform valuations, whereas businesses employ regressions for forecasting sales, inventories, and many other variables essential for business strategy and planning. 

What is the Regression Equation? 

In statistics, the Regression Equation is the algebraic expression of the regression lines. In simple terms, it is used to predict the values of the dependent variables from the given values of independent variables.  

Let us consider one regression line, say Y on X and another line, say X on Y, then there will be one regression equation for each regression line: 

  • Regression Equation of Y on X: 

This equation depicts the variations in the dependent variable Y from the given changes in the independent variable X. The expression is as follows: 

Ye = a + bX 

Where,  

  • Ye is the dependent variable, 
  • X is the independent variable, 
  • a and b are the two unknown constants that determine the position of the line. 

The parameter "a" indicates the distance of the line above or below the origin, i.e. the level of the fitted line, whereas parameter "b" indicates the slope, i.e. the change in the value of the dependent variable Y for one unit of change in the independent variable X. 

The parameters "a" and "b" can be calculated using the least square method. According to this method, the line needs to be drawn through the plotted points such that, in mathematical terms, the sum of the squares of the vertical deviations of observed Y from the calculated values of Y is the least. In other words, the best-fitted line is obtained when ∑(Y − Ye)² is the minimum. 

To calculate the values of parameters “a” and “b”, we need to simultaneously solve the following algebraic equations: 

∑Y = Na + b∑X 

∑XY = a∑X + b∑X² 

  • Regression Equation of X on Y: 

This equation depicts the variations in the variable X for the given changes in the variable Y. The expression is as follows: 

Xe = a + bY  

Where,  

  • Xe is the dependent variable, 
  • Y is the independent variable, 
  • a and b are the two unknown constants that determine the position of the line. 

Again, in this equation, the parameter “a” indicates the distance of a line above or below the origin, i.e. the level of the fitted line, whereas parameter "b" indicates the slope, i.e. change in the value of the dependent variable X for a unit of change in the independent variable Y. 

To calculate the values of parameters “a” and “b” in this equation, we need to simultaneously solve the following two normal equations: 

∑X = Na + b∑Y 

∑XY = a∑Y + b∑Y² 

Please note that the regression lines can be completely determined only if we obtain the constant values “a” and “b”. 
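As a quick illustration, here is a minimal base-R sketch that estimates "a" and "b" by solving the two normal equations directly, using made-up X and Y values, and checks the result against R's built-in lm() fit: 

X <- c(1, 2, 3, 4, 5)                      # made-up independent variable values
Y <- c(2.1, 4.3, 6.2, 8.4, 10.1)           # made-up dependent variable values
N <- length(X)

# Normal equations in matrix form:
#   sum(Y)  = N*a      + b*sum(X)
#   sum(XY) = a*sum(X) + b*sum(X^2)
A   <- matrix(c(N, sum(X), sum(X), sum(X^2)), nrow = 2, byrow = TRUE)
rhs <- c(sum(Y), sum(X * Y))
ab  <- solve(A, rhs)        # ab[1] = a (intercept), ab[2] = b (slope)
ab

coef(lm(Y ~ X))             # the built-in least-squares fit gives the same values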

How does Linear Regression work?

Linear Regression is a Machine Learning algorithm that maps numeric inputs to numeric outputs by fitting a line to the data points. It is an approach to modeling the relationship between variables, which allows the model to predict outputs. 

Let us understand the working of a Linear Regression model using an example. 

Consider a scenario where a group of tech enthusiasts has created a start-up named Orange Inc. Now, Orange has been booming since 2016. On the other hand, you are a wealthy investor, and you want to know whether you should invest your money in Orange in the next year or not. 

Let us assume that you do not want to risk a lot of money, so you buy a few shares. Firstly, you study the stock prices of Orange since 2016, and you see the following figure: 


The figure indicates that Orange is growing at an amazing rate, with its stock price going from 100 dollars to 500 dollars in only three years. Since you want your investment to boom along with the company's growth, you want to invest in Orange in the year 2021. You assume that the stock price will fall somewhere around $500, since the trend is unlikely to go through a sudden change. 

Based on the information available on the stock prices of the last couple of years, you were able to predict what the stock price is going to be like in 2021.  

You just built a model in your head to predict the value of Y for a value of X that you have never observed. However, this mental method is not accurate, because you cannot specify exactly what the stock price will be in the year 2021. You just have an idea that it will probably be above 500 dollars. 

This is where Regression plays its role. The task of Regression is to find the line that best fits the data points on the plot so that we can calculate where the stock price is likely to be in the year 2021.  


Let us examine the regression line by understanding its significance. Extending this line forward, we find that the stock price of Orange is likely to be a little higher than 600 dollars by the year 2021. 

This example is quite oversimplified, so let us examine the process behind fitting such a line. 

Training the Regressor 

The example mentioned above is an example of Univariate Linear Regression, since we are trying to predict one dependent variable, Y, from a single independent variable, X. 

Any regression line on a plot is based on the formula: 

f(X) = MX + B  

Where, 

  • M is the slope of the line, 
  • B is the y-intercept that allows the vertical movement of the line, 
  • And X is the function’s input variable. 

In the field of Machine Learning, the formula is as follows: 

h(X) = W0 + W1X  

Where, 

  • W0 and W1 are the weights, 
  • X is the input variable, 
  • h(X) is the label or the output variable. 

Regression works by finding the weights W0 and W1 that lead to the best-fitting line for the input variable X. The best-fitted line is obtained in terms of the lowest cost. 

Now, let us understand what cost means here. 

The cost function

Depending upon the Machine Learning application, the cost could take different forms. However, in a generalized view, cost mainly refers to the loss or error that the regression model yields in its distance from the original training dataset. 

In a Regression model, the cost function is the Squared Error Cost: 

J(W0, W1) = (1/2n) Σ (h(Xi) − Ti)², where the sum runs from i = 1 to i = n    

Where, 

  • J(W0, W1) is the total cost of the model with weights W0 and W1, 
  • h(Xi) is the model’s prediction of the dependent variable Y at the feature value Xi, 
  • Ti is the actual y-value at index i, 
  • and n refers to the total number of data points in the dataset. 


The cost function is used to obtain the distance between the y-value the model predicted and the actual y-value in the data set. Then, the function squares this distance and divides it by the number of data points, resulting in the average cost. The 2 in the term ‘(1/2n)’ is merely to make the differentiation process in the cost function easier.  
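A minimal R sketch of this cost function, assuming X and T_actual are numeric vectors of equal length (the names and values below are made up for illustration): 

cost <- function(W0, W1, X, T_actual) {
  n    <- length(X)
  pred <- W0 + W1 * X                  # h(X) = W0 + W1*X
  sum((pred - T_actual)^2) / (2 * n)   # squared error, averaged and halved
}

X        <- c(1, 2, 3, 4)
T_actual <- c(3, 5, 7, 9)
cost(0, 2, X, T_actual)                # cost of the candidate line h(X) = 0 + 2*X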

Training the dataset 

Training a regression model uses a Learning Algorithm to find the weights W0 and W1 that will minimize the cost and plug them into the straight-line function to obtain the best-fitted line. The pseudo-code for the algorithm is as follows: 

Repeat until convergence { 
    temp0 := W0 - a.((d/dW0) J(W0,W1)) 
    temp1 := W1 - a.((d/dW1) J(W0,W1)) 
    W0 = temp0 
    W1 = temp1 
} 

Here, (d/dW0) and (d/dW1) refer to the partial derivatives of J(W0, W1) with respect to W0 and W1, respectively, and a is the learning rate.  

Working out the partial derivatives (averaged over the n data points) gives: 

  • (d/dW0) J(W0,W1) = (1/n) Σ (W0 + W1·Xi − Ti) 
  • (d/dW1) J(W0,W1) = (1/n) Σ (W0 + W1·Xi − Ti)·Xi 

Implementing the Gradient Descent Learning algorithm will result in a model with minimum cost. The weights that led to the minimum cost are dealt with as the final values for the line function h(X) = W0 + W1X.  
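Below is a runnable R sketch of the gradient descent loop above; the toy data, learning rate, and iteration count are arbitrary choices for illustration, not values from the article: 

X        <- c(1, 2, 3, 4, 5)
T_actual <- c(2, 4, 6, 8, 10)

W0 <- 0; W1 <- 0                  # initial weights
a  <- 0.01                        # learning rate
for (step in 1:10000) {
  pred  <- W0 + W1 * X
  grad0 <- mean(pred - T_actual)          # (d/dW0) J, averaged over the data points
  grad1 <- mean((pred - T_actual) * X)    # (d/dW1) J, averaged over the data points
  temp0 <- W0 - a * grad0
  temp1 <- W1 - a * grad1
  W0 <- temp0
  W1 <- temp1
}
c(W0, W1)    # should end up close to the best-fitting intercept and slope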

Goodness-of-Fit in a Regression Model 

Assessing goodness-of-fit is part of any regression analysis. Linear regression finds an equation that minimizes the distance between the fitted line and all of the data points, and determining how well the model fits the data is crucial in a linear model. 

The general idea is that if the deviations between the observed values and the predicted values of the linear model are small and unbiased, the model has well-fit data.  

In technical terms, "goodness-of-fit" describes how well a model fits a set of observations, i.e. the discrepancy between the observed values and the values expected under the model. This measure can be used in statistical hypothesis testing.

How do businesses use Regression Analysis? 

Regression Analysis is a statistical technique used to evaluate the relationship between a dependent variable and one or more independent variables. Organizations use regression analysis to understand the significance of their data points and use analytical techniques to make better decisions.

Business Analysts and Data Professionals use this statistical tool to delete unwanted variables and select the significant ones. There are numerous ways that businesses use regression analysis. Let us discuss some of them below. 

1. Decision-making

Businesses need to make better decisions to run smoothly and efficiently, and it is also necessary to understand the effects of the decision taken. They collect data on various factors such as sales, investments, expenditures, etc. and analyze them for further improvements. 

Organizations use the Regression Analysis method by making sense of the data and gathering meaningful insights. Business analysts and data professionals use this method to make strategic business decisions.

2. Optimization of business 

The main role of regression analysis is to convert the collected data into actionable insights. Organizations have moved away from old-school techniques like guesswork and untested hypotheses. They are now focusing on data-driven decision-making, which improves work performance across the organization. 

This analysis helps the management sectors in an organization to take practical and smart decisions. The huge volume of data can be interpreted and understood to gain efficient insights. 

3. Predictive Analysis 

Businesses make use of regression analysis to find patterns and trends. Business Analysts build predictions about future trends using historical data. 

Regression methods can also go beyond predicting the impact on immediate revenue. Using this method, you can forecast the number of customers willing to buy a service and use that data to estimate the workforce needed to run that service. 

Most insurance companies use regression analysis to calculate the credit health of their policyholders and the probable number of claims in a certain period. 

Predictive Analysis helps businesses to: 

  • Minimize costs 
  • Minimize the number of required tools 
  • Provide fast and efficient results 
  • Detect fraud 
  • Manage risk 
  • Optimize marketing campaigns 

4. Correcting errors 

Regression Analysis is not only used for predicting trends; it is also useful for identifying errors in judgement. 

Let us consider a situation where the executive of an organization wants to increase profits by making employees work extra hours. In such a case, a regression analysis of all the variables may conclude that increasing working hours beyond the existing schedule also increases operating expenses such as utilities and accounting expenditures, leading to an overall decrease in profit.   

Regression Analysis provides quantitative support for better decision-making and helps organizations minimize mistakes. 

5. New Insights 

Organizations generate a large amount of cluttered data that can provide valuable insights. However, this vast data is useless without proper analysis. 

Regression analysis can uncover relationships between variables and reveal patterns that were not previously considered in the model. 

For example, analyzing data from sales systems and purchase accounts will result in market patterns such as increased demand on certain days of the week or at certain times of the year. You can maintain optimal stock and personnel using the information before a demand spike arises. 

The guesswork gets eliminated by data-driven decisions. It allows companies to improve their business performance by concentrating on the significant areas with the highest impact on operations and revenue. 

Use cases of Regression Analysis

  • Pharmaceutical companies 

Pharmaceutical organizations use regression analysis to analyze quantitative stability data in order to estimate the retest period or shelf life of a product. 

In this method, we determine the nature of the relationship between an attribute and time, and use the analyzed data to decide whether the data should be transformed before a linear regression analysis or handled with non-linear regression analysis. 

  • Finance

Simple linear regression fitted by the Ordinary Least Squares (OLS) method provides a general rule for placing the line of best fit among the data points.  

This particular tool is used for forecasting and financial analysis. You can also use it with the Capital Asset Pricing Model (CAPM), which depicts the relationship between the risk of investing and the expected return. 

  • Credit Card 

Credit card companies use regression analysis to analyze various factors such as customer's risk of credit default, prediction of credit balance, expected consumer behaviour, and so on. With the help of the analyzed information, the companies apply specific EMI options and minimize the default among risky customers. 

When Should I Use Regression Analysis? 

Regression Analysis is mainly used to describe the relationships between a set of independent variables and the dependent variables. It generates a regression equation where the coefficients correspond to the relationship between each independent and dependent variable.  

Analyze a wide variety of relationships 

You can use the method of regression analysis to perform many things, for example: 

  • To model multiple independent variables. 
  • Include continuous and categorical variables. 
  • Use polynomial terms for curve fitting. 
  • Evaluate interaction terms to examine whether the effect of one independent variable is dependent on the value of another variable.  

Regression Analysis can untangle very critical problems where the variables are entwined. Consider yourself to be a researcher studying any of the following: 

  • What impact does socio-economic status and race have on educational achievement? 
  • Do education and IQ affect earnings? 
  • Do exercise habits and diet affect weight? 
  • Do drinking coffee and smoking cigarettes reduce the mortality rate? 
  • Does a particular exercise have an impact on bone density? 

These research questions create a huge amount of data that entwines numerous independent and dependent variables and question their influence on each other. It is an important task to untangle this web of related variables and find out which variables are statistically essential and the role of each of these variables. To answer all these questions and rescue us in this game of variables, we need to take the help of regression analysis for all the scenarios. 

Control the independent variables 

Regression analysis describes how the changes in each independent variable are related to the changes in the dependent variable, while statistically controlling for every other variable in the regression model. 

In the process of regression analysis, it is crucial to isolate the role of each variable. Consider a scenario where you participated in an exercise intervention study. You aimed to determine whether the intervention was responsible for increasing the subject's bone mineral density. To achieve an outcome, you need to isolate the role of exercise intervention from other factors that can impact the bone density, which can be the diet you take or any other physical activity. 

To perform this task, you need to reduce the effect of the unsupportive variables. Regression analysis estimates the effect that a change in one independent variable has on the dependent variable while all other independent variables are held constant. This particular process allows you to understand each independent variable's role without interference from the other variables in the regression model. 

Now, let us understand how regression can help control the other variables in the process. 

According to a recent study on the effect of coffee consumption on mortality, the initial results depicted that the higher the intake of coffee, the higher is the risk of death. However, researchers did not include the fact that most coffee drinkers smoke in their first model. After smoking was included in the model, the regression results were quite different from the initial results. It depicted that coffee intake lowers the risk of mortality while smoking increases it. 

This model isolates the role of each variable while holding the other variables constant. You can examine the effect of coffee intake while controlling the smoking factor. On the other hand, you can also look at smoking while controlling for coffee intake. 

This particular example shows how omitting a significant variable can produce misleading results and leave the analysis uncontrolled. This warning is mainly applicable to observational studies, where the effects of omitted significant variables can be unbalanced. This omitted variable bias can be minimized through randomization, since true experiments tend to distribute the effects of such variables equally across groups. 

What are Residuals in Regression Analysis? 

Residuals identify the deviation of observed values from the expected values. They are also referred to as error or noise terms. They give an insight into how good our model is against the actual values, even though residuals themselves have no direct real-life interpretation. 

Calculating the real values of intercept, slope, and residual terms can be a complicated task. However, the Ordinary Least Square (OLS) regression technique can help us speculate on an efficient model.  The technique minimizes the sum of the squared residuals. With the help of the residual plots, you can check whether the observed error is consistent with stochastic error (differences between the expected and observed values must be random and unpredictable). 

What are the Linear model assumptions in Regression Analysis? 

Regression Analysis is the first step in the process of predictive modeling. It is quite easy to implement, and its syntax and parameters do not create any kind of confusion. However, the purpose of regression analysis is not just solved by running a single line of code. It is much more than that. 

The function plot(model_name) returns four diagnostic plots in the R programming language. Each of these plots provides essential information about the dataset. Most beginners in the field are unable to interpret this information, but once you understand these plots, you can bring important improvements to your regression model. 

For significant improvements in your regression model, it is also crucial to understand the assumptions you need to take in your model and how you can fix them if any assumption gets violated. 

The four assumptions that should be met before conducting linear regression are as follows: 

  1.  Linear Relationship: A linear relationship exists between the independent variable, x, and the dependent variable, y.  
  2.  Independence: The residuals in linear regression are independent. In other words, there is no correlation between consecutive residuals in time series data. 
  3.  Homoscedasticity: Residuals have constant variance at every level of X. 
  4.  Normality: The residuals of the model are normally distributed. 

Assumption 1: Linear Relationships 

Explanation 

The first assumption in Linear regression is that there is a linear relationship between the independent variable X and the dependent variable Y. 

How to determine if this assumption is met 

The quickest and easiest way to detect this assumption is by creating a scatter plot of X vs Y. By looking at the scatter plot, you can have a visual representation of the linear relationship between the two variables. If the points in the plot could fall along a straight line, then there exists some type of linear relationship between the variables, and this assumption is met. 

For example, consider this first plot below. The points in the plot look like they fall roughly on a straight line, which indicates that there exists a linear relationship between X and Y: 


However, there doesn’t appear to be a linear relationship between X and Y in this second plot below:  

And in this third plot, there appears to be a clear relationship between X and Y, but not a linear one:


What to do if this assumption is violated 

If you create a scatter plot between X and Y and do not find any linear relationship between the two variables, then you can do two things: 

  • You can apply a non-linear transformation to the dependent or independent variables. Common examples might include taking the log, the square root, or the reciprocal of the independent and dependent variable. 
  • You can add another independent variable to the regression model. If the plot of X vs Y has a parabolic shape, then adding X² as an additional independent variable in the linear regression model might make sense (see the sketch after this list). 
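A small R sketch of both remedies, using made-up data with a parabolic shape (the variable names are placeholders, not from the article's examples): 

x <- seq(-3, 3, by = 0.1)
y <- 4 + x^2 + rnorm(length(x), sd = 0.2)     # curved relationship plus noise

linear_fit    <- lm(y ~ x)                    # poor fit: assumes a straight line
log_fit       <- lm(log(y) ~ x)               # remedy 1: non-linear transformation of y
quadratic_fit <- lm(y ~ x + I(x^2))           # remedy 2: add X^2 as another independent variable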

Assumption 2: Independence 

Explanation 

The second assumption of linear regression is that the residuals should be independent. This is particularly relevant when working with time-series data: ideally, we do not want any pattern among consecutive residuals. For example, in a time series model, the residuals should not grow steadily along with time.  

How to determine if this assumption is met 

To determine if this assumption is met, we can plot the residuals against time and inspect the residual time series plot. In an ideal plot, the residual autocorrelations should fall within the 95% confidence bands around zero, located at about ±2/√n, where n denotes the sample size.  

You can also perform the Durbin-Watson test to formally examine if this assumption is met. 
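For example, in R you could check this assumption roughly as follows; the model below is fitted on the built-in cars dataset purely for illustration, and the Durbin-Watson test assumes the lmtest package is installed: 

model <- lm(dist ~ speed, data = cars)    # illustrative model on a built-in dataset

plot(residuals(model), type = "b")        # residuals in observation order
acf(residuals(model))                     # autocorrelations with approximate 95% confidence bands

# install.packages("lmtest")              # assumed to be available
lmtest::dwtest(model)                     # Durbin-Watson test for serial correlation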

What to do if this assumption is violated 

If this assumption is violated, you can do three things which are as follows: 

  • If there is a positive serial correlation, you can add lags of the independent variable or dependent variable to the regression model. 
  • If there is a negative serial correlation, check that none of your variables is over-differenced. 
  •  If there is a seasonal correlation, consider adding a seasonal dummy variable into your regression model.  

Assumption 3: Homoscedasticity

Explanation  

The third assumption of linear regression is that the residuals should have constant variance at every level of X. This property is called homoscedasticity. When homoscedasticity is not present, the residuals suffer from heteroscedasticity. 

The outcome of the regression analysis becomes hard to trust when heteroscedasticity is present in the model. It increases the variance of the regression coefficient estimates, but the model does not recognize this fact. This can make the model declare that a term is statistically significant when, in fact, it is not. 

How to determine if this assumption is met 

To determine if this assumption is met, we need a scatter plot of fitted values vs residuals. To obtain it, you first need to fit a regression model to the data set.  

Below is a scatterplot showing a typical fitted value vs residual plot in which heteroscedasticity is present: 


You can observe how the residuals become much more spread out as the fitted values get larger. The “cone” shape is a classic sign of heteroscedasticity:  


What to do if this assumption is violated 

If this assumption is violated, you can do three things which are as follows: 

  • Transform the dependent variable: The most common transformation is simply taking the log of the dependent variable. For example, if you are using population size as an independent variable to predict the number of flower shops in a city, you can instead use population size to predict the log of the number of flower shops. This often makes the heteroscedasticity go away.  
  • Redefine the dependent variable: One common way is to use a rate rather than the raw value. Consider the previous example. In that case, use population size to predict the number of flower shops per capita instead. This reduces the variability that naturally occurs among larger populations.  
  • Use weighted regression: The third way to fix heteroscedasticity is to use weighted regression. In this regression method, we assign a weight to each data point depending on the variance of its fitted value, giving small weights to data points having higher variances, which shrinks their squared residuals. When the proper weights are used, the problem of heteroscedasticity gets eradicated (a small R sketch of these remedies follows this list). 
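Here is a rough R sketch of the three remedies on simulated data in which the noise grows with x; the variable names and the choice of weights are illustrative assumptions, not part of the article's example: 

set.seed(42)
x <- runif(200, 1, 100)
y <- 2 + 0.5 * x + rnorm(200, sd = 0.1 * x)    # error variance grows with x -> heteroscedasticity

log_model      <- lm(log(y) ~ x)               # 1. transform the dependent variable
rate_model     <- lm(I(y / x) ~ x)             # 2. redefine the dependent variable as a rate
weighted_model <- lm(y ~ x, weights = 1 / x^2) # 3. weighted regression (weights assumed known here)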

Assumption 4: Normality 

Explanation 

The last assumption is that the residuals of the model should be normally distributed. 

How to determine if this assumption is met 

There are two common ways to determine whether this assumption is met: 

1. Use Q-Q plots to examine the assumption visually. Also known as the quantile-quantile plot, it is used to determine whether or not the residuals of the regression model follow a normal distribution. The normality assumption is achieved if the points on the plot roughly form a straight diagonal line as follows: 


However, if the residuals clearly deviate from a straight diagonal line, as in the Q-Q plot below, they do not follow a normal distribution:  


2. Some other formal statistical tests to check the normality assumption are Shapiro-Wilk, Kolmogorov-Smirnov, Jarque-Barre, and D'Agostino-Pearson.  

These tests, however, have a limitation: with large sample sizes they become very sensitive and often conclude that the residuals are not normal even when the deviation is minor. 

Therefore, graphical techniques like Q-Q plots are often easier and preferable for checking the normality assumption. 
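In R, both checks can be done in a couple of lines; the model below is fitted on the built-in cars dataset just to have residuals to inspect: 

model <- lm(dist ~ speed, data = cars)

qqnorm(residuals(model))            # Q-Q plot of the residuals
qqline(residuals(model))            # reference line; points close to it suggest normality

shapiro.test(residuals(model))      # formal test; a small p-value suggests non-normal residuals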

What to do if this assumption is violated

If this assumption is violated, you can do two things which are as follows: 

  • Firstly, examine whether outliers are present; if they exist, make sure they are real values and not data errors. Also, verify that any outliers are not having an outsized impact on the distribution. 
  • Secondly, you can apply a non-linear transformation to the independent and/or dependent variables. Common examples include taking the log, the square root, or the reciprocal of the independent and/or dependent variable. 

How to perform a simple linear regression?

The formula for a simple linear regression is: 

Y = B0 + B1X + e 

Where, 

  • Y refers to the predicted value of the dependent variable Y for any given value of the independent variable X. 
  • B0 denotes the intercept, i.e. the predicted value of y when the x is 0. 
  • B1 denotes the regression coefficient, i.e. how much we expect the value of y to change as the value of x increases. 
  • X refers to the independent variable (the variable we expect is influencing y). 
  • e denotes the error estimate, i.e. how much variation exists in our regression coefficient estimate. 

The Linear regression model's task is to find the best-fitted line through the data by looking out for the regression coefficient B1 that minimizes the total error estimate e of the model. 

Simple linear regression in R 

R is a free, powerful statistical programming language that is widely used by data professionals. Let us consider a dataset of income and happiness that we will use to perform regression analysis.

The first task is to load the income.data dataset into the R environment, and then generate a linear model describing the relationship between income and happiness by the command as follows: 

income.happiness.lm <- lm(happiness ~ income, data = income.data) 

The code above takes the collected data using data = income.data. Then, it estimates the impact of the independent variable income on the dependent variable happiness using the linear model function lm().  

Interpreting the results

To obtain and visualize the results of the simple linear regression model, you can use the summary() function in R by executing the following code: 

summary(income.happiness.lm) 

What this function does is it takes the essential factors from the linear model and puts them into a tabular form, which looks like this: 


According to the output table, the first thing it does is repeat the formula used to generate the results (‘Call’). The next thing it does is summarize the model residuals (‘Residuals’), which explains how well the model fits the real data. 

Next, you can see the Coefficients table. The first row of this table provides the estimates of the y-intercept, and the second row provides the regression coefficient of the model.

The first row of the Coefficients’ table is labelled (Intercept), the Y-intercept of the regression equation, with a value of 0.20. Now, if you want to predict the happiness values across the range of observed values of income, you need to insert this into your regression equation : 

happiness = 0.20 + 0.71*income ± 0.018  

The next row in the ‘Coefficients’ table is income, which describes the estimated effect of income on reported happiness. 

The Estimate column gives the estimated effect, also called the regression coefficient. The value in the table (0.713) indicates that for every unit increase in income, there is a corresponding 0.71-unit increase in reported happiness. 

The column of Std. Error displays the standard error of the estimate, which shows how much variation there is in the estimate of the relationship between income and happiness. 

The t value column displays the test statistic. The test statistic used in a  linear regression model is the t-value from a two-sided t-test unless specified otherwise. The results depend on the test statistic. The larger the test statistic, the less probable that our results occurred by chance. 

The Pr(>| t |) column displays the p-value, which tells us how likely we would be to see the estimated effect of income on happiness if the null hypothesis of no effect were true. 

We can reject the null hypothesis since the p-value is very low (p < 0.001), and finally, we can conclude that income has a statistically significant effect on happiness. 

The most important thing here in the linear regression model is the p-value. In this example, it is quite significant (p < 0.001), which shows that this model is a good fit for the observed data. 

Presenting the results 

While presenting your results, you should include the regression coefficient, standard error of the estimate, and the p-value. You should also interpret your numbers so that readers can have a clear understanding of the regression coefficient: 

A significant relationship (p < 0.001) has been found between income and happiness (regression coefficient = 0.71 ± 0.018), with a 0.71-unit increase in reported happiness for every $10,000 increase in income. 

For a simple linear regression, you can simply plot the observations on the x and y-axis of a scatter plot and then include the regression line and regression function.
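One way to produce such a plot is with the ggplot2 package (assumed to be installed); this sketch also assumes the income.data columns are named income and happiness, as in the code above, and the text position is arbitrary: 

library(ggplot2)

ggplot(income.data, aes(x = income, y = happiness)) +
  geom_point() +                               # the observations
  geom_smooth(method = "lm", se = FALSE) +     # the fitted regression line
  annotate("text", x = 4, y = 1.5,             # the regression function (position chosen arbitrarily)
           label = "happiness = 0.20 + 0.71 * income")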


What is multiple regression analysis?

Multiple Regression is an extension of simple linear regression and is used to estimate the relationship between two or more independent variables and one dependent variable. 

You can perform multiple regression analysis to know: 

  • The strength of the relationship between one or more independent variables and one dependent variable. For example, you can use it to understand whether the exam performance can be predicted based on revision time, test anxiety, lecture attendance, and gender.  
  • The overall fit, i.e. variance of the model and the relative impact of each of the predictors to the total variance explained. For example, you might want to know how much of the variation in the student’s exam performance can be understood by revision time, test anxiety, lecture attendance, gender, and the relative impact of each independent variable in explaining the variance. 

How to perform multiple linear regression? 

The formula for multiple linear regression is: 

Y = B0 + B1X1 + … + BnXn + e 

Where, 

  • Y refers to the predicted value of the dependent variable Y for any given value of the independent variable X. 
  • B0 denotes the intercept, i.e. the predicted value of y when the x is 0. 
  • B1X1 denotes the regression coefficient (B1) of the first independent variable (X1), i.e. how much we expect the value of Y to change as the value of X1 increases. 
  • ... does the same for all the independent variables we want to test. 
  • BnXn refers to the regression coefficient of the last independent variable 
  • e denotes the error estimate of the model, i.e. how much variation exists in our estimate of the regression coefficient. 

It is the task of the Multiple Linear regression model to find the best-fitted line through the data by calculating the following three things: 

  • The regression coefficients will lead to the least error in the overall multiple regression model. 
  • The t-statistic of the overall regression model. 
  • The associated p-value  

The multiple regression model also calculates the t-statistic and p-value for each regression coefficient. 

Multiple linear regression in R 

Let us consider a dataset of heart disease rates and other factors that affect heart health to perform a multiple regression analysis. 

The first task is to load the heart.data dataset into the R environment, and then generate a linear model describing the relationship between heart disease and the rates of biking to work and smoking with the command as follows: 

heart.disease.lm<-lm(heart.disease ~ biking + smoking, data = heart.data) 

The code above takes the collected data using data = heart.data. Then, it estimates the impact of the independent variables biking and smoking on the dependent variable heart disease using the linear model function lm().  

Interpreting the results

To obtain and visualize the results of the multiple linear regression model, you can use the summary() function in R by executing the following code: 

summary(heart.disease.lm) 

What this function does is it takes the essential factors from the linear model and puts them into a tabular form, which looks like this:


According to the output table, the first thing it does is repeat the formula used to generate the results (‘Call’). The next thing it does is summarize the model residuals (‘Residuals’), which explains how well the model fits the real data.  

If the residuals are roughly centred around zero and with a similar spread on either side, then the model probably satisfies the assumption of homoscedasticity. 

Next, you can see the ‘Coefficients’ table. The first row of this table provides the estimate of the y-intercept, and the following rows provide the regression coefficients of the model. 

The first row of the ‘Coefficients’ table is labelled (Intercept), the Y-intercept of the regression equation. Now, if you want to predict heart disease rates across the observed ranges of biking and smoking, you need to insert these estimates into your regression equation : 

heart disease = 15 + (-0.2*biking) + (0.178*smoking) ± e

The Estimate column gives the estimated effects, i.e. the regression coefficients. The estimates in the table tell us that for every 1% increase in biking to work, there is an associated 0.2% decrease in heart disease, and for every 1% increase in smoking, there is an associated 0.17% increase in heart disease. 

The column of Std. Error displays the standard error of the estimate, which shows how much variation there is in the estimate of the relationship between each predictor and heart disease. 

The t value column displays the test statistic. The test statistic used in a  linear regression model is the t-value from a two-sided t-test unless specified otherwise. The results depend on the test statistic. The larger the test statistic, the less probable that our results occurred by chance. 

The Pr(>| t |) column displays the p-value, which tells us how likely we would be to see the estimated effects of biking and smoking on heart disease if the null hypothesis of no effect were true. 

We can reject the null hypothesis since the p-value is very low (p < 0.001), and finally, we can conclude that both - biking to work and smoking - have influenced rates of heart disease. 

The most important thing here in the linear regression model is the p-value. In this example, it is quite significant (p < 0.001), which shows that this model is a good fit for the observed data. 

Presenting the results 

While presenting your results, you should include the regression coefficient, standard error of the estimate, and the p-value. You should also interpret your numbers in the proper context so that readers can have a clear understanding of the regression coefficient:  

In our survey of 500 towns, we found significant relationships between the frequency of biking to work and the frequency of heart disease and the frequency of smoking and heart disease (p < 0.001 for each). Specifically, we found a 0.2% decrease (± 0.0014) in the frequency of heart disease for every 1% increase in biking and a 0.178% increase (± 0.0035) in the frequency of heart disease for every 1% increase in smoking. 

For multiple linear regression, you can simply plot the observations on the X and Y-axis of a scatter plot and then include the regression line and regression function: 


In this example, we have calculated the predicted values of the dependent variable heart disease across the observed values for the percentage of people biking to work. 

However, to include the effect of smoking on the dependent variable heart disease, we had to calculate the predicted values while holding the variable smoking constant at the minimum, mean, and maximum observed smoking rates. 
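One possible way to compute those predicted values with predict(), assuming heart.data has the biking and smoking columns used above: 

plotting.data <- expand.grid(
  biking  = seq(min(heart.data$biking), max(heart.data$biking), length.out = 30),
  smoking = c(min(heart.data$smoking), mean(heart.data$smoking), max(heart.data$smoking))
)
plotting.data$predicted.heart.disease <- predict(heart.disease.lm, newdata = plotting.data)
head(plotting.data)     # predicted heart disease across biking, at min/mean/max smoking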

What is R-squared in Regression Analysis? 

In data science, R-squared (R2) is the coefficient of determination or the coefficient of multiple determination in case of multiple regression.  

In the linear regression model, R-squared acts as an evaluation metric to evaluate the scatter of the data points around the fitted regression line. It represents the percentage of variation in the dependent variable that the model explains. 

R-squared and the Goodness-of-fit 

R-squared is the proportion of variance in the dependent variable that the independent variable can explain.


The value of R-squared stays between 0 and 100%: 

  • 0% corresponds to a model that explains none of the variability of the response data around its mean; such a model predicts no better than simply using the mean of the dependent variable. 
  • On the other hand, 100% corresponds to a model that explains all the variability of the response variable around its mean. 

If your value of R2 is large, you have a better chance of your regression model fitting the observations.

Although you get essential insights about the regression model in this statistical measure, you should not depend on it for the complete assessment of the model. It lacks information about the relationship between the dependent and the independent variables. 

It also does not inform you about the quality of the regression model. Hence, as a user, you should always analyze R2 together with other measures and then derive conclusions about the regression model. 

Visual Representation of R-squared 

Plotting fitted values against observed values graphically illustrates how R-squared values represent the scatter around the regression line.  


For example, compare two such models, one with an R-squared of 17% and another with an R-squared of 83%. When a model accounts for more of the variance, the data points fall closer to the fitted regression line.  

However, a regression model with an R2 of 100% is an ideal scenario that is practically impossible. In such a case, the predicted values would equal the observed values, causing all the data points to fall exactly on the regression line.  

Interpretation of R-squared 

The simplest interpretation of R-squared is how well the regression model fits the observed data values. Let us look at an example to understand this. 

Consider a model where the R2 value is 70%. This would mean that the model explains 70% of the variation in the observed data. Usually, a higher R2 value suggests a better fit for the model. 

However, the quality of a model does not depend only on R2; it also depends on several other factors, such as the nature of the variables, the units in which the variables are measured, and so on. So a high R-squared value is not always desirable for the regression model and can indicate problems too.

A low R-squared value is a negative indicator for a model in general. However, if we consider the other factors, a low R2 value can also occur in a good predictive model. 

Calculation of R-squared 

R- squared can be evaluated using the following formula:  

R2 = SSregression / SStotal 

Where: 

  • SSregression – Explained sum of squares due to the regression model. 
  • SStotal – The total sum of squares. 

The sum of squares due to regression assesses how well the model represents the fitted data. The total sum of squares measures the variability in the data used in the regression model.

Now let us come back to the earlier situation where we have two factors: the number of hours of study per day and the score in a particular exam to understand the calculation of R-squared more effectively. Here, the target variable is represented by score and the independent variable by the number of study hours per day.  


In this case, we will need a simple linear regression model and the equation of the model will be as follows:  

ŷ = w1x1 + b  

The parameters w1 and b can be calculated by minimizing the squared error over all the data points. The following expression is the least squares objective:

minimize ∑(yi − w1x1i − b)² 


Now, R-squared calculates the amount of variance of the target variable explained by the model, i.e. function of the independent variable. 

However, to achieve that, we need to calculate two things: 

  • Variance of the target variable: 

var(avg) = ∑(yi − ȳ)² 

  • The variance of the target variable around the best-fit line: 

var(model) = ∑(yi − ŷi)²


Finally, we can calculate the equation of R-squared as follows:  

R2 = 1 − [var(model) / var(avg)] = 1 − [∑(yi − ŷi)² / ∑(yi − ȳ)²]    
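The following R sketch computes R-squared by hand for the study-hours example and checks it against lm(); the hours and score values are made up for illustration: 

hours <- c(1, 2, 3, 4, 5, 6, 7, 8)
score <- c(35, 42, 50, 55, 61, 68, 74, 80)

model <- lm(score ~ hours)
pred  <- fitted(model)

var_model <- sum((score - pred)^2)          # variance around the best-fit line
var_avg   <- sum((score - mean(score))^2)   # variance around the mean
r_squared <- 1 - var_model / var_avg

r_squared
summary(model)$r.squared                    # should match the hand-computed value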

What are the different types of regression analysis?   

Other than simple linear regression and multiple linear regression, there are mainly 5 types of regression techniques. Let us discuss them one by one.  

Polynomial Regression

In a polynomial regression technique, the power of the independent variable has to be more than 1. The expression below shows a polynomial equation: 

y = a + bx²  

In this regression technique, the best-fitted line is a curve, rather than a straight line, that fits the data points. 


An important point to keep in mind while performing polynomial regression is that fitting a polynomial of a higher degree to get a lower error can result in overfitting. You should always plot the relationship to see the fit and make sure that the curve suits the nature of the problem. An example to illustrate how plotting can help: 
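As a rough sketch, a quadratic polynomial regression and its plot can be produced in R on made-up curved data as follows: 

set.seed(7)
x <- seq(0, 10, by = 0.25)
y <- 3 + 2 * x^2 + rnorm(length(x), sd = 5)     # made-up curved data

poly_model <- lm(y ~ poly(x, 2, raw = TRUE))    # polynomial regression of degree 2

plot(x, y)                                      # plotting shows whether the curve suits the data
lines(x, fitted(poly_model), col = "red")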


Logistic Regression 

The logistic regression technique is used when the dependent variable is discrete in nature. For example, 0 or 1, true or false, etc. The target variable in this regression can have only two values and the relation between the target variable and the independent variable is denoted by a sigmoid curve. 


To measure the relationship between the target variable and the independent variables, the logit function is used. The expression below shows a logistic equation: 

logit(p) = ln(p/(1 – p)) = b0 + b1X1 + b2X2 + b3X3 + … + bkXk 

Where,  

p denotes the probability of occurrence of the event of interest. 
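
As a minimal sketch, assuming scikit-learn is available, the snippet below fits a logistic regression to an invented pass/fail outcome; the predicted probability follows the sigmoid of b0 + b1x described above.

import numpy as np
from sklearn.linear_model import LogisticRegression

hours = np.array([[1], [2], [3], [4], [5], [6], [7], [8]], dtype=float)  # study hours
passed = np.array([0, 0, 0, 1, 0, 1, 1, 1])                              # binary target: fail/pass

model = LogisticRegression()
model.fit(hours, passed)

# Probability of [fail, pass] for a student studying 4.5 hours per day
print(model.predict_proba([[4.5]]))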

Ridge Regression 

The Ridge Regression technique is usually used when there is a high correlation between the independent variables. With multicollinear data, the least squares estimates remain unbiased, but their variances are large, so they can fall far from the true values.  

Ridge regression deals with this by deliberately introducing a small amount of bias, via a penalty term added to the least squares equation. In return, the variance of the estimates drops and the model becomes less susceptible to overfitting. 

The expression below shows a ridge regression equation: 

β = (X^{T}X + λ*I)^{-1}X^{T}y 

The lambda (λ) in the equation is the penalty term that addresses the issue of multicollinearity. 
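
The closed-form expression above can be evaluated directly. The sketch below does this with NumPy on small, made-up, nearly collinear data; it is an illustration of the formula, not a production implementation.

import numpy as np

X = np.array([[1.0, 2.0],
              [2.0, 4.1],
              [3.0, 6.2],
              [4.0, 7.9]])          # two nearly collinear predictor columns
y = np.array([3.0, 6.1, 9.2, 11.8])
lam = 1.0                           # the lambda penalty that counters multicollinearity

# beta = (X'X + lambda*I)^(-1) X'y
beta = np.linalg.inv(X.T @ X + lam * np.eye(X.shape[1])) @ X.T @ y
print("ridge coefficients:", beta)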

Lasso Regression 

Lasso Regression is one of the types of regression in machine learning that performs both regularization and feature selection. It penalizes the absolute size of the regression coefficients, which pushes the coefficient values towards zero.


The feature selection method in Lasso Regression allows the selection of a set of features from the dataset to build the model. Only the required features are used in this regression, while others are made zero. This helps in avoiding overfitting in the model.  

If the independent variables are highly collinear, then this regression technique takes only one variable and makes other variables shrink to zero. 

The expression below shows a lasso regression equation: 

minimize (1/N) ∑(yi – α – xiᵀβ)²   subject to   ∑|βj| ≤ t 

where α is the intercept, β is the vector of regression coefficients, and t is the bound that restricts the total absolute size of the coefficients. 
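
As a minimal sketch, assuming scikit-learn's Lasso as one common implementation, the snippet below shows the shrinking effect on synthetic data in which only two of five features actually matter; the remaining coefficients end up at (or very near) zero.

import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.1, size=100)  # only features 0 and 1 matter

model = Lasso(alpha=0.1)
model.fit(X, y)
print("lasso coefficients:", model.coef_)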

Bayesian Regression

In the Bayesian Regression method, Bayes' theorem is used to determine the values of the regression coefficients. In this linear regression technique, the posterior distribution of the coefficients is evaluated rather than a single least-squares estimate.  

Bayesian Linear Regression is closely related to both Linear Regression and Ridge Regression, but it is more stable than simple Linear Regression. 

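A minimal sketch, assuming scikit-learn's BayesianRidge as one common implementation, is shown below on synthetic data; notice that the prediction comes with an uncertainty estimate, reflecting the posterior distribution mentioned above.

import numpy as np
from sklearn.linear_model import BayesianRidge

rng = np.random.default_rng(1)
X = rng.uniform(0, 10, size=(50, 1))
y = 1.5 * X.ravel() + 2 + rng.normal(scale=1.0, size=50)

model = BayesianRidge()
model.fit(X, y)

mean_pred, std_pred = model.predict([[5.0]], return_std=True)
print(f"prediction at x=5: {mean_pred[0]:.2f} +/- {std_pred[0]:.2f}")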

What are the terminologies used in Regression Analysis? 

When trying to understand the outcome of regression analysis, it is important to understand the key terminology used to interpret that information.  

The regression analysis terms most commonly used are described below: 

  • Estimator

An estimator is a rule or algorithm for generating estimates of parameters from a relevant dataset. 

  • Bias

An estimator is said to be unbiased when its expectation equals the value of the parameter being estimated. Conversely, if its expectation differs from the value of the parameter being estimated, it is said to be biased. 

  • Consistency

An estimator is consistent if the estimates it produces converge on the true parameter value as the sample size increases without limit. For example, consider an estimator that produces estimates θ̂ of some parameter θ, and let ε be a small positive number. If the estimator is consistent, we can make the probability that |θ̂ – θ| < ε as close to 1.0 as we like by drawing a sufficiently large sample.  

  • Efficiency

An estimator “A” is said to be more efficient than an estimator “B” when “A” has a smaller sampling variance, i.e. if the specific values of “A” are more tightly clustered around their expectation. 

  • Standard error of the Regression (SER)

It is an estimate of the standard deviation of the error term in a regression model. 

  • Standard error of regression coefficient

It is an estimate of the standard deviation of the sampling distribution of a particular coefficient estimate. 

  • P-value

The p-value is the probability, assuming the null hypothesis is true, of drawing sample data that are as adverse to the null as the data actually drawn, or more so. When the p-value is small, there are two possibilities: either a low-probability, unrepresentative sample was drawn, or the null hypothesis is false. 

  • Significance level

For a hypothesis test, the significance level is the threshold below which a p-value leads to rejection of the null hypothesis. If the significance level is 1%, the null is rejected if and only if the p-value for the test is less than 0.01. The significance level can also be defined as the probability of making a Type I error, i.e. rejecting a true null hypothesis. 

  • Multicollinearity: 

It is a situation where there is a high degree of correlation among the independent variables in a regression model; in other words, where some of the X variables are close to being linear combinations of other X variables. Multicollinearity leads to large standard errors, so the regression model cannot produce precise parameter estimates. The problem matters mainly when estimating causal influences.

  • T-test

The t-test is a common test of the null hypothesis that a particular regression parameter βi has some specific value (commonly zero). 

  • F-test

F-test is a method for jointly testing a set of linear restrictions on a regression model. 

  • Omitted variable bias

Omitted variable bias is a bias in estimating regression parameters. It generally occurs when a relevant independent variable is omitted from a model, and the omitted variable is correlated with one or more of the included variables. 

  • Log variables

It is a transformation method that allows a non-linear model to be estimated by OLS, by substituting the natural log of a variable for the level of that variable. It can be applied to the dependent variable and/or to one or more independent variables. 

  • Quadratic terms

This is another common transformation method, where both xi and xi² are included as regressors. The estimated effect of xi on y is then found by taking the derivative of the regression equation with respect to xi. 

  • Interaction terms

These are the pairwise products of the "original" independent variables. The interaction terms allow for the possibility that the degree to which xi affects y depends on the value of some other variable xj. For example, the effect of experience (xi) on wages might depend on the gender (xj) of the worker. 
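
The last three transformations can be combined in a single model. Below is a minimal sketch assuming the statsmodels formula API; the wage, experience, and gender columns are invented for illustration only.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
df = pd.DataFrame({
    "experience": rng.uniform(0, 30, 200),
    "female": rng.integers(0, 2, 200),
})
df["wage"] = np.exp(2 + 0.05 * df["experience"] - 0.001 * df["experience"]**2
                    - 0.1 * df["female"] + rng.normal(scale=0.1, size=200))

# log dependent variable, a quadratic term, and an experience x gender interaction
model = smf.ols("np.log(wage) ~ experience + I(experience**2) + experience:female", data=df).fit()
print(model.params)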

What are the tips to avoid common problems when working with regression analysis? 

Regression is a very powerful statistical analysis that offers high flexibility but presents a variety of potential pitfalls. Let us see some tips to overcome the most common problems whilst working with regression analysis.

  • Tip 1:  Research Before Starting 

Before you start working with regression analysis, review the literature to understand the relevant variables, the relationships they have, and the expected coefficient signs and effect magnitudes. It will help you collect the correct data and allow you to implement the best regression equation.  

  • Tip 2: Always prefer Simple Models 

Start with a simple model and make it more complicated only when needed. When several models have roughly the same predictive ability, prefer the simplest one, because it is more likely to generalize well. Another significant benefit of simpler models is that they are easier to understand and explain to others.  

  • Tip 3: Correlation Does Not Imply Causation  

Always remember that correlation does not imply causation; causation is a completely different thing from correlation. In general, to establish causation you need to perform a designed experiment with randomization. If you are using regression analysis on observational data that did not come from such an experiment, causal conclusions remain uncertain.

  • Tip 4: Include Graphs, Confidence, and Prediction Intervals in the Results   

The presentation of your results can influence the way people interpret them. For instance, confidence intervals and statistical significance convey consistent information, yet according to one study, reports that refer only to statistical significance lead to correct interpretations roughly 40% of the time, while including confidence intervals raises that figure to about 95%. 

  • Tip 5: Check the Residual Plots 

Residual plots are the quickest and easiest way to spot problems in a regression model and to decide what adjustments to make. For instance, residual plots show systematic patterns when there is curvature in your data that the model does not capture. 
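
As an illustration of this tip (NumPy and matplotlib assumed, data invented), fitting a straight line to genuinely curved data produces a clearly U-shaped residual plot, which is exactly the kind of pattern to look for.

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(3)
x = np.linspace(0, 10, 80)
y = 1 + 0.4 * x**2 + rng.normal(scale=2.0, size=x.size)   # curved data

slope, intercept = np.polyfit(x, y, deg=1)                # deliberately mis-specified straight-line fit
residuals = y - (slope * x + intercept)

plt.scatter(x, residuals)
plt.axhline(0, color="red")
plt.xlabel("x")
plt.ylabel("residual")
plt.title("U-shaped residuals reveal unmodelled curvature")
plt.show()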

Regression Analysis and The Real World  

Let us summarize what we have covered in this article so far: 

  • Regression Analysis and its importance. 
  • Difference between regression and classification. 
  • Regression Line and Regression Equation. 
  • How companies use regression analysis 
  • When to use regression analysis. 
  • Assumptions in Regression Analysis. 
  • Simple and Multiple linear regression. 
  • R-squared: Representation, Interpretation, Calculation. 
  • Types of Regression. 
  • Terminologies used in Regression. 
  • How to avoid problems in regression. 

Regression Analysis is a machine learning technique utilized extensively by enterprises to transform data into useful information. It continues to be a significant asset across many leading sectors, including finance, education, banking, retail, medicine, and media.  

Profile

Priyankur Sarkar

Data Science Enthusiast

Priyankur Sarkar loves to play with data and extract insightful results from it, then turn those insights into business growth. He is an electronics engineer with versatile experience as an individual contributor and in leading teams, and has actively worked towards building Machine Learning capabilities for organizations.