# Getting Started With Machine Learning With Python: Step by Step Guide

• by Amit Diwan
• 05th Sep, 2020
• Last updated on 16th Mar, 2021

Takeaways from the article

• This article helps you understand the cases where machine learning can be used: where it is relevant, and where it is not.
• It discusses the basic steps involved in a machine learning problem, along with code in Python.
• It discusses how the data involved in a Machine Learning problem can be visualized using certain Python packages.

Machine Learning has been a hot topic for many years, yet few know how to make real sense of it, or where it can actually be used. It is not a universal solution to every challenging problem out there. It can be applied only when certain conditions are satisfied; only then does a problem qualify to be solved with a machine learning algorithm. In general, Python is the most preferred language for working with machine learning algorithms.

## Introduction to Machine Learning

Machine Learning, also known as ML, is a sub-field of Artificial Intelligence (AI). ML is the art of designing an algorithm that can process large or small amounts of data and learn from it. Such an algorithm does not explicitly define the rules the machine should follow: there are no ‘if’ or ‘else’ statements guiding it. The machine learns from the data on its own.

This is very similar to how humans learn from day-to-day experience: how a child learns to ride a bike, or learns to read letters, then words, then sentences and conversations.

## Getting started with Machine learning in Python

Python is widely used to implement machine learning algorithms, since it is open source, extremely popular, and has immense support from the community. In addition, Python has a rich ecosystem of packages that support machine learning across a wide variety of applications.

These algorithms can be used in Python by calling simple functions; the functions are organised into classes, which are in turn bundled into modules and packages.

The ‘scikit-learn’ package is one of the most popular Python packages for machine learning, with most common algorithms already implemented. To use an algorithm, import the package (or a specific class from it), create an object, and access its methods with the dot operator. In general, the following steps can serve as a blueprint for any machine learning project:

Define your problem, and confirm that it can be solved using machine learning (i.e. that it is not a trivial “set of rules” problem).

Prepare the data: in this step, the data needed for the model is collected from various sources, or generated using the many functions available in Python. In either case, the data has to be cleaned, structured, and analysed, and outliers have to be identified. The data should also be pre-processed so that the algorithm can easily build a model from it: irrelevant columns may be removed, and missing values should be handled.

Train the model on the data and tune hyperparameters so as to get better prediction accuracy.
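As an illustration, this blueprint can be sketched with scikit-learn on synthetic data (the dataset below is made up purely for demonstration):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression

# Define the problem and prepare (here: generate) the data
rng = np.random.default_rng(0)
X = rng.random((100, 1))                      # one input feature
y = 3.0 * X[:, 0] + rng.normal(0, 0.1, 100)   # target with a little noise

# Hold out part of the data to evaluate the trained model later
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train the model and check its accuracy on unseen data
model = LinearRegression()
model.fit(X_train, y_train)
score = model.score(X_test, y_test)  # R^2 on the held-out test set
```

Hyperparameter tuning would then adjust the model's settings (or the data preparation) until the held-out score is satisfactory.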

Note: it is assumed that readers have Python 3.5 or a higher stable version installed on their workstations before executing the code in the upcoming sections. Other packages can be installed as and when required.

### Where can Machine Learning be used?

• If no prediction or complex data insight is needed, machine learning need not be used; a simpler approach will do.
• Machine learning algorithms are built by humans to understand data better, make predictions, and so on. When we solve a problem ourselves, we rely on foundational principles (in physics, for example, gravity or Newton's laws), but these algorithms do not; they are stochastic (random) in nature.
• Not all problems with large amounts of data are suited to machine learning. It is important to recognise when a problem is deterministic in nature, and to avoid solving such problems with machine learning.

## Machine Learning in Python

Let us jump into a simple machine learning problem: linear regression. Linear regression is a simple algorithm that predicts the value of one variable based on the values of others. There are many variations of linear regression, including multivariate regression.

Before jumping into the algorithm, let us understand what linear regression means. ‘Linear’ means the relationship is modelled as a straight line, and ‘regression’ means predicting a continuous value, rather than a class, from the input data.

There are various machine learning algorithms, and linear regression is just the beginning. Broadly, they fall into supervised learning, unsupervised learning, semi-supervised learning, and reinforcement learning.

### Why should Machine Learning be used?

Certain tasks need intricate detail, and patterns may not be fully unveiled if manual or simple methods are used to extract them. Machine learning, on the other hand, can extract important hidden patterns and continues to work well even when the amount of data grows exponentially. It also becomes easy to improve pattern recognition, deliver results in a timely manner, and get deeper insights into the data at hand.

The results computed using a machine learning algorithm can be more accurate than traditional methods, and the models built can serve as a foundation for other data as well. Machine learning algorithms can be classified in different ways. The four basic classifications are:

• Supervised learning algorithms
• Semi-supervised learning algorithms
• Unsupervised learning algorithms
• Reinforcement learning algorithms

Machine learning algorithms can also be classified into two types based on how they learn, incrementally (on the fly) or on the whole dataset at once:

• Online learning
• Batch learning

Machine learning algorithms can also be classified based on how they detect patterns: whether they generalise a model from the data, or compare new data values directly with previously seen values:

• Model-based learning
• Instance-based learning

### Supervised Learning

• Most popular
• Easy to understand
• Easier to implement
• Gives decent results
• Expensive, since human intervention is required

Supervised learning involves human supervision. In practice, supervision takes the form of labelled features, a feedback loop on the data (insights on whether the machine predicted correctly, and if not, what the correct prediction should have been), and so on.

Once the algorithm is trained on such data, it can predict good outputs with a high accuracy for never-before-seen inputs.

Applications of supervised learning:

• Spam classification: Classifying emails as spam or important.
• Face recognition: Detecting faces, mapping them to a specific face in a database of faces.

Supervised algorithms can further be classified into two types:

1. Classification algorithms: these assign the given data to one of a set of classes or groups. This basically deals with grouping/mapping data into specific classes.
2. Regression algorithms: these fit the data to a model and predict continuous output values.
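The difference can be sketched with scikit-learn on toy data (both datasets below are invented for illustration): a classifier predicts a discrete class label, while a regressor predicts a continuous value.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression

rng = np.random.default_rng(1)
X = rng.random((200, 1))

# Classification: the target is a discrete class label (0 or 1)
y_class = (X[:, 0] > 0.5).astype(int)
clf = LogisticRegression().fit(X, y_class)
label = clf.predict([[0.9]])[0]        # a class label, 0 or 1

# Regression: the target is a continuous value
y_reg = 4.0 * X[:, 0] + rng.normal(0, 0.05, 200)
reg = LinearRegression().fit(X, y_reg)
value = reg.predict([[0.9]])[0]        # a continuous value, roughly 3.6
```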

### Semi-supervised Learning

• Falls between the supervised and unsupervised learning algorithms.
• Created to bridge the gap between dealing with fully labelled and fully unlabelled data.
• The input is a combination of a large amount of unlabelled data and a small amount of labelled data.

Applications of semi-supervised learning algorithms:

• Speech analysis, sentiment analysis
• Content classification

### Unsupervised Learning

• No data labelling
• No human intervention
• May not be very accurate
• Cannot be applied to a broad variety of situations
• The algorithm has to figure out how and what to learn from the data
• Resembles real-world unstructured data

Applications of unsupervised learning:

• Clustering
• Anomaly detection

Unsupervised learning algorithms can be classified into two categories:

• Clustering algorithms
• Association algorithms
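For instance, clustering can be sketched with scikit-learn's KMeans on two synthetic groups of points (no labels are provided; the algorithm discovers the groups on its own):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
# Two well-separated synthetic blobs of 2-D points, without any labels
blob_a = rng.normal(loc=0.0, scale=0.3, size=(50, 2))
blob_b = rng.normal(loc=5.0, scale=0.3, size=(50, 2))
X = np.vstack([blob_a, blob_b])

# KMeans groups the points into the requested number of clusters
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
labels = kmeans.labels_   # one cluster index per point
```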

### Reinforcement Learning

• It is a ‘punish and reward’ mechanism.
• The algorithm learns from its surroundings and experience.
• An agent decides the next relevant step to arrive at the desired result.
• If the algorithm acts correctly, it is rewarded, indicating that it is on the right path.
• If the algorithm makes a mistake, it is punished, so that it recognises the mistake and learns from it.

Supervised learning differs from reinforcement learning: the former learns from known correct answers, whereas the latter has to decide the next action, take it, bear the result, and learn from it.
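As a toy sketch of this punish-and-reward loop (the corridor environment and all reward values here are invented for illustration), tabular Q-learning can be written in plain Python:

```python
import random

# Hypothetical environment: a corridor of 5 cells; the agent starts in
# cell 0 and must learn to reach the goal in cell 4.
random.seed(0)
n_states = 5
actions = [-1, +1]                      # move left or move right
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.1   # learning rate, discount, exploration

for _ in range(500):                    # training episodes
    s = 0
    while s != n_states - 1:
        # Explore occasionally; otherwise take the best-known action
        if random.random() < epsilon:
            a = random.choice(actions)
        else:
            a = max(actions, key=lambda act: Q[(s, act)])
        s_next = min(max(s + a, 0), n_states - 1)
        # Reward on reaching the goal, a small punishment for every other step
        reward = 1.0 if s_next == n_states - 1 else -0.01
        best_next = max(Q[(s_next, act)] for act in actions)
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
        s = s_next

# After training, the learned policy prefers moving right in every cell
policy = [max(actions, key=lambda act: Q[(s, act)]) for s in range(n_states - 1)]
```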

Applications of reinforcement learning:

• Robotics in automation
• Machine learning and data processing

Other classifications of learning algorithms:

• Online learning and batch learning (based on how they learn)
• Model-based learning and instance-based learning (based on how they detect patterns)

### Online Learning

• Also known as incremental or out-of-core learning.
• Assumption is that the learning environment changes constantly.

Online learning models are trained constantly on new data, in real time, to predict outputs. Whenever the model sees a new example, it quickly has to learn from it and adapt to it. This way, the newly learnt example becomes part of the trained model and contributes to future predictions.
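A minimal sketch of online learning, using scikit-learn's SGDRegressor and its partial_fit method on a made-up stream of data chunks:

```python
import numpy as np
from sklearn.linear_model import SGDRegressor

rng = np.random.default_rng(3)
model = SGDRegressor(learning_rate='constant', eta0=0.01, random_state=0)

# Data arrives in small chunks over time; each chunk updates the model
# incrementally instead of retraining it from scratch
for _ in range(500):
    X_chunk = rng.random((10, 1))
    y_chunk = 2.0 * X_chunk[:, 0] + 1.0   # assumed true relationship
    model.partial_fit(X_chunk, y_chunk)

# The coefficients gradually approach the true slope (2.0) and intercept (1.0)
print(model.coef_, model.intercept_)
```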

### Batch Learning

Batch learning is also known as learning from data in groups.

The data is grouped into different batches, and these batches are used to extract different patterns, since every batch may be considerably different from the others. The model learns these patterns over time.

### Model-based learning

The specifications associated with a problem in a domain are converted into a model. When this model sees new data, it detects patterns in it, and these patterns are used to make predictions on the newly seen data.

### Instance-based learning

Instance-based learning is one of the simplest forms of learning: instead of building an explicit model, the algorithm stores the training instances and compares new queries against them.

These methods either group the data into different classes (classification) or give continuous values as output (regression).

Classification and regression are based on how similar or different the queries are, with respect to the stored values in the data.
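A small sketch of instance-based learning with scikit-learn's k-nearest-neighbours regressor (the numbers are illustrative only): the model stores the training examples and answers a query by averaging the targets of the most similar stored instances.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

# Tiny illustrative dataset of inputs and their targets
X = np.array([[1.0], [2.0], [3.0], [4.0], [5.0]])
y = np.array([10.0, 20.0, 30.0, 40.0, 50.0])

# No explicit model is built: the instances themselves are stored,
# and a query is answered from its two nearest neighbours
knn = KNeighborsRegressor(n_neighbors=2).fit(X, y)
prediction = knn.predict([[2.4]])[0]  # mean of the targets at 2.0 and 3.0
```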

### Linear Regression

In this algorithm, we work with two variables: an independent variable and a dependent variable. We will take a basic problem of finding the price of a house when its area is given; here the area is the independent variable, and the price depends on it. Assume that we have the dataset below:

| Area of the house (independent variable) | Price of the house (dependent variable) |
|---|---|
| 500 sq m | 356 |
| 1000 sq m | 578 |
| 1500 sq m | 890 |
| 2000 sq m | 1300 |
| 2500 sq m | 1800 |
| 3000 sq m | ? |

Given this data, when the price of the house in the last row is to be found from its area, simple linear regression (which gives a decent amount of accuracy) can be used. When plotted on a graph, the data yields an almost straight line, which means the dependent value varies linearly with the independent value, i.e. the area of the house matters when the price of the house is being fixed.
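The straight line through the table above can be computed directly; a quick sketch with NumPy's least-squares polynomial fit predicts the missing price:

```python
import numpy as np

# The five known (area, price) pairs from the table
area = np.array([500.0, 1000.0, 1500.0, 2000.0, 2500.0])
price = np.array([356.0, 578.0, 890.0, 1300.0, 1800.0])

# Fit a straight line: price ≈ slope * area + intercept
slope, intercept = np.polyfit(area, price, 1)

# Predict the price of the 3000 sq m house from the fitted line
predicted = slope * 3000 + intercept
print(round(slope, 3), round(intercept, 1), round(predicted, 1))  # 0.722 -98.2 2067.8
```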

The basic steps involved in a machine learning problem-

• Identify the problem: see if it qualifies to be solved using a machine learning algorithm.
• Gather the data: the data required can be collected from a single source or various sources, or generated (if it is for a specific purpose) using certain formulas and methods.
• Data cleaning: the data gathered may not be clean or structured; make sure it is cleaned and in a structured, or at least semi-structured, format.
• Package installation: install the packages that are required to work with the data.
• Data loading: load the data into the Python environment using any IDE (usually, Spyder is preferred), so that the machine learning algorithm can access the data and perform operations on it.
• Further data cleaning: data can also be cleaned after it has been loaded into the Python environment, using certain packages and methods, or beforehand (manually or by applying some logic).
• Summarize the data: understand the variables we are looking at and compute the type of each value, mean, median, variance, and standard deviation, which give insights into the data. This can be done easily by importing packages that provide these functions.
• Model training: the model is trained by passing the input dataset as a parameter to the respective algorithm, so that it can predict outputs for never-before-seen data, also known as the test dataset.
• Linear regression application: apply the linear regression algorithm to this data.
• Data visualization: the data that has interacted with the linear regression algorithm is visualized using various Python packages.
• Prediction: the predictions are made with the help of the trained model, and are then displayed on the console.
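As a quick illustration of the summarize step above, pandas (an extra package assumed here) can compute those statistics in one call; the numbers come from the house-price table earlier:

```python
import pandas as pd

# The known rows of the house-price dataset used earlier
df = pd.DataFrame({
    "area": [500, 1000, 1500, 2000, 2500],
    "price": [356, 578, 890, 1300, 1800],
})

summary = df.describe()   # count, mean, std, min, quartiles, and max per column
print(summary.loc["mean", "price"])  # 984.8
```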

Code to implement linear regression using Python

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.linear_model import LinearRegression

# Generate a random data set (x is the independent variable, y the dependent one)
np.random.seed(0)
x_indep = np.random.rand(100, 1)
y_dep = 5.89 + 2.45 * x_indep + np.random.rand(100, 1)

# The model is initialized using the LinearRegression class from scikit-learn
model_of_regression = LinearRegression()

# The model is fit (trained) on the data
model_of_regression.fit(x_indep, y_dep)

# The output is predicted
predicted_y_val = model_of_regression.predict(x_indep)

# The model is evaluated using the root mean squared error and the R-squared score
rmse = np.sqrt(mean_squared_error(y_dep, predicted_y_val))
r2 = r2_score(y_dep, predicted_y_val)

print("The value of the slope is: ", model_of_regression.coef_)
print("The intercept value is: ", model_of_regression.intercept_)
print("The Root Mean Squared Error (RMSE) value is: ", rmse)
print("The R-squared (R2) value is: ", r2)

# The data is visualized using the matplotlib library
plt.scatter(x_indep, y_dep, s=8)
plt.xlabel('X-axis')
plt.ylabel('Y-axis')

# The predicted values are plotted as a line over the scatter plot
plt.plot(x_indep, predicted_y_val, color='r')
plt.show()
```

Output: a scatter plot of the generated data points with the fitted regression line drawn through them.

Code review: explanation of every step

• The required packages are imported using the ‘import’ keyword.
• Make sure that ‘scikit-learn’ package is installed before working on this code.
• Instead of using pre-existing data, we generate data here using NumPy's random number functions.
• A seed is set for reproducibility, and a formula with a random noise term is used to generate the data.
• The ‘LinearRegression’ class from the scikit-learn package is instantiated to create a model, and its ‘fit’ method is called with the independent and dependent values.
• The model's ‘predict’ method is used to predict the dependent value for given independent values.
• After the model is built from the data, it is important to see how it has fared.
• Hence, a metric named RMSE (Root Mean Squared Error) is used to measure the difference between the actual values and the predicted values.
• Next, the data is visualized on the screen using a package named ‘matplotlib’.

## Conclusion

In all, machine learning is a game changer when its use cases are identified correctly and the right kind of algorithm is applied in the right place, with the right amount of data and the right computational resources. Linear regression is just a simple starting point, where machine learning begins to show what it can do. Python is the language most commonly used to implement machine learning algorithms, but other languages can be used as well.

### Amit Diwan

Author

Amit Diwan is an E-Learning Entrepreneur, who has taught more than a million professionals with Text & Video Courses on the following technologies: Data Science, AI, ML, C#, Java, Python, Android, WordPress, Drupal, Magento, Bootstrap 4, etc.

## What is Linear Regression in Machine Learning

Machine Learning, being a subset of Artificial Intelligence (AI), has been playing a dominant role in our daily lives. Data science engineers and developers working in various domains are widely using machine learning algorithms to make their tasks simpler and life easier. For example, certain machine learning algorithms enable Google Maps to find the fastest route to our destinations, allow Tesla to make driverless cars, help Amazon to generate almost 35% of their annual income, AccuWeather to get the weather forecast of 3.5 million locations weeks in advance, Facebook to automatically detect faces and suggest tags and so on.In statistics and machine learning, linear regression is one of the most popular and well understood algorithms. Most data science enthusiasts and machine learning  fanatics begin their journey with linear regression algorithms. In this article, we will look into how linear regression algorithm works and how it can be efficiently used in your machine learning projects to build better models.Linear Regression is one of the machine learning algorithms where the result is predicted by the use of known parameters which are correlated with the output. It is used to predict values within a continuous range rather than trying to classify them into categories. The known parameters are used to make a continuous and constant slope which is used to predict the unknown or the result.What is a Regression Problem?Majority of the machine learning algorithms fall under the supervised learning category. It is the process where an algorithm is used to predict a result based on the previously entered values and the results generated from them. Suppose we have an input variable ‘x’ and an output variable ‘y’ where y is a function of x (y=f{x}). Supervised learning reads the value of entered variable ‘x’ and the resulting variable ‘y’ so that it can use those results to later predict a highly accurate output data of ‘y’ from the entered value of ‘x’. 
A regression problem is when the resulting variable contains a real or a continuous value. It tries to draw the line of best fit from the data gathered from a number of points.For example, which of these is a regression problem?How much gas will I spend if I drive for 100 miles?What is the nationality of a person?What is the age of a person?Which is the closest planet to the Sun?Predicting the amount of gas to be spent and the age of a person are regression problems. Predicting nationality is categorical and the closest planet to the Sun is discrete.What is Linear Regression?Let’s say we have a dataset which contains information about the relationship between ‘number of hours studied’ and ‘marks obtained’. A number of students have been observed and their hours of study along with their grades are recorded. This will be our training data. Our goal is to design a model that can predict the marks if number of hours studied is provided. Using the training data, a regression line is obtained which will give minimum error. This linear equation is then used to apply for a new data. That is, if we give the number of hours studied by a student as an input, our model should be able to predict their mark with minimum error.Hypothesis of Linear RegressionThe linear regression model can be represented by the following equation:where,Y is the predicted valueθ₀ is the bias term.θ₁,…,θn are the model parametersx₁, x₂,…,xn are the feature values.The above hypothesis can also be represented byWhere, θ is the model’s parameter vector including the bias term θ₀; x is the feature vector with x₀ =1Y (pred) = b0 + b1*xThe values b0 and b1 must be chosen so that the error is minimum. 
If sum of squared error is taken as a metric to evaluate the model, then the goal is to obtain a line that best reduces the error.If we don’t square the error, then the positive and negative points will cancel each other out.For a model with one predictor,Exploring ‘b1’If b1 > 0, then x (predictor) and y(target) have a positive relationship. That is an increase in x will increase y.If b1 < 0, then x (predictor) and y(target) have a negative relationship. That is an increase in x will decrease y.Exploring ‘b0’If the model does not include x=0, then the prediction will become meaningless with only b0. For example, we have a dataset that relates height(x) and weight(y). Taking x=0 (that is height as 0), will make the equation have only b0 value which is completely meaningless as in real-time height and weight can never be zero. This resulted due to considering the model values beyond its scope.If the model includes value 0, then ‘b0’ will be the average of all predicted values when x=0. But, setting zero for all the predictor variables is often impossible.The value of b0 guarantees that the residual will have mean zero. If there is no ‘b0’ term, then the regression will be forced to pass over the origin. Both the regression coefficient and prediction will be biased.How does Linear Regression work?Let’s look at a scenario where linear regression might be useful: losing weight. Let us consider that there’s a connection between how many calories you take in and how much you weigh; regression analysis can help you understand that connection. Regression analysis will provide you with a relation which can be visualized into a graph in order to make predictions about your data. 
For example, if you’ve been putting on weight over the last few years, it can predict how much you’ll weigh in the next ten years if you continue to consume the same amount of calories and burn them at the same rate.The goal of regression analysis is to create a trend line based on the data you have gathered. This then allows you to determine whether other factors apart from the amount of calories consumed affect your weight, such as the number of hours you sleep, work pressure, level of stress, type of exercises you do etc. Before taking into account, we need to look at these factors and attributes and determine whether there is a correlation between them. Linear Regression can then be used to draw a trend line which can then be used to confirm or deny the relationship between attributes. If the test is done over a long time duration, extensive data can be collected and the result can be evaluated more accurately. By the end of this article we will build a model which looks like the below picture i.e, determine a line which best fits the data.How do we determine the best fit line?The best fit line is considered to be the line for which the error between the predicted values and the observed values is minimum. It is also called the regression line and the errors are also known as residuals. The figure shown below shows the residuals. It can be visualized by the vertical lines from the observed data value to the regression line.When to use Linear Regression?Linear Regression’s power lies in its simplicity, which means that it can be used to solve problems across various fields. At first, the data collected from the observations need to be collected and plotted along a line. 
If the difference between the predicted value and the result is almost the same, we can use linear regression for the problem.Assumptions in linear regressionIf you are planning to use linear regression for your problem then there are some assumptions you need to consider:The relation between the dependent and independent variables should be almost linear.The data is homoscedastic, meaning the variance between the results should not be too much.The results obtained from an observation should not be influenced by the results obtained from the previous observation.The residuals should be normally distributed. This assumption means that the probability density function of the residual values is normally distributed at each independent value.You can determine whether your data meets these conditions by plotting it and then doing a bit of digging into its structure.Few properties of Regression LineHere are a few features a regression line has:Regression passes through the mean of independent variable (x) as well as mean of the dependent variable (y).Regression line minimizes the sum of “Square of Residuals”. That’s why the method of Linear Regression is known as “Ordinary Least Square (OLS)”. We will discuss more in detail about Ordinary Least Square later on.B1 explains the change in Y with a change in x  by one unit. In other words, if we increase the value of ‘x’ it will result in a change in value of Y.Finding a Linear Regression lineLet’s say we want to predict ‘y’ from ‘x’ given in the following table and assume they are correlated as “y=B0+B1∗x”xyPredicted 'y'12Β0+B1∗121Β0+B1∗233Β0+B1∗346Β0+B1∗459Β0+B1∗5611Β0+B1∗6713Β0+B1∗7815Β0+B1∗8917Β0+B1∗91020Β0+B1∗10where,Std. Dev. of x3.02765Std. Dev. of y6.617317Mean of x5.5Mean of y9.7Correlation between x & y0.989938If the Residual Sum of Square (RSS) is differentiated with respect to B0 & B1 and the results equated to zero, we get the following equation:B1 = Correlation * (Std. Dev. of y/ Std. Dev. 
of x)B0 = Mean(Y) – B1 * Mean(X)Putting values from table 1 into the above equations,B1 = 2.64B0 = -2.2Hence, the least regression equation will become –Y = -2.2 + 2.64*xxY - ActualY - Predicted120.44213.08335.72468.36591161113.6471316.2881518.9291721.56102024.2As there are only 10 data points, the results are not too accurate but if we see the correlation between the predicted and actual line, it has turned out to be very high; both the lines are moving almost together and here is the graph for visualizing our predicted values:Model PerformanceAfter the model is built, if we see that the difference in the values of the predicted and actual data is not much, it is considered to be a good model and can be used to make future predictions. The amount that we consider “not much” entirely depends on the task you want to perform and to what percentage the variation in data can be handled. Here are a few metric tools we can use to calculate error in the model-R – Square (R2)Total Sum of Squares (TSS): total sum of squares (TSS) is a quantity that appears as part of a standard way of presenting results of such an analysis. Sum of squares is a measure of how a data set varies around a central number (like the mean). The Total Sum of Squares tells how much variation there is in the dependent variable.TSS = Σ (Y – Mean[Y])2Residual Sum of Squares (RSS): The residual sum of squares tells you how much of the dependent variable’s variation your model did not explain. It is the sum of the squared differences between the actual Y and the predicted Y.RSS = Σ (Y – f[Y])2(TSS – RSS) measures the amount of variability in the response that is explained by performing the regression.Properties of R2R2 always ranges between 0 to 1.R2 of 0 means that there is no correlation between the dependent and the independent variable.R2 of 1 means the dependent variable can be predicted from the independent variable without any error. 
An R2 between 0 and 1 indicates the extent to which the dependent variable is predictable. An R2 of 0.20 means that there is 20% of the variance in Y is predictable from X; an R2 of 0.40 means that 40% is predictable; and so on.Root Mean Square Error (RMSE)Root Mean Square Error (RMSE) is the standard deviation of the residuals (prediction errors). The formula for calculating RMSE is:Where N : Total number of observationsWhen standardized observations are used as RMSE inputs, there is a direct relationship with the correlation coefficient. For example, if the correlation coefficient is 1, the RMSE will be 0, because all of the points lie on the regression line (and therefore there are no errors).Mean Absolute Percentage Error (MAPE)There are certain limitations to the use of RMSE, so analysts prefer MAPE over RMSE which gives error in terms of percentages so that different models can be considered for the task and see how they perform. Formula for calculating MAPE can be written as:Where N : Total number of observationsFeature SelectionFeature selection is the automatic selection of attributes for your data that are most relevant to the predictive model you are working on. It seeks to reduce the number of attributes in the dataset by eliminating the features which are not required for the model construction. Feature selection does not totally eliminate an attribute which is considered for the model, rather it mutes that particular characteristic and works with the features which affects the model.Feature selection method aids your mission to create an accurate predictive model. It helps you by choosing features that will give you as good or better accuracy whilst requiring less data. Feature selection methods can be used to identify and remove unnecessary, irrelevant and redundant attributes from the data that do not contribute to the accuracy of the model or may even decrease the accuracy of the model. 
Having fewer attributes is desirable because it reduces the complexity of the model, and a simpler model is easier to understand, explain and to work with.Feature Selection Algorithms:Filter Method: This method involves assigning scores to individual features and ranking them. The features that have very little to almost no impact are removed from consideration while constructing the model.Wrapper Method: Wrapper method is quite similar to Filter method except the fact that it considers attributes in a group i.e. a number of attributes are taken and checked whether they are having an impact on the model and if not another combination is applied.Embedded Method: Embedded method is the best and most accurate of all the algorithms. It learns the features that affect the model while the model is being constructed and takes into consideration only those features. The most common type of embedded feature selection methods are regularization methods.Cost FunctionCost function helps to figure out the best possible plots which can be used to draw the line of best fit for the data points. As we want to reduce the error of the resulting value we change the process of finding out the actual result to a process which can reduce the error between the predicted value and the actual value.Here, J is the cost function.The above function is made in this format to calculate the error difference between the predicted values and the plotted values. We take the square of the summation of all the data points and divide it by the total number of data points. This cost function J is also called the Mean Squared Error (MSE) function. Using this MSE function we are going to predict values such that the MSE value settles at the minima, reducing the cost function.Gradient DescentGradient Descent is an optimization algorithm that helps machine learning models to find out paths to a minimum value using repeated steps. 
Gradient descent is used to minimize a function so that it gives the lowest output of that function. This function is called the Loss Function. The loss function shows us how much error is produced by the machine learning model compared to actual results. Our aim should be to lower the cost function as much as possible. One way of achieving a low cost function is by the process of gradient descent. Complexity of some equations makes it difficult to use, partial derivative of the cost function with respect to the considered parameter can provide optimal coefficient value. You may refer to the article on Gradient Descent for Machine Learning.Simple Linear RegressionOptimization is a big part of machine learning and almost every machine learning algorithm has an optimization technique at its core for increased efficiency. Gradient Descent is such an optimization algorithm used to find values of coefficients of a function that minimizes the cost function. Gradient Descent is best applied when the solution cannot be obtained by analytical methods (linear algebra) and must be obtained by an optimization technique.Residual Analysis: Simple linear regression models the relationship between the magnitude of one variable and that of a second—for example, as x increases, y also increases. Or as x increases, y decreases. Correlation is another way to measure how two variables are related. The models done by simple linear regression estimate or try to predict the actual result but most often they deviate from the actual result. Residual analysis is used to calculate by how much the estimated value has deviated from the actual result.Null Hypothesis and p-value: During feature selection, null hypothesis is used to find which attributes will not affect the result of the model. Hypothesis tests are used to test the validity of a claim that is made about a particular attribute of the model. This claim that’s on trial, in essence, is called the null hypothesis. 
A p-value helps to determine the significance of the results. A p-value is a number between 0 and 1 and is interpreted in the following way:

- A small p-value (less than 0.05) indicates strong evidence against the null hypothesis, so the null hypothesis is rejected.
- A large p-value (greater than 0.05) indicates weak evidence against the null hypothesis, so the null hypothesis is retained.
- A p-value very close to the cut-off (around 0.05) is considered marginal (it could go either way). In this case, the p-value should be reported so that readers can draw their own conclusions.

## Ordinary Least Squares

Ordinary Least Squares (OLS), also known as ordinary least squares regression or least squared errors regression, is a type of linear least squares method for estimating the unknown parameters in a linear regression model. OLS chooses the parameters of a linear function with the goal of minimizing the sum of the squared differences between the observed values of the dependent variable and the values predicted from the independent variables. Two types of relationships may occur: linear and curvilinear. A linear relationship is a straight line drawn through the central tendency of the points, whereas a curvilinear relationship is a curved line. The association between the variables is depicted using a scatter plot; the relationship can be positive or negative, and it can vary in strength.

The advantage of Ordinary Least Squares regression is that it is easy to interpret and is highly compatible with the linear algebra routines built into modern computers. It can be applied to problems with many independent variables and scales efficiently to thousands of data points.
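As a quick illustration with a single predictor (the data points below are made up), NumPy's `polyfit` with degree 1 fits a line by ordinary least squares:

```python
import numpy as np

# Hypothetical (x, y) points roughly following y = 2 + 3x
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([2.1, 4.9, 8.2, 11.0, 13.9])

# Degree-1 polyfit performs OLS for a straight line
slope, intercept = np.polyfit(x, y, 1)
print(slope, intercept)
```

The returned slope and intercept are exactly the values that minimize the sum of squared vertical distances between the points and the line.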
In linear regression, OLS is used to estimate the unknown parameters by creating a model that minimizes the sum of the squared errors between the observed data and the predicted values. Let us simulate some data and look at how the predicted values (Yₑ) differ from the actual values (Y):

```python
import pandas as pd
import numpy as np
from matplotlib import pyplot as plt

# Generate 'random' data
np.random.seed(0)
X = 2.5 * np.random.randn(100) + 1.5   # Array of 100 values with mean = 1.5, stddev = 2.5
res = 0.5 * np.random.randn(100)       # Generate 100 residual terms
y = 2 + 0.3 * X + res                  # Actual values of Y

# Create pandas dataframe to store our X and y values
df = pd.DataFrame({'X': X, 'y': y})

# Show the first five rows of our dataframe
df.head()
```

|   | X        | y        |
|---|----------|----------|
| 0 | 5.910131 | 4.714615 |
| 1 | 2.500393 | 2.076238 |
| 2 | 3.946845 | 2.548811 |
| 3 | 7.102233 | 4.615368 |
| 4 | 6.168895 | 3.264107 |

To estimate y using the OLS method, we need to calculate xmean and ymean, the covariance of X and y (xycov), and the variance of X (xvar) before we can determine the values for alpha and beta.

```python
# Calculate the mean of X and y
xmean = np.mean(X)
ymean = np.mean(y)

# Calculate the terms needed for the numerator and denominator of beta
df['xycov'] = (df['X'] - xmean) * (df['y'] - ymean)
df['xvar'] = (df['X'] - xmean)**2

# Calculate beta and alpha
beta = df['xycov'].sum() / df['xvar'].sum()
alpha = ymean - (beta * xmean)
print(f'alpha = {alpha}')
print(f'beta = {beta}')
```

```
alpha = 2.0031670124623426
beta = 0.3229396867092763
```

Now that we have estimates for alpha and beta, we can write our model as Yₑ = 2.003 + 0.323 X and make predictions:

```python
ypred = alpha + beta * X
```

Let's plot our prediction ypred against the actual values of y to get a better visual understanding of our model.

```python
# Plot regression against actual data
plt.figure(figsize=(12, 6))
plt.plot(X, ypred)     # regression line
plt.plot(X, y, 'ro')   # scatter plot showing actual data
plt.title('Actual vs Predicted')
plt.xlabel('X')
plt.ylabel('y')
plt.show()
```

The blue line
in the above graph is our line of best fit, Yₑ = 2.003 + 0.323 X. If you observe the graph carefully, you will notice the linear relationship between X and Y, and we can use the model to predict Y for any value of X. For example, for X = 8:

Yₑ = 2.003 + 0.323 × 8 = 4.587

## Regularization

Regularization is a type of regression that shrinks the coefficient estimates towards zero. This helps to discount patterns that do not reflect the true properties of the model but have appeared by random chance, by penalizing coefficients that are tuned too tightly to the training data. Earlier we saw that to estimate the regression coefficients β with the least squares method, we minimize the Residual Sum of Squares (RSS):

RSS = Σᵢ (yᵢ − β₀ − Σⱼ βⱼxᵢⱼ)²

Here the general linear regression model is expressed in condensed form, with β = [β₀, β₁, ..., βₚ]. Least squares adjusts the coefficients β based on the training data; if they fit the training data too closely, the estimated coefficients will not generalize well to future data. This is where regularization comes in: it shrinks, or regularizes, the learned estimates towards zero.

### Ridge Regression

Ridge regression is very similar to least squares, except that the ridge coefficients are estimated by minimizing a slightly different quantity. In particular, the ridge regression coefficients β are the values that minimize:

RSS + λ Σⱼ βⱼ²

Here, λ is the tuning parameter that decides how much we want to penalize the flexibility of the model; it controls the relative impact of the two components, the RSS and the penalty term. If λ = 0, ridge regression produces the same result as the least squares method; as λ → ∞, all estimated coefficients tend to zero. Ridge regression produces different estimates for different values of λ, so the choice of λ is crucial and should be made with cross-validation.
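The effect of the penalty can be sketched with scikit-learn (the data here is simulated, and alpha = 10 is an arbitrary choice of tuning parameter, not a recommended value):

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.preprocessing import StandardScaler

# Simulated data: y depends mainly on the first feature
rng = np.random.default_rng(42)
X = rng.normal(size=(100, 3))
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=100)

# Standardize inputs before penalized regression
Xs = StandardScaler().fit_transform(X)

ols = LinearRegression().fit(Xs, y)
ridge = Ridge(alpha=10.0).fit(Xs, y)

print(ols.coef_)
print(ridge.coef_)   # shrunk toward zero relative to the OLS coefficients
```

With any positive alpha (scikit-learn's name for λ), the ridge coefficient vector has a smaller norm than the unpenalized least squares solution.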
The penalty used by ridge regression is based on the L2 norm of the coefficient vector.

The coefficients generated by the ordinary least squares method are scale equivariant: if an input variable is multiplied by a constant, the corresponding coefficient is divided by the same constant, so the product of the coefficient and the input variable remains unchanged. The same is not true for ridge regression, so we need to bring the variables to the same scale before fitting. To standardize the variables, we subtract their means and divide by their standard deviations.

### Lasso Regression

Least Absolute Shrinkage and Selection Operator (LASSO) regression also shrinks the coefficients by adding a penalty to the sum of squares of the residuals, but the lasso penalty has a slightly different effect. The lasso penalty is the sum of the absolute values of the coefficient vector, which corresponds to its L1 norm. Hence, the lasso estimate is defined by:

RSS + λ Σⱼ |βⱼ|

As with ridge regression, the input variables need to be standardized. The lasso penalty makes the solution nonlinear, and there is no closed-form expression for the coefficients as there is in ridge regression. Instead, the lasso solution is a quadratic programming problem, and efficient algorithms exist that compute the entire path of coefficients for different values of λ at the same computational cost as ridge regression.

The lasso penalty has the effect of gradually reducing some coefficients exactly to zero as the regularization strength increases. For this reason, the lasso can be used for continuous selection of a subset of features.

## Linear Regression with Multiple Variables

Linear regression with multiple variables is also known as "multivariate linear regression".
We now introduce notation for equations with any number of input variables:

- x⁽ⁱ⁾ⱼ = value of feature j in the iᵗʰ training example
- x⁽ⁱ⁾ = the input (features) of the iᵗʰ training example
- m = the number of training examples
- n = the number of features

The multivariable form of the hypothesis function accommodating these multiple features is as follows:

hθ(x) = θ₀ + θ₁x₁ + θ₂x₂ + θ₃x₃ + ⋯ + θₙxₙ

To develop intuition about this function, think of θ₀ as the basic price of a house, θ₁ as the price per square meter, θ₂ as the price per floor, and so on; x₁ is then the number of square meters in the house, x₂ the number of floors, etc.

Using the definition of matrix multiplication, the multivariable hypothesis function can be concisely represented as hθ(x) = θᵀx. This is a vectorization of the hypothesis function for one training example; see the lessons on vectorization to learn more.

Remark: for convenience we assume x⁽ⁱ⁾₀ = 1 for i ∈ 1, ..., m. This allows us to do matrix operations with θ and x, making the two vectors θ and x⁽ⁱ⁾ match each other element-wise (that is, have the same number of elements: n + 1).

## Multiple Linear Regression

How is it different? In simple linear regression we use a single independent variable to predict the value of a dependent variable, whereas in multiple linear regression two or more independent variables are used. The difference between the two is the number of independent variables; in both cases there is only a single dependent variable.

### Multicollinearity

Multicollinearity describes the strength of the relationships among the independent variables. It is a state of very high intercorrelations or inter-associations among the independent variables, and is therefore a type of disturbance in the data: if it is present, the statistical inferences made from the data may not be reliable. The VIF (Variance Inflation Factor) is used to identify multicollinearity.
If a variable's VIF value is greater than 4, we exclude that variable from the model.

There are certain reasons why multicollinearity occurs:

- It is caused by an inaccurate use of dummy variables.
- It is caused by the inclusion of a variable that is computed from other variables in the data set.
- It can result from the repetition of the same kind of variable.
- It generally occurs when the variables are highly correlated with each other.

Multicollinearity can result in several problems:

- The partial regression coefficients may not be estimated precisely, and the standard errors are likely to be high.
- Multicollinearity results in changes in the signs as well as the magnitudes of the partial regression coefficients from one sample to another.
- It makes it tedious to assess the relative importance of the independent variables in explaining the variation in the dependent variable.

### Iterative Models

Models should be tested and upgraded repeatedly for better performance. Multiple iterations allow the model to learn from its previous results and take them into consideration while performing the task again.

## Making Predictions with Linear Regression

Linear regression can be used to predict the value of an unknown variable from a known variable with the help of a straight line (also called the regression line). The prediction can only be made if a significant correlation between the known and the unknown variable is established, through both a correlation coefficient and a scatterplot.

The general procedure for using regression to make good predictions is the following:

1. Research the subject area so that the model can be built on the results produced by similar models.
This research helps with the subsequent steps.
2. Collect data for appropriate variables that have some correlation with the model.
3. Specify and assess the regression model.
4. Run repeated tests so that the model has more data to work with.

To test whether the model is good enough, observe whether:

- The scatter plot forms a linear pattern.
- The correlation coefficient r is above 0.5 or below -0.5. A positive value indicates a positive relationship and a negative value a negative relationship.

If the correlation coefficient shows a strong relationship between the variables but the scatter plot is not linear, the results can be misleading. Examples of how to use linear regression were shown earlier.

## Data Preparation for Linear Regression

**Step 1: Linear Assumption.** The first step of data preparation is checking for variables that have some sort of linear correlation between the dependent and the independent variables.

**Step 2: Remove Noise.** This is the process of reducing the number of attributes in the dataset by eliminating features that contribute little to nothing to the construction of the model.

**Step 3: Remove Collinearity.** Collinearity describes the strength of the relationships among the independent variables. If two or more variables are highly collinear, it does not make sense to keep all of them while evaluating the model, so we can keep just one.

**Step 4: Gaussian Distributions.** The linear regression model will produce more reliable results if the input and output variables have a Gaussian distribution. The central limit theorem states that a sample mean from a large population is approximately normal, or Gaussian, with mean the same as that of the underlying population and variance equal to the population variance divided by the sample size.
The approximation improves as the sample size gets larger.

**Step 5: Rescale Inputs.** The linear regression model will produce more reliable predictions if the input variables are rescaled using standardization or normalization.

## Linear Regression with statsmodels

We have already discussed the OLS method; now let's see how to use it via the statsmodels library. For this we will be using the popular advertising dataset. Here we will only look at the TV variable and explore whether spending on TV advertising can predict the number of sales for the product. Let's start by importing this csv file as a pandas dataframe using read_csv():

```python
# Import and display first five rows of advertising dataset
advert = pd.read_csv('advertising.csv')
advert.head()
```

|   | TV    | Radio | Newspaper | Sales |
|---|-------|-------|-----------|-------|
| 0 | 230.1 | 37.8  | 69.2      | 22.1  |
| 1 | 44.5  | 39.3  | 45.1      | 10.4  |
| 2 | 17.2  | 45.9  | 69.3      | 12.0  |
| 3 | 151.5 | 41.3  | 58.5      | 16.5  |
| 4 | 180.8 | 10.8  | 58.4      | 17.9  |

Now we will use statsmodels' ols function to initialize a simple linear regression model. It takes the formula y ~ X, where X is the predictor variable (TV advertising costs) and y is the output variable (Sales).
Then, we fit the model by calling the OLS object's fit() method:

```python
import statsmodels.formula.api as smf

# Initialise and fit linear regression model using statsmodels
model = smf.ols('Sales ~ TV', data=advert)
model = model.fit()
```

Once we have fit the simple regression model, we can predict values of sales based on the equation we just derived using the .predict() method, and visualize the regression model by plotting sales_pred against the TV advertising costs to find the line of best fit:

```python
# Predict values
sales_pred = model.predict()

# Plot regression against actual data
plt.figure(figsize=(12, 6))
plt.plot(advert['TV'], advert['Sales'], 'o')           # scatter plot showing actual data
plt.plot(advert['TV'], sales_pred, 'r', linewidth=2)   # regression line
plt.xlabel('TV Advertising Costs')
plt.ylabel('Sales')
plt.title('TV vs Sales')
plt.show()
```

In the above graph you will notice a positive linear relationship between TV advertising costs and Sales: spending more on TV advertising predicts a higher number of sales.

## Linear Regression with scikit-learn

Let us now implement a linear regression model using sklearn. For this model we will continue to use the advertising dataset, but this time we will use two predictor variables to create a multiple linear regression model, Yₑ = α + β₁X₁ + β₂X₂ + ... + βₚXₚ, where p is the number of predictors. In our example, we will be predicting Sales using the variables TV and Radio, i.e.
our model can be written as Sales = α + β₁·TV + β₂·Radio.

```python
from sklearn.linear_model import LinearRegression

# Build linear regression model using TV and Radio as predictors
# Split data into predictors X and output y
predictors = ['TV', 'Radio']
X = advert[predictors]
y = advert['Sales']

# Initialise and fit model
lm = LinearRegression()
model = lm.fit(X, y)
print(f'alpha = {model.intercept_}')
print(f'betas = {model.coef_}')
```

```
alpha = 4.630879464097768
betas = [0.05444896 0.10717457]
```

Now that we have fit a multiple linear regression model to our data, we can predict sales from any combination of TV and Radio advertising costs with model.predict(). For example, suppose we want to know how many sales we would make if we invested $600 in TV advertising and $300 in Radio advertising. We can find out as follows:

```python
new_X = [[600, 300]]
print(model.predict(new_X))
```

```
[69.4526273]
```

We get an output of 69.45, which means that if we invest $600 on TV and $300 on Radio advertising, we can expect to sell approximately 69 units.

## Summary

Let us sum up what we have covered in this article so far:

- How to understand a regression problem
- What linear regression is and how it works
- The Ordinary Least Squares method and regularization
- Implementing linear regression in Python using the statsmodels and sklearn libraries

We have discussed a couple of ways to implement linear regression and build efficient models for certain business problems. If you are inspired by the opportunities provided by machine learning, enrol in our Data Science and Machine Learning Courses for more lucrative career options in this landscape.