
# The K-Fold Cross Validation in Machine Learning


Takeaways from the article

• This article covers one of the most important concepts in Machine Learning: ‘k’ fold cross validation.
• It discusses how cross validation works, why it is important, and how ‘underfitting’, ‘overfitting’, or ‘just the right fit’ affects a model’s output.
• Along with overfitting, we discuss what a hyperparameter is, how a hyperparameter for a given model is decided, and how its value can be checked.
• It also covers the applications of ‘k’ fold cross validation, how to choose the value of ‘k’, and the variations of cross-validation.
• We will also walk through the implementation in Python along with a code explanation.

## Introduction

Machine learning algorithms, among their many uses, extract patterns from data and predict certain continuous or discrete values. It is important that the model built on the data fits it just right: it should neither overfit nor underfit. ‘Overfitting’ and ‘underfitting’ are two concepts in Machine Learning that deal with how well the model has been trained and how accurately it predicts. Closely related to overfitting is the notion of a hyperparameter, a value set by the user to control how the algorithm behaves.

## Underfitting in Machine Learning

Given a dataset, and an appropriate algorithm to train the dataset with, if the model fits the dataset rightly, it will be able to give accurate predictions on never-seen-before data.

On the other hand, if the Machine Learning model hasn’t been trained properly on the given data due to various reasons, the model will not be able to make accurate or nearly good predictions on the data.

This is because the model would have failed to capture the essential patterns from the data.

If the training of a model is stopped prematurely, it can lead to underfitting. The model won’t be trained for the right amount of time, so it won’t capture the essential patterns, and it won’t be able to perform well on new data. Such a model cannot give good results and cannot be relied upon.

(Figure: the dashed blue line is a model that underfits the data, while the black parabola is a curve that fits the data well.)

## Overfitting in Machine Learning

This is just the opposite of underfitting. Instead of extracting the patterns or learning the data just right, the model learns too much. All of the data is captured, including noise (irrelevant data that would not contribute to the prediction of the output when new data is encountered), and as a result the model cannot be generalized to new data. During training, the model performs well and learns all the data points, literally memorizing the data it has been given. But in the testing phase, or when a new data point is introduced, it fails miserably: the new data point will not be handled well by an overfit machine learning model.

Note: In general, the more the data, the better the training, which leads to better prediction results. But it should also be ensured that the model is not just capturing all the points; it should be learning from them, thereby filtering out the noise present in the data.

Before exposing the model to the real world, the available data is divided into two parts. One is called the ‘training set’ and the other is known as the ‘test set’. Once training on the training set is completed, the test set is given to the model to see how it behaves with newly encountered data. This gives a sufficient idea of how accurately the model will work on new data.
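As an illustration, here is a minimal sketch of such a split using scikit-learn’s train_test_split. The file name and column layout are assumptions made for this example, not part of the article’s dataset:

from sklearn.model_selection import train_test_split
import pandas as pd

# 'data.csv' is a hypothetical file; the last column is assumed to be the target.
dataset = pd.read_csv('data.csv')
X = dataset.iloc[:, :-1]
y = dataset.iloc[:, -1]

# Hold out 20% of the rows as the test set; random_state makes the split reproducible.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)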

### Hyperparameter

Hyperparameters are values used in Machine Learning that are set by the user, usually after performing a few trials, to control how the algorithm behaves. Examples include the regularization strength in ridge regression and the learning rate in gradient descent.
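For instance, here is a minimal sketch of setting these two hyperparameters in scikit-learn; the values shown are illustrative, not tuned:

from sklearn.linear_model import Ridge, SGDRegressor

# alpha (the regularization strength) is a hyperparameter of ridge regression;
# it is chosen by the user, not learned from the data.
ridge = Ridge(alpha=1.0)

# eta0 (the learning rate) is a hyperparameter of gradient descent as used by SGDRegressor.
sgd = SGDRegressor(learning_rate='constant', eta0=0.01)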

### How is a hyperparameter for a given model decided?

Hyperparameters depend on various factors like the algorithm in hand, the data provided, and so on. The optimal value of a hyperparameter can be found only through trial and error. This process is known as hyperparameter tuning.

To check and tune the value of a hyperparameter, the model has to be evaluated repeatedly. If the test set (the dataset used to see how the model works on new data) is constantly used for this, the model develops an affinity to the test dataset. When this happens, the test set almost becomes a training set, and it can no longer be used to judge how well the model generalizes to new data.

To overcome this situation, the original dataset is split into 3 different sets: the ‘training dataset’, the ‘validation dataset’, and the ‘test dataset’.

• Training dataset: This is the data on which the given machine learning algorithm is actually trained.
• Validation dataset: This dataset is used to evaluate the model during hyperparameter tuning. The result is checked, and if it is not satisfactory, the hyperparameter value is changed and the model is evaluated on the validation set again. This way, the model is never exposed to the test set, thereby preserving the test set’s ability to measure generalization. (A minimal sketch of this workflow follows the list.)
• Testing dataset: This dataset is used to see how the model performs on new data.
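Here is a minimal sketch of this workflow, assuming X and y are already loaded as feature and target arrays. The 60/20/20 proportions and the candidate values of the SVR hyperparameter C are assumptions made for the example:

from sklearn.model_selection import train_test_split
from sklearn.svm import SVR

# First carve out a 20% test set, then split the rest 75/25 into training and validation.
X_temp, X_test, y_temp, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
X_train, X_val, y_train, y_val = train_test_split(X_temp, y_temp, test_size=0.25, random_state=42)

# Tune the hyperparameter on the validation set only; the test set is never touched here.
best_c, best_score = None, float('-inf')
for c in [0.1, 1, 10, 100]:
    model = SVR(kernel='rbf', C=c).fit(X_train, y_train)
    score = model.score(X_val, y_val)
    if score > best_score:
        best_c, best_score = c, score

# Evaluate once on the test set with the chosen hyperparameter.
final_model = SVR(kernel='rbf', C=best_c).fit(X_train, y_train)
print("Test r-squared:", final_model.score(X_test, y_test))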

### Disadvantages of randomly dividing the dataset into three different parts:

• A random split may place a large number of samples of a specific type in one part. This way, essential patterns may be missed out during training.
• The number of samples available for training is reduced, since the data is divided into three parts instead of two.

The solution to the above issues is to use cross-validation.

## Cross-validation

It is a process in which the original dataset is divided into two parts only: the ‘training dataset’ and the ‘testing dataset’.

The need for a separate ‘validation dataset’ is eliminated when cross-validation comes into the picture.

There are many variations of the ‘cross-validation’ method, and the most commonly used one is known as ‘k’ fold cross-validation.

### Steps in ‘k’ fold cross-validation

• In this method, the training dataset is split into ‘k’ smaller parts/sets, hence the name ‘k’-fold.
• In each round, one of the ‘k’ parts is left out, and the remaining ‘k-1’ parts are used to train the model.
• This is repeated ‘k’ times, so that every part is left out exactly once. The value of ‘k’ is specified by the user in the code.
• The part that was kept out of training is used as a ‘validation dataset’. It can be used to tune hyperparameters: the model’s performance is checked on it, and the values are changed accordingly to yield better results.
• The training data in each round is reduced only slightly, not considerably, and the method makes sure the model remains robust and generalizes well. (A toy illustration of these splits follows the list.)
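As a toy illustration of these steps, the following sketch shows how scikit-learn’s KFold partitions row indices; the six-sample array is made up purely to show the index pattern:

import numpy as np
from sklearn.model_selection import KFold

X_toy = np.arange(12).reshape(6, 2)   # 6 samples, 2 features
kf = KFold(n_splits=3)                # k = 3

# Each of the 3 folds of two rows is held out exactly once as the validation set.
for fold, (train_idx, val_idx) in enumerate(kf.split(X_toy)):
    print(f"Fold {fold}: train={train_idx}, validation={val_idx}")

# Fold 0: train=[2 3 4 5], validation=[0 1]
# Fold 1: train=[0 1 4 5], validation=[2 3]
# Fold 2: train=[0 1 2 3], validation=[4 5]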

(Figure: steps in ‘k’ fold cross-validation.)

The figure above can be taken as a representation of cross-validation. Once every part of the training set has been used for validation and the best hyperparameter/s have been found, the model is retrained on the full training data with those hyperparameters. Having now learned from all of the training data, it may give better results, which can be verified by checking how the model performs on the testing set.

### How is the value of ‘k’ decided?

This depends on the data in hand, and the value is chosen by trial and error. Usually it is taken as 10, a common default that is still largely arbitrary. A large value of ‘k’ means less bias but higher variance, and more computation. It also means more data samples are used for training in each round, which can give better, more precise outcomes. (A sketch comparing a few values of ‘k’ follows.)
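Here is a sketch of that trial-and-error comparison, run on synthetic data generated only for illustration:

from sklearn.datasets import make_regression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

# Synthetic regression data, purely for illustration.
X_demo, y_demo = make_regression(n_samples=200, n_features=5, noise=10, random_state=0)

# Compare the mean and spread of the fold scores for a few candidate values of k.
for k in [3, 5, 10]:
    scores = cross_val_score(SVR(kernel='rbf'), X_demo, y_demo, cv=k)
    print(f"k={k}: mean r-squared={scores.mean():.3f}, std={scores.std():.3f}")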

### Code for ‘k’ fold cross-validation

Data required to understand ‘k’ fold cross validation can be taken/copied from the below location:

This data can be pasted into a CSV file, after which the below code can be executed. Make sure to give headings to all the columns.

from sklearn.model_selection import KFold, cross_val_predict, cross_val_score
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVR

import numpy as np
import pandas as pd

# Read the CSV file prepared above into a pandas dataframe
# ('data.csv' is a placeholder name for that file).
dataset = pd.read_csv('data.csv')

X = dataset.iloc[:, [0, 12]]   # two feature columns: the 1st and the 13th
y = dataset.iloc[:, 13]        # target column: the 14th

# Scale the features into the range 0 to 1.
scaler = MinMaxScaler(feature_range=(0, 1))
X = scaler.fit_transform(X)

my_scores = []
best_svr = SVR(kernel='rbf')

# random_state applies only when shuffle=True, so it is omitted here.
cv = KFold(n_splits=10, shuffle=False)

for train_index, test_index in cv.split(X):
    print("Training data index: ", train_index, "\n")
    print("Test data index: ", test_index)

    X_train, X_test = X[train_index], X[test_index]
    y_train, y_test = y.iloc[train_index], y.iloc[test_index]

    best_svr.fit(X_train, y_train)
    my_scores.append(best_svr.score(X_test, y_test))

print("The mean value is")
print(np.mean(my_scores))

# Alternatively:
print(cross_val_score(best_svr, X, y, cv=10))
# or
print(cross_val_predict(best_svr, X, y, cv=10))

Output:

Training data index:  [  0   1   2   3   4   5   6   7   8   9  10  11  12  13  14  15  16  17
18  19  20  21  22  23  24  25  26  27  28  29  30  31  32  33  34  35
36  37  38  39  40  41  42  43  44  45  46  47  48  49  50  51  52  53
54  55  56  57  58  59  60  61  62  63  64  65  66  67  68  69  70  71
72  73  74  75  76  77  78  79  80  81  82  83  84 170 171 172 173 174
175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192
193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210
211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228
229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246
247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264
265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282
283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300
301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318
319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336
337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354
355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372
373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390
391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408
409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 425 426
427 428 429 430 431 432 433 434 435 436 437 438 439 440 441 442 443 444
445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460 461 462
463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480
481 482 483 484 485 486 487 488 489 490 491 492 493 494 495 496 497 498
499 500 501 502 503 504 505]

Test data index:  [ 85  86  87  88  89  90  91  92  93  94  95  96  97  98  99 100 101 102
103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120
121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138
139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156
157 158 159 160 161 162 163 164 165 166 167 168 169]

The mean value is
0.28180923811255787 

Note:

• This is just one split; this output is repeated as many times as specified in ‘n_splits’.
• The ‘KFold’ function returns the indices of the data points. Hence, if a user wishes to see the values placed at those indices, ‘X’ and ‘y’ have to be indexed with them (as is done inside the loop above).

Instead of using ‘mean’ to find the r-squared value manually, the ‘cross_val_predict’ or ‘cross_val_score’ functions present in the scikit-learn package (model_selection) can be used. The ‘cross_val_predict’ function gives, for every data point, the prediction that was made while that point was in the held-out split. The ‘cross_val_score’ function gives the r-squared value (the default score for regressors) for each split.

### Code explanation

• The packages that are necessary to work with ‘k’ fold cross validation are imported using the ‘import’ keyword.
• The data, in the form of a CSV file, needs to be brought into the Python environment. Hence, the ‘read_csv’ function, present in the ‘pandas’ package, is used to read the CSV file and convert it into a dataframe (a pandas data structure).
• Then, certain columns are assigned to the variables ‘X’ and ‘y’ respectively, and the ‘MinMaxScaler’ function is used to scale the features into the range 0 to 1.
• An empty list is created to hold the scores, and the training data is cross-validated 10 times, as specified by the ‘n_splits’ value in the ‘KFold’ function.
• Using this method, the training and testing are done by splitting up the training dataset 10 times.
• On every iteration, the indices of the training and test datasets are printed on the screen.
• Every score appended to the list is an r-squared value. This value helps in understanding how closely the data has been fit to the line.
• The mean of these r-squared values is printed at the end.
• Instead of computing the ‘mean’ manually, functions like ‘cross_val_score’ or ‘cross_val_predict’ can also be used.

### Using ‘cross_val_predict’

Here, we are importing the cross_val_predict function present in the scikit-learn package:

from sklearn.model_selection import cross_val_predict
print("The cross validation prediction is ")
print(cross_val_predict(best_svr, X, y, cv=10))

Output:

The cross validation prediction is array([25.36718928, 23.06977613, 25.868393  , 26.4326278 , 25.17432617,
25.24206729, 21.18313164, 17.3573978 , 12.07022251, 18.5012095 ,
16.63900232, 20.69694063, 19.29837052, 23.51331985, 22.37909146,
23.39547762, 24.4107798 , 19.83066293, 21.54450501, 21.78701492,
16.23568134, 20.3075875 , 17.49385015, 16.87740936, 18.90126376, …]) 

The above output displays the cross validation prediction in an array.

### Code explanation

The ‘cross_val_predict’ function present in the scikit-learn package is imported and called on the previously prepared data, to see the cross-validation predictions.

### Using ‘cross_val_score’

Here, we are importing the cross_val_score function present in the scikit-learn package:

from sklearn.model_selection import cross_val_score
print("The cross validation score is ")
print(cross_val_score(best_svr, X, y, cv=10))

Output:

The cross validation score is an array of ten values: one r-squared score for each of the ten splits.

### Code explanation

The ‘cross_val_score’ function present in the scikit-learn package is imported and called on the previously prepared data; it returns one score per split, and these scores can be averaged to summarize the model’s cross-validation performance.

## Variations of cross-validation

There are variations of cross-validation, each used in the situation it suits best. The most commonly used one is ‘k’ fold cross-validation. The others have been listed below, with a short sketch of one of them after the list:

• Leave one out cross validation
• Leave ‘p’ out cross validation
• ‘k’ fold cross validation
• Holdout method
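As a sketch of one of these variations, leave one out cross validation can be run through the same scikit-learn interface; the data here is synthetic, for illustration only. Note that with a single held-out sample per fold, r-squared is undefined, so mean squared error is used as the score:

from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score

# Leave one out is equivalent to 'k' fold with k equal to the number of samples,
# so it is practical only for small datasets.
X_demo, y_demo = make_regression(n_samples=30, n_features=3, noise=5, random_state=0)

scores = cross_val_score(LinearRegression(), X_demo, y_demo,
                         cv=LeaveOneOut(), scoring='neg_mean_squared_error')
print("Mean squared error across folds:", -scores.mean())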

## Conclusion

Hence, in this post, we saw how ‘k’ fold cross validation eliminates the need to procure a separate validation dataset, by using a part of the training dataset itself as a validation set, thereby leaving the testing dataset unaffected. We also saw the concepts of underfitting and overfitting, why it is important for the model to fit just right, and the role of the hyperparameter. Finally, we saw how ‘k’ fold cross-validation is implemented in Python using scikit-learn and how it affects the performance of the model.


