
The K-Fold Cross Validation in Machine Learning

Published 05th Sep, 2023 · 13 min read

    Takeaways from the article 

    • This article covers one of the most important concepts in Machine Learning: ‘k’-fold cross validation. 
    • It discusses how cross validation works, why it is important, and how ‘underfitting’, ‘overfitting’, or ‘just the right fit’ affects a model's output. 
    • Along with overfitting, we discuss what a hyperparameter is, how a hyperparameter for a given model is decided, and how its value can be checked. 
    • The article also covers how to choose the value of ‘k’, the applications of ‘k’-fold cross validation, and the variations of cross-validation. 
    • We will also discuss the implementation in Python, along with a code explanation.

    Machine learning algorithms, among their many uses, extract patterns from data and predict continuous or discrete values. It is important that the model built on the data fits it just right - it should neither overfit nor underfit. ‘Overfitting’ and ‘underfitting’ are two concepts in Machine Learning that deal with how well the model has been trained and how accurately it predicts. Closely related is the notion of a hyperparameter, a value set by the user to control how the algorithm behaves.

    Underfitting in Machine Learning

    Given a dataset and an appropriate algorithm to train on it, a model that fits the dataset well will be able to give accurate predictions on never-seen-before data. On the other hand, if the model hasn’t been trained properly on the given data, it will not be able to make accurate, or even reasonably good, predictions.

    This is because the model has failed to capture the essential patterns in the data. Stopping training prematurely is one cause of underfitting: the model isn’t trained for long enough, so it cannot perform well on new data, its results are poor, and they cannot be relied upon.

    Figure: The dashed blue line is a model that underfits the data; the black parabola is a curve that fits the data points well.

    Overfitting in Machine Learning 

    This is just the opposite of underfitting. Instead of extracting the patterns or learning the data just right, the model learns too much. It captures essentially every data point, including noise (irrelevant data that would not contribute to predicting the output when new data is encountered), and is therefore unable to generalize to new data.


    During training, the model performs well: it learns all the data points, literally memorizing the data it has been given. But in the testing phase, or when a new data point is introduced, it fails miserably, because an overfit machine learning model does not generalize to points it has not seen.

    Note: In general, the more the data, the better the training, leading to better prediction results. But it should also be ensured that the model is not simply capturing all the points; it should be learning from them, thereby discarding the noise present in the data.

    Before exposing the model to the real world, the training data is divided into two parts. One is called the ‘training set’ and the other is known as the ‘test set’. Once training is completed on the training set, the test set is shown to the model to see how it behaves with newly encountered data. This gives a good idea of how accurately the model can work with new data.
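
    As a minimal sketch of this split (the synthetic dataset, the SVR model, and the 80/20 ratio are illustrative assumptions, not part of the article’s example), scikit-learn’s ‘train_test_split’ can be used; a training score far above the test score is a typical sign of overfitting:

    from sklearn.datasets import make_regression
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVR

    # Illustrative synthetic regression data (assumption; any X and y would do)
    X, y = make_regression(n_samples=200, n_features=5, noise=10, random_state=0)

    # Hold out 20% of the data as the test set (the ratio is an arbitrary choice)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

    model = SVR(kernel='rbf').fit(X_train, y_train)

    # A much higher training score than test score would suggest overfitting
    print("Train R^2:", model.score(X_train, y_train))
    print("Test R^2:", model.score(X_test, y_test))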

    Hyperparameter

    Hyperparameters are values used in Machine Learning that are set by the user, usually after a few trials, to control how the algorithm behaves. Examples include the regularization strength in ridge regression and the learning rate in gradient descent.
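
    For instance (a hedged illustration; the specific values are arbitrary), the regularization strength of ridge regression, the learning rate of gradient descent, and the ‘C’ and ‘gamma’ values of the SVR model used later in this article are all set before training:

    from sklearn.linear_model import Ridge, SGDRegressor
    from sklearn.svm import SVR

    ridge = Ridge(alpha=1.0)  # alpha: regularization strength of ridge regression
    sgd = SGDRegressor(learning_rate='constant', eta0=0.01)  # eta0: learning rate of gradient descent
    svr = SVR(kernel='rbf', C=1.0, gamma='scale')  # C and gamma control the SVR's flexibility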

    How is a hyperparameter for a given model decided? 

    Hyperparameters depend on various factors like the algorithm in hand, the data provided, and so on. The optimal value of a hyperparameter can be found only through trial and error. This process is known as hyperparameter tuning.
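
    As a sketch of this trial-and-error process (the candidate values for ‘C’ are arbitrary, and ‘X_train’, ‘X_test’, ‘y_train’, ‘y_test’ are assumed to come from a split like the one shown earlier), one might fit the model with several values and compare the scores:

    from sklearn.svm import SVR

    # Try a few candidate values for the hyperparameter C (arbitrary choices)
    for C in [0.1, 1, 10, 100]:
        model = SVR(kernel='rbf', C=C).fit(X_train, y_train)
        print("C =", C, "-> held-out R^2 =", model.score(X_test, y_test))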

    If the test set (the dataset used to see how the model works on new data) is used constantly to check and tune the hyperparameter, the model develops an affinity to the test set. When this happens, the test set almost becomes a training set, and it can no longer be used to see how well the model generalizes to new data.

    To overcome this situation, the original dataset is split into 3 different sets - the ‘training dataset’, the ‘validation dataset’, and the ‘test dataset’ (a sketch of such a three-way split follows the list below).

    • Training dataset: This is the data that the given machine learning algorithm is trained on.
    • Validation dataset: This dataset is used to evaluate the model, i.e., for hyperparameter tuning. The result is checked, and if it is not appropriate, the hyperparameter value can be changed and the model tested on the validation set again. This way, the model is never exposed to the test set, thereby preserving the test set’s worth as a measure of the model’s ability.
    • Testing dataset: This dataset is used to see how the model performs on new data.
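
    A minimal sketch of such a three-way split (assuming scikit-learn, a feature matrix ‘X’ with target ‘y’, and an arbitrary 60/20/20 ratio) applies ‘train_test_split’ twice:

    from sklearn.model_selection import train_test_split

    # First carve out 20% of the data as the final test set
    X_rest, X_test, y_rest, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

    # Then split the remainder into training and validation sets;
    # 25% of the remaining 80% gives an overall 60/20/20 split
    X_train, X_val, y_train, y_val = train_test_split(X_rest, y_rest, test_size=0.25, random_state=0)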

    Disadvantages of randomly dividing the dataset into three different parts: 

    • Some parts of the dataset may end up with a large share of one specific type of data, so essential patterns may be missed during training.
    • The number of samples in the training set is reduced, since the data is divided into three parts.

    The solution to the above issues is to use cross-validation.  

    Cross-validation 

    It is a process in which the original dataset is divided into two parts only - the ‘training dataset’ and the ‘testing dataset’.

    The need for a separate ‘validation dataset’ is eliminated when cross-validation comes into the picture.

    There are many variations of the ‘cross-validation’ method, and the most commonly used one is known as ‘k’ fold cross-validation.  

    Steps in ‘k’ fold cross-validation 

    • In this method, the training dataset is split into multiple smaller parts/sets, ‘k’ of them. Hence the name ‘k’-fold.
    • In each round, one of the ‘k’ parts is left out and the remaining ‘k-1’ parts are used to train the model.
    • This is repeated multiple times; the number of rounds is specified by the user in the code.
    • The part that was kept out of training is used as a ‘validation dataset’. It can be used to tune the hyperparameters, see how the model performs, and change the values accordingly to yield better results.
    • The amount of training data is only slightly reduced in each round, rather than considerably. This method also helps ensure that the model remains robust and generalizes well to the data (a toy illustration of the splitting follows this list).
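
    The splitting scheme is easiest to see on a toy example (the six-sample array below is purely illustrative): with k=3, each part of the data is held out exactly once:

    import numpy as np
    from sklearn.model_selection import KFold

    X_toy = np.arange(6)  # six illustrative samples: 0..5

    # With k=3, each fold of two samples serves once as the held-out set
    for train_idx, val_idx in KFold(n_splits=3).split(X_toy):
        print("train:", train_idx, "validation:", val_idx)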

    Figure: Steps in ‘k’ fold cross-validation.

    The above image can be used as a representation of cross validation. Once part of the training set has been used to search for the best hyperparameter/s and they have been found, the model is retrained on the training data with those values. Having now learned from all of the training data, the model may give better results, and this can be tested by seeing how it performs on the testing set.
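
    A sketch of this retraining step (assuming the search settled on C=10, an arbitrary value, and that ‘X_train’, ‘y_train’, ‘X_test’, ‘y_test’ exist from an earlier split):

    from sklearn.svm import SVR

    # Retrain on the full training data with the hyperparameter chosen via cross-validation
    final_model = SVR(kernel='rbf', C=10).fit(X_train, y_train)

    # Only now is the untouched test set used, to estimate generalization
    print("Test R^2:", final_model.score(X_test, y_test))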

    How is the value of ‘k’ decided?   

    This depends on the data in hand; the value is chosen by trial and error. A common default is 10, though that choice is somewhat arbitrary. A large value of ‘k’ means less bias and higher variance. It also means more data samples are used for training in each round, which can give better, more precise outcomes.
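
    One way to sanity-check the choice (a sketch; the candidate fold counts are arbitrary, and ‘X’ and ‘y’ are assumed to be a prepared feature matrix and target) is to compare the mean score across a few values of ‘k’:

    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVR

    model = SVR(kernel='rbf')
    for k in [5, 10, 20]:  # candidate fold counts (arbitrary choices)
        scores = cross_val_score(model, X, y, cv=k)
        print("k =", k, "-> mean R^2 =", scores.mean(), "std =", scores.std())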

    Code for ‘k’ fold cross-validation  

    The data needed to understand ‘k’ fold cross validation can be taken/copied from the below location:

    This data can be pasted into a CSV file, and the below code can then be executed. Make sure to give a heading to all the columns.

    from sklearn.model_selection import KFold
    from sklearn.model_selection import cross_val_predict
    from sklearn.model_selection import cross_val_score
    from sklearn.preprocessing import MinMaxScaler
    from sklearn.svm import SVR

    import numpy as np
    import pandas as pd

    # Read the CSV file into a dataframe (replace the placeholder with your path)
    dataset = pd.read_csv("path-to-csv-file-on-your-workstation.csv")
    X = dataset.iloc[:, [0, 12]]  # columns 0 and 12 as features
    y = dataset.iloc[:, 13]       # column 13 as the target

    # Scale the features to the range [0, 1]
    scaler = MinMaxScaler(feature_range=(0, 1))
    X = scaler.fit_transform(X)

    my_scores = []
    best_svr = SVR(kernel='rbf')

    # random_state has no effect when shuffle=False, so it is omitted here
    cv = KFold(n_splits=10, shuffle=False)

    for train_index, test_index in cv.split(X):
        print("Training data index: ", train_index, "\n")
        print("Test data index: ", test_index)

        X_train, X_test = X[train_index], X[test_index]
        y_train, y_test = y[train_index], y[test_index]
        best_svr.fit(X_train, y_train)
        my_scores.append(best_svr.score(X_test, y_test))  # R-squared on the held-out fold

    print("The mean value is")
    print(np.mean(my_scores))

    # Alternatively:
    # cross_val_score(best_svr, X, y, cv=10)
    # or
    # cross_val_predict(best_svr, X, y, cv=10)

    Output: 

    Training data index:  [  0   1   2   3   4   5   6   7   8   9  10  11  12  13  14  15  16  17 
      18  19  20  21  22  23  24  25  26  27  28  29  30  31  32  33  34  35 
      36  37  38  39  40  41  42  43  44  45  46  47  48  49  50  51  52  53 
      54  55  56  57  58  59  60  61  62  63  64  65  66  67  68  69  70  71 
      72  73  74  75  76  77  78  79  80  81  82  83  84 170 171 172 173 174 
     175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 
     193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 
     211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 
     229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 
     247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 
     265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 
     283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 
     301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 
     319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 
     337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 
     355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 
     373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390 
     391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 
     409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 425 426 
     427 428 429 430 431 432 433 434 435 436 437 438 439 440 441 442 443 444 
     445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460 461 462 
     463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 
     481 482 483 484 485 486 487 488 489 490 491 492 493 494 495 496 497 498 
     499 500 501 502 503 504 505]  
     
    Test data index:  [ 85  86  87  88  89  90  91  92  93  94  95  96  97  98  99 100 101 102 
     103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 
     121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 
     139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 
     157 158 159 160 161 162 163 164 165 166 167 168 169] 
     
    The mean value is 
    0.28180923811255787 

    Note:

    • This shows just one split; the output is repeated as many times as mentioned in ‘n_splits’. 
    • The ‘KFold’ function returns the indices of the data points. Hence, if a user wishes to see the values placed at those indices, they have to be accessed appropriately.

    Instead of using ‘mean’ to find the r-squared value, the ‘cross_val_predict’ or ‘cross_val_score’ functions present in the scikit-learn package (model_selection) can also be used. ‘cross_val_predict’ gives the predictions made on each held-out split during cross-validation. ‘cross_val_score’, on the other hand, gives the r-squared value for each split.

    Code explanation 

    • The packages necessary to work with ‘k’ fold cross validation are imported using the ‘import’ keyword.
    • The data, in the form of a CSV file, needs to be brought into the Python environment.
    • Hence, the ‘read_csv’ function present in the ‘pandas’ package is used to read the CSV file and convert it into a dataframe (a pandas data structure).
    • Then, certain columns are assigned to the variables ‘X’ and ‘y’ respectively, and ‘MinMaxScaler’ is used to scale the features in ‘X’ to the range [0, 1].
    • An empty list is created, and the training data is cross-validated 10 times, as specified by the value of ‘n_splits’ in the ‘KFold’ function.
    • Using this method, training and testing are done by splitting up the training dataset 10 times.
    • After this, the indices of the training and test datasets are printed on the screen.
    • The ‘score’ method gives the r-squared value for each held-out fold. This value helps in understanding how closely the data has been fit to the line.
    • The mean of these r-squared values is printed at the end.
    • Instead of the ‘mean’ function, functions like ‘cross_val_score’ or ‘cross_val_predict’ can also be used.

    Using ‘cross_val_predict’

    Here, we import the ‘cross_val_predict’ function present in the scikit-learn package:

    from sklearn.model_selection import cross_val_predict
    print("The cross validation prediction is ")
    cross_val_predict(best_svr, X, y, cv=10)

    Output:  

    The cross validation prediction is array([25.36718928, 23.06977613, 25.868393  , 26.4326278 , 25.17432617, 
           25.24206729, 21.18313164, 17.3573978 , 12.07022251, 18.5012095 , 
           16.63900232, 20.69694063, 19.29837052, 23.51331985, 22.37909146, 
           23.39547762, 24.4107798 , 19.83066293, 21.54450501, 21.78701492, 
           16.23568134, 20.3075875 , 17.49385015, 16.87740936, 18.90126376, …]) 

    The above output displays the cross validation prediction in an array. 

    Code explanation 

    The ‘cross_val_predict’ function present in the scikit-learn package is imported, and it is called on the previously prepared data to see the cross-validation predictions.
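
    Since ‘cross_val_predict’ returns the out-of-fold predictions rather than scores, an overall r-squared value can be computed from them afterwards (a sketch reusing the ‘best_svr’, ‘X’, and ‘y’ objects from the earlier code):

    from sklearn.metrics import r2_score
    from sklearn.model_selection import cross_val_predict

    # Compare the out-of-fold predictions against the true targets
    predictions = cross_val_predict(best_svr, X, y, cv=10)
    print("Overall R^2 from cross-validated predictions:", r2_score(y, predictions))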

    Using ‘cross_val_score’

    Here, we import the ‘cross_val_score’ function present in the scikit-learn package:

    from sklearn.model_selection import cross_val_score
    print("The cross validation score is ")
    cross_val_score(best_svr, X, y, cv=10)

    Output:

    The cross validation score is ([ ….. 21.8481575 , 19.10423341, 21.23362906, 17.88772136, 14.76265616, 
           12.19848284, 17.88647891, 20.5752906 , 21.34495122, 20.42084675, 
           18.14197483, 16.19999662, 20.12750527, 20.81205896, 19.56085546, 
           19.99908337, 22.70619515, 23.04620323, 24.98168712, 24.51166359, 
           23.7297288 ])

    Code explanation

    The ‘cross_val_score’ function present in the scikit-learn package is imported, and it is called on the previously prepared data to see the cross-validation scores.
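
    Because ‘cross_val_score’ returns one score per fold, the resulting array is usually summarized by its mean and standard deviation (a sketch reusing the earlier objects):

    import numpy as np
    from sklearn.model_selection import cross_val_score

    scores = cross_val_score(best_svr, X, y, cv=10)  # one R^2 value per fold
    print("Mean R^2:", np.mean(scores), "Std:", np.std(scores))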

    Variations of cross-validation  

    There are variations of cross-validation, and they are used in the relevant situations. The most commonly used one is ‘k’-fold cross-validation. Others have been listed below (a short sketch of the leave-one-out variation follows the list):

    • Leave one out cross validation 
    • Leave ‘p’ out cross validation  
    • ‘k’ fold cross validation 
    • Holdout method 
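
    As a short sketch of the leave-one-out variation (reusing ‘X’ and ‘y’ from the earlier code; note that r-squared is undefined on a single held-out sample, so a per-sample metric such as mean absolute error is used instead):

    import numpy as np
    from sklearn.model_selection import LeaveOneOut, cross_val_score
    from sklearn.svm import SVR

    model = SVR(kernel='rbf')

    # Leave-one-out: each sample is held out exactly once, giving n splits for n samples
    scores = cross_val_score(model, X, y, cv=LeaveOneOut(), scoring='neg_mean_absolute_error')
    print("Mean absolute error (leave-one-out):", -np.mean(scores))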

    Conclusion

    Hence, in this post, we saw how ‘k’ fold cross validation eliminates the need to procure a separate validation dataset, and how a part of the training dataset itself can be used as a validation set without touching the testing dataset. We also saw the concepts of underfitting and overfitting, how important it is for the model to fit just right, and the concept of a hyperparameter. Finally, we saw how ‘k’ fold cross-validation is implemented in Python using scikit-learn, and how it affects the performance of the model.
