The K-Fold Cross Validation in Machine Learning

Takeaways from the article 

  • This article covers one of the most important concepts in Machine Learning: ‘k’-fold cross validation. 
  • It discusses how cross validation works, why it is important, and how ‘underfitting’, ‘overfitting’, or a ‘just right’ fit affects the output of a model. 
  • Along with overfitting, we discuss what a hyperparameter is, how a hyperparameter for a given model is decided, and how its value can be checked. 
  • The article also covers applications of ‘k’-fold cross validation, how to choose the value of ‘k’, and the common variations of cross-validation. 
  • We also walk through the implementation in Python along with an explanation of the code. 

Introduction 

Machine learning algorithms, among their many uses, extract patterns from data and predict continuous or discrete values. It is important that the model built on the data fits it just right - it should neither overfit nor underfit. ‘Overfitting’ and ‘underfitting’ are two concepts in Machine Learning that describe how well the model has been trained and how accurately it predicts. Closely related to both is the idea of a hyperparameter, a user-set value that controls how the algorithm behaves.

Underfitting in Machine Learning

Given a dataset, and an appropriate algorithm to train the dataset with, if the model fits the dataset rightly, it will be able to give accurate predictions on never-seen-before data.  

On the other hand, if the Machine Learning model hasn’t been trained properly on the given data due to various reasons, the model will not be able to make accurate or nearly good predictions on the data. 

This is because the model would have failed to capture the essential patterns from the data.  

If training is stopped prematurely, it can lead to underfitting. The model won’t be trained for the right amount of time, so it won’t be able to perform well on new data. Such a model cannot give good results and cannot be relied upon.  

In the typical illustration of underfitting, the dashed blue line is the model that underfits the data, while the black parabola is the curve that fits the data points well.  

Overfitting in Machine Learning 

This is just the opposite of underfitting. Instead of learning the data just right, the model learns too much: it captures essentially every data point, including noise (irrelevant data that doesn’t contribute to predicting the output for new data), and as a result it fails to generalize to new data. During training such a model performs well, literally memorizing the data it has been given. But in the testing phase, or when a new data point is introduced, it fails miserably; the new data point will not be handled well by an overfit machine learning model.   

Note: In general, the more data there is, the better the training, which leads to better prediction results. But it should also be ensured that the model is not just capturing all the points; instead, it should be learning from them, thereby filtering out the noise present in the data.   

Before exposing the model to the real world, the available data is divided into two parts. One is called the ‘training set’ and the other is known as the ‘test set’. Once training is completed on the training set, the test set is shown to the model to see how it behaves with newly encountered data. This gives a sufficient idea of how accurately the model can work with new data.   
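As a rough illustration of how such a split can be made, the sketch below uses scikit-learn’s train_test_split; the feature matrix X and target y here are random placeholders standing in for a real dataset.

from sklearn.model_selection import train_test_split
import numpy as np

# Placeholder data: 100 samples with 5 features, standing in for a real dataset
X = np.random.rand(100, 5)
y = np.random.rand(100)

# Hold out 20% of the data as the test set; the remaining 80% is the training set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
print(X_train.shape, X_test.shape)  # (80, 5) (20, 5)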

Hyperparameter

Hyperparameters are values set by the user, usually after performing a few trials, that control how a Machine Learning algorithm behaves. Examples include the regularization strength in ridge regression and the learning rate in gradient descent. 

How is a hyperparameter for a given model decided? 

Hyperparameters depend on various factors like the algorithm in hand, the data provided, and so on. The optimal value of a hyperparameter can usually be found only through trial and error. This process is known as hyperparameter tuning.  
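scikit-learn automates this trial-and-error with GridSearchCV (which internally uses the cross-validation discussed later in this article). A minimal sketch is shown below; the candidate values for C and gamma are arbitrary choices for illustration, and the training data is a placeholder.

from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVR
import numpy as np

# Placeholder training data standing in for a real dataset
X_train = np.random.rand(80, 5)
y_train = np.random.rand(80)

# Candidate hyperparameter values to try (chosen arbitrarily for illustration)
param_grid = {"C": [0.1, 1, 10], "gamma": ["scale", 0.1, 1]}

# GridSearchCV fits the model for every combination and keeps the best-scoring one
search = GridSearchCV(SVR(kernel="rbf"), param_grid, cv=5)
search.fit(X_train, y_train)
print("Best hyperparameters:", search.best_params_)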

If the test set (the dataset used to see how the model works on new data) is constantly used to check and tune the hyperparameter, the model gradually develops an affinity for the test data. When this happens, the test set effectively becomes part of the training set, and it can no longer be used to judge how well the model generalizes to new data.  

To overcome this situation, the original dataset is split into three different sets - the ‘training dataset’, the ‘validation dataset’, and the ‘test dataset’ (a minimal splitting sketch follows the list below).  

  • Training dataset: This is the data that is passed to the given machine learning algorithm to be trained upon.  
  • Validation dataset: This dataset is used to evaluate the model during hyperparameter tuning, i.e. the result is checked, and if it is not satisfactory, the hyperparameter value is changed and the model is tested again on the validation set. This way, the model is never exposed to the test set, preserving the test set’s ability to measure generalization.  
  • Testing dataset: This dataset is used to see how the model performs on new data.  
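A minimal sketch of such a three-way split, done with two calls to scikit-learn’s train_test_split on placeholder data, could look as follows (the 60/20/20 proportions are only an example).

from sklearn.model_selection import train_test_split
import numpy as np

# Placeholder data standing in for a real dataset
X = np.random.rand(100, 5)
y = np.random.rand(100)

# First carve out the test set (20% of the data)
X_rest, X_test, y_rest, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Then split the remainder into training and validation sets (60% / 20% of the original data)
X_train, X_val, y_train, y_val = train_test_split(X_rest, y_rest, test_size=0.25, random_state=42)

print(len(X_train), len(X_val), len(X_test))  # 60 20 20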

Disadvantages of randomly dividing the dataset into three different parts: 

  • Some parts of the dataset may end up with a disproportionately large amount of a specific type of data, so essential patterns may be missed during training.  
  • The number of samples available for training is reduced, since the data is divided into three parts.  

The solution to the above issues is to use cross-validation.  

Cross-validation 

It is a process in which the original dataset is divided into two parts only - the ‘training dataset’ and the ‘testing dataset’.  

The need for a separate ‘validation dataset’ is eliminated when cross-validation comes into the picture.  

There are many variations of the ‘cross-validation’ method, and the most commonly used one is known as ‘k’ fold cross-validation.  

Steps in ‘k’ fold cross-validation 

  • In this method, the training dataset is split into ‘k’ smaller parts/sets, hence the name ‘k’-fold.  
  • In each round, one of the ‘k’ parts is left out and the remaining ‘k-1’ parts are used to train the model.  
  • This is repeated ‘k’ times, once with each part held out; the number of splits is specified by the user in the code.  
  • The part that was kept out of training is used as the ‘validation dataset’. It can be used to tune hyperparameters, see how the model performs, and change values accordingly to yield better results.  
  • Even though the size of the training data is reduced to a certain extent, it isn’t reduced considerably. This method also helps ensure that the model remains robust and generalizes well to the data (a short sketch of the index generation follows this list).  
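To make the index generation concrete, the toy sketch below asks scikit-learn’s KFold splitter for 5 folds over 10 samples and prints which indices land in training and which are held out in each round.

from sklearn.model_selection import KFold
import numpy as np

# A toy dataset of 10 samples so the folds are easy to read
X_toy = np.arange(10).reshape(-1, 1)

kf = KFold(n_splits=5)  # k = 5, so each round holds out 2 samples
for fold, (train_idx, val_idx) in enumerate(kf.split(X_toy), start=1):
    print(f"Fold {fold}: train={train_idx}, held out={val_idx}")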

Steps in ‘k’ fold cross-validation

The image above is a typical representation of cross validation. Once a part of the training set has been used to find the best hyperparameter/s, the model is retrained on the full training data with those values. The model then has knowledge of all the old training data, and with it may give better results, which can be verified by seeing how the model performs on the separate testing set.     
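A hedged sketch of this final step is shown below: the value of best_C is a hypothetical winner of a cross-validated search, and the data arrays are placeholders for the real training and test splits.

from sklearn.svm import SVR
import numpy as np

# Placeholder splits standing in for the real training and test data
X_train, y_train = np.random.rand(80, 5), np.random.rand(80)
X_test, y_test = np.random.rand(20, 5), np.random.rand(20)

best_C = 10  # hypothetical best hyperparameter found via k-fold cross-validation

# Retrain on the entire training set with the chosen hyperparameter ...
final_model = SVR(kernel="rbf", C=best_C)
final_model.fit(X_train, y_train)

# ... and report performance on the untouched test set
print("Test r-squared:", final_model.score(X_test, y_test))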

How is the value of ‘k’ decided?   

This depends on the data in hand, and the value is usually chosen by trial and error. A value of 10 is commonly used, though this choice is somewhat arbitrary. A larger value of ‘k’ means less bias and higher variance; it also means more data samples are used for training in each round, which can give better, more precise outcomes.
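One practical way to choose ‘k’ is simply to compare the cross-validated scores for a few candidate values, as in the sketch below (placeholder data; the candidate values 3, 5 and 10 are arbitrary).

from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR
import numpy as np

# Placeholder data standing in for a real dataset
X, y = np.random.rand(100, 5), np.random.rand(100)
model = SVR(kernel="rbf")

# Score the same model with a few candidate values of k and compare
for k in (3, 5, 10):
    scores = cross_val_score(model, X, y, cv=k)
    print(f"k={k}: mean r-squared = {scores.mean():.3f} (std = {scores.std():.3f})")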

Code for ‘k’ fold cross-validation  

The data required to follow the ‘k’-fold cross validation example can be taken/copied from the location below:     

https://raw.githubusercontent.com/jbrownlee/Datasets/master/housing.data 

This data can be pasted into a CSV file and the code below can be executed. Make sure to give headings to all the columns.     

from sklearn.model_selection import KFold
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_predict
from sklearn.model_selection import cross_val_score

import numpy as np
import pandas as pd

# Read the CSV file into a pandas dataframe
dataset = pd.read_csv("path-to.csvfile in your workstation")

# Select the feature columns and the target column
X = dataset.iloc[:, [0, 12]]
y = dataset.iloc[:, 13]

# Scale the features to the range [0, 1]
scaler = MinMaxScaler(feature_range=(0, 1))
X = scaler.fit_transform(X)

my_scores = []
best_svr = SVR(kernel='rbf')

# 10-fold cross-validation; with shuffle=False the folds are contiguous blocks of rows
cv = KFold(n_splits=10, shuffle=False)

for train_index, test_index in cv.split(X):
    print("Training data index: ", train_index, "\n")
    print("Test data index: ", test_index)

    # Split the data into this fold's training and held-out parts
    X_train, X_test = X[train_index], X[test_index]
    y_train, y_test = y.iloc[train_index], y.iloc[test_index]

    # Fit the model on the training part and store the r-squared score on the held-out part
    best_svr.fit(X_train, y_train)
    my_scores.append(best_svr.score(X_test, y_test))

print("The mean value is")
print(np.mean(my_scores))

# or
cross_val_score(best_svr, X, y, cv=10)
# (or)
cross_val_predict(best_svr, X, y, cv=10)

Output: 

Training data index:  [  0   1   2   3   4   5   6   7   8   9  10  11  12  13  14  15  16  17 
  18  19  20  21  22  23  24  25  26  27  28  29  30  31  32  33  34  35 
  36  37  38  39  40  41  42  43  44  45  46  47  48  49  50  51  52  53 
  54  55  56  57  58  59  60  61  62  63  64  65  66  67  68  69  70  71 
  72  73  74  75  76  77  78  79  80  81  82  83  84 170 171 172 173 174 
 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 
 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 
 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 
 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 
 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 
 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 
 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 
 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 
 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 
 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 
 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 
 373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390 
 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 
 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 425 426 
 427 428 429 430 431 432 433 434 435 436 437 438 439 440 441 442 443 444 
 445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460 461 462 
 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 
 481 482 483 484 485 486 487 488 489 490 491 492 493 494 495 496 497 498 
 499 500 501 502 503 504 505]  
 
Test data index:  [ 85  86  87  88  89  90  91  92  93  94  95  96  97  98  99 100 101 102 
 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 
 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 
 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 
 157 158 159 160 161 162 163 164 165 166 167 168 169] 
 
The mean value is 
0.28180923811255787 

Note:

  • This is just one split; the output above is repeated as many times as specified in ‘n_splits’. 
  • The ‘KFold’ splitter returns the indices of the data points. Hence, if a user wishes to see the values placed at those indices, they have to be accessed appropriately.  

Instead of using ‘mean’ to find the r-squared value, the ‘cross_val_predict’ or ‘cross_val_score’ functions present in the scikit-learn package (model_selection module) can also be used. ‘cross_val_predict’ gives, for every sample, the prediction made during the split in which that sample was held out. ‘cross_val_score’, on the other hand, gives the r-squared value for each fold of the cross-validation.   

Code explanation 

  • The packages necessary to work with ‘k’-fold cross validation are imported using the ‘import’ keyword.  
  • The data, in the form of a CSV file, needs to be brought into the Python environment.  
  • Hence, the ‘read_csv’ function present in the ‘pandas’ package is used to read the CSV file and convert it into a dataframe (a pandas data structure).  
  • Then, certain columns are assigned to the variables ‘X’ and ‘y’ respectively, and MinMaxScaler is used to scale the features to the range 0 to 1.  
  • An empty list is created, and the training data is cross-validated 10 times, as specified by the ‘n_splits’ value in the ‘KFold’ function.  
  • Using this method, training and testing are done by splitting the training dataset 10 times. 
  • At each split, the indices of the training and test data are printed on the screen.  
  • The ‘score’ method returns the r-squared value for each fold. This value helps in understanding how closely the data has been fit by the model.  
  • The mean of these r-squared values is printed at the end.   
  • Instead of using the ‘mean’ function, functions like ‘cross_val_score’ or ‘cross_val_predict’ can also be used.  

Using ‘cross_val_predict’ 

Here, we are importing the cross_val_predict function present in the scikit-learn package:  

from sklearn.model_selection import cross_val_predict
print("The cross validation prediction is ")
cross_val_predict(best_svr, X, y, cv=10)

Output:  

The cross validation prediction is array([25.36718928, 23.06977613, 25.868393  , 26.4326278 , 25.17432617, 
       25.24206729, 21.18313164, 17.3573978 , 12.07022251, 18.5012095 , 
       16.63900232, 20.69694063, 19.29837052, 23.51331985, 22.37909146, 
       23.39547762, 24.4107798 , 19.83066293, 21.54450501, 21.78701492, 
       16.23568134, 20.3075875 , 17.49385015, 16.87740936, 18.90126376, …]) 

The above output displays the cross validation prediction in an array. 

Code explanation 

The ‘cross_val_predict’ function present in the scikit-learn package is imported, and it is called on the same data that was prepared earlier to obtain the cross-validation predictions.  
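Because cross_val_predict returns one out-of-fold prediction per sample, an overall r-squared can also be computed from those predictions. The short sketch below assumes best_svr, X and y from the earlier code block are still in scope.

from sklearn.metrics import r2_score
from sklearn.model_selection import cross_val_predict

# Each sample is predicted by a model that did not see it during training
predictions = cross_val_predict(best_svr, X, y, cv=10)

# A single r-squared value computed from the out-of-fold predictions
print("r-squared from out-of-fold predictions:", r2_score(y, predictions))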

Using ‘cross_val_score’  

Here, we are importing the cross_val_score function present in the scikit-learn package:

from sklearn.model_selection import cross_val_score
print("The cross validation score is ")
cross_val_score(best_svr, X, y, cv=10)

Output:

The cross validation score is ([ ….. 21.8481575 , 19.10423341, 21.23362906, 17.88772136, 14.76265616, 
       12.19848284, 17.88647891, 20.5752906 , 21.34495122, 20.42084675, 
       18.14197483, 16.19999662, 20.12750527, 20.81205896, 19.56085546, 
       19.99908337, 22.70619515, 23.04620323, 24.98168712, 24.51166359, 
       23.7297288 ])

Code explanation

The ‘cross_val_score’ function present in the scikit-learn package is imported, and it is called on the same data that was prepared earlier to obtain the cross-validation score, i.e. one r-squared value per fold. 
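Since cross_val_score returns one r-squared value per fold, it is common to summarise the array with its mean and standard deviation. A short sketch, again assuming best_svr, X and y from the earlier code block are in scope:

from sklearn.model_selection import cross_val_score

# One r-squared value per fold
scores = cross_val_score(best_svr, X, y, cv=10)

# Summarise the 10 fold scores with mean and standard deviation
print("Scores per fold:", scores)
print("Mean r-squared: %.3f (+/- %.3f)" % (scores.mean(), scores.std()))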

Variations of cross-validation  

There are several variations of cross-validation, each used in the relevant situation. The most commonly used one is ‘k’-fold cross-validation. The others are listed below, and a short sketch of each follows the list:    

  • Leave one out cross validation 
  • Leave ‘p’ out cross validation  
  • ‘k’ fold cross validation 
  • Holdout method 
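For reference, each of these variations has a ready-made splitter (or an equivalent single split) in scikit-learn. The toy sketch below only counts how many train/validation splits each one produces on six samples.

from sklearn.model_selection import KFold, LeaveOneOut, LeavePOut, train_test_split
import numpy as np

# A tiny toy dataset of six samples
X = np.arange(6).reshape(-1, 1)
y = np.arange(6)

print(len(list(LeaveOneOut().split(X))))      # 6 splits: each sample held out once
print(len(list(LeavePOut(p=2).split(X))))     # 15 splits: every pair of samples held out
print(len(list(KFold(n_splits=3).split(X))))  # 3 splits: folds of two samples each

# The holdout method is simply a single train/test split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)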

Conclusion 

Hence, in this post, we saw how ‘k’-fold cross validation eliminates the need to procure a separate validation dataset, and how a part of the training dataset itself can be used as a validation set, leaving the testing dataset untouched. We also covered the concepts of underfitting and overfitting, how important it is for the model to fit just right, and the concept of a hyperparameter. Finally, we saw how ‘k’-fold cross-validation is implemented in Python using scikit-learn and how it affects the performance of the model.  

KnowledgeHut

KnowledgeHut

Author

KnowledgeHut is an outcome-focused global ed-tech company. We help organizations and professionals unlock excellence through skills development. We offer training solutions under the people and process, data science, full-stack development, cybersecurity, future technologies and digital transformation verticals.
Website : https://www.knowledgehut.com

Join the Discussion

Your email address will not be published. Required fields are marked *

Suggested Blogs

A Peak Into the World of Data Science

Touted as the sexiest job in the 21st century, back in 2012 by Harvard Business Review, the data science world has since received a lot of attention across the entire world, cutting across industries and fields. Many people wonder what the fuss is all about. At the same time, others have been venturing into this field and have found their calling.  Eight years later, the chatter about data science and data scientists continues to garner headlines and conversations. Especially with the current pandemic, suddenly data science is on everyone’s mind. But what does data science encompass? With the current advent of technology, there are terabytes upon terabytes of data that organizations collect daily. From tracking the websites we visit - how long, how often - to what we purchase and where we go - our digital footprint is an immense source of data for a lot of businesses. Between our laptops, smartphones and our tablets - almost everything we do translates into some form of data.  On its own, this raw data will be of no use to anyone. Data science is the process that repackages the data to generate insights and answer business questions for the organization. Using domain understanding, programming and analytical skills coupled together with business sense and know-how, existing data is converted to provide actionable insights for an organization to drive business growth. The processed data is what is worth its weight in gold. By using data science, we can uncover existing insights and behavioural patterns or even predict future trends.  Here is where our highly-sought-after data scientists come in.  A data scientist is a multifaceted role in an organization. They have a wide range of knowledge as they need to marry a plethora of methods, processes and algorithms with computer science, statistics and mathematics to process the data in a format that answers the critical business questions meaningfully and with actionable insights for the organization. With these actionable data, the company can make plans that will be the most profitable to drive their business goals.  To churn out the insights and knowledge that everyone needs these days, data science has become more of a craft than a science despite its name. The data scientists need to be trained in mathematics yet have some creative and business sense to find the answers they are looking in the giant haystack of raw data. They are the ones responsible for helping to shape future business plans and goals.  It sounds like a mighty hefty job, doesn’t it? It is also why it is one of the most sought after jobs these days. The field is rapidly evolving, and keeping up with the latest developments takes a lot of dedication and time, in order to produce actionable data that the organizations can use.  The only constant through this realm of change is the data science project lifecycle. We will discuss briefly below on the critical areas of the project lifecycle. The natural tendency is to envision that it is a circular process immediately - but there will be a lot of working back and forth within some phases to ensure that the project runs smoothly.  Stage One: Business Understanding  As a child, were you one of those children that always asked why? Even when the adults would give you an answer, you followed up with a “why”? Those children will have probably grown up to be data scientists as it seems, their favourite question is: Why? By asking the why - they will get to know the problem that needs to be solved and the critical question will emerge. 
Once there is a clear understanding of the business problem and question, then the work can begin. Data scientists want to ensure that the insights that come from this question are supported by data and will allow the business to achieve the desired results. Therefore, the foundation stone to any data science project is in understanding the business.  Stage Two: Data Understanding  Once the problem and question have been confirmed, you need to start laying out the objectives of this project by determining the required variables to be predicted. You must know what you need from the data and what the data should address. You must collate all the information and data, which can be reasonably difficult. An agreement over the sources and the requirements of the data characteristics needs to be reached before moving forward.  Through this process, an efficient and insightful understanding is required of how the data can and will be used for the project. This operational management of the data is vital, as the data that is sourced at this stage will define the project and how effective the solutions will be in the end.  Stage Three: Data Preparation  It has been said quite often that a bulk of a data scientist’s time is spent in preparing the data for use. In this report from CrowdFlower in 2016, the percentage of time spent on cleaning and organizing data is pegged at 60%. That is more than half their day!  Since data comes in various forms, and from a multitude of sources, there will be no standardization or consistency throughout the data. Raw data needs to be managed and prepared - with all the incomplete values and attributes fixed, and all deconflicting values in the data eliminated. This process requires human intervention as you must be able to discern which data values are required to reach your end goal. If the data is not prepared according to the business understanding, the final result might not be suitable to address the issue.  Stage Four: Modeling Once the tedious process of preparation is over, it is time to get the results that will be required for this project lifecycle. There are various types of techniques that can be used, ranging from decision-tree building to neural network generation. You must decide which would be the best technique based on the question that needs to be answered. If required, multiple modeling techniques can be used; where each task must be performed individually. Generally, modeling techniques are applied more than once (per process), and there will be more than one technique used per project.  With each technique, parameters must be set based on specific criteria. You, as the data scientist, must apply your knowledge to judge the success of the modeling and rank the models used based on the results; according to pre-set criteria. Stage Five: Evaluation Once the results are churned out and extracted, we then need to refer back to the business query that we talked about in Stage One and decide if it answers the question raised; and if the model and data meet the objectives that the data science project has set out to address. The evaluation also can unveil other results that are not related to the business question but are good points for future direction or challenges that the organization might face. These results should be tabled for discussion and used for new data science projects. Final Stage: Deployment  This is almost the finishing line!  
Now with the evaluated results, the team would need to sit down and have an in-depth discussion on what the data shows and what the business needs to do based on the data. The project team should come up with a suitable plan for deployment to address the issue. The deployment will still need to be monitored and assessed along the way to ensure that the project will be a successful one; backed by data.  The assessment would normally restart the project lifecycle; bringing you full circle.  Data is everywhere  In this day and age, we are surrounded by a multitude of data science applications as it crosses all industries. We will focus on these five industries, where data science is making waves. Banking & Finance  Financial institutions were the earliest adopters of data analytics, and they are all about data! From using data for fraud or anomaly detection in their banking transactions to risk analytics and algorithmic trading - one will find data plays a key role in all levels of a financial institution.  Risk analytics is one of the key areas where data science is used; as financial institutions depend on it to make strategic decisions for the financial health of the business. They need to assess each risk to manage and optimize their cost.  Logistics & Transportation  The world of logistics is a complex one. In a production line, raw materials sometimes come from all over the world to create a single product. A delay of any of the parts will affect the production line, and the output of stock will be affected drastically. If logistical delays can be predicted, the company can adjust quickly to another alternative to ensure that there will be no gap in the supply chain, ensuring that the production line will function at optimum efficiency.  Healthcare  2020 has been an interesting one. It has been a battle of a lifetime for many of us. Months have passed, and yet the virus still rages on to wreak havoc on lives and economies. Many countries have turned to data science applications to help with their fight against COVID-19. With so much data generated daily, people and governments need to know various things such as:  Epidemiological clusters so people can be quarantined to stop the spread of the virus tracking of symptoms over thousands of patients to understand how the virus transmits and mutates to find vaccines and  solutions to mitigate transmission. Manufacturing  In this field, millions can be on the line each day as there are so many moving parts that can cause delays, production issues, etc. Data science is primarily used to boost production rates, reduce cost (workforce or energy), predict maintenance and reduce risks on the production floor.  This allows the manufacturer to make plans to ensure that the production line is always operating at the optimum level, providing the best output at any given time.  Retail (Brick & Mortar, Online)  Have you ever wondered why some products in a shop are placed next to each other or how discounts on items work? All those are based on data science.  The retailers track people’s shopping routes, purchases and basket matching to work out details like where products should be placed; or what should go on sale and when to drive up the sales for an item. And that is just for the instore purchases.  Online data tracks what you are buying and suggests what you might want to buy next based on past purchase histories; or even tells you what you might want to add to your cart. 
That’s how your online supermarket suggests you buy bread if you have a jar of peanut butter already in your cart.  As a data scientist, you must always remember that the power in the data. You need to understand how the data can be used to find the desired results for your organization. The right questions must be asked, and it has become more of an art than a science. Image Source: Data science Life Cycle
Rated 4.0/5 based on 10 customer reviews
9828
A Peak Into the World of Data Science

Touted as the sexiest job in the 21st century, ba... Read More

Activation Functions for Deep Neural Networks

The Universal Approximation Theorem Any predictive model is a mathematical function, y = f(x) that can map the features (x) to the target variable (y). The function, f(x) can be a linear function or it can be a fairly complex nonlinear function. The function, f(x) can help predict with high accuracy depending on the distribution of the data. In the case of neural networks, it would also depend on the type of network architecture that's employed. The Universal Approximation Theorem says that irrespective of what the f(x) is, a neural network model can be built that can approximately deliver the desired result. In order to build a proper neural network architecture, let us take a look at the activation functions. What are Activation Functions? Simply put, activation functions define the output of neurons given a certain set of inputs. Activation functions are mathematical functions that are added to neural network models to enable the models to learn complex patterns. An activation function takes in the output from the previous layer, passes it through the mathematical function to convert it into some form, that can be considered as an input for the next computation layer. Activation functions determine the final accuracy of a network model while also contributing to the computational efficiency of building the model. Why do we need Activation Functions? In a neural network, if we add the hidden layers as the weighted sum of the inputs, this would translate into a linear function which is equivalent to a linear regression model. Image source: Neural Network ArchitectureIn the above diagram, we see the hidden layer is simply the weighted sum of the inputs from the input layer. For example, b1 = bw1 + a1w1 + a2w3 which is nothing but a linear function.  Multi-layer neural network models can classify linearly inseparable classes. However, in order to do so, we need the network to be transformed to a nonlinear function. For this nonlinear transformation to happen, we would pass the weighted sum of the inputs through an activation function. These activation functions are nonlinear functions which are applied at the hidden layers. Each hidden layer can have different activation functions, though mostly all neurons in each layer will have the same activation function. Types of Activation Functions? In this section we discuss the following: Linear Function Threshold Activation Function Bipolar Activation Function Logistic Sigmoid Function Bipolar Sigmoid Function Hyperbolic Tangent Function Rectified Linear Unit Function Swish Function (proposed by Google Brain - a deep learning artificial intelligence research team at Google) Linear Function: A linear function is similar to a straight line, y=mx. Irrespective of the number of hidden layers, if all the layers are linear in nature, then the final output is also simply a linear function of the input values. Hence we take a look at the other activation functions which are non-linear in nature and can help learn complex patterns. Threshold Activation Function: In this case, if the input is above a certain value, the neuron is activated. However, it is to note that this function provides either a 1 or a 0 as the output. In other words, if we need to classify certain inputs into more than 2 categories, a Threshold-Activation function is not a suitable one. 
Because of its binary output nature, this function is also known as binary-step activation function.Threshold Activation FunctionBipolar Activation Function: This is similar to the threshold function that was explained above. However, this activation function will return an output of either -1 or +1 based on a threshold.Bipolar Activation FunctionLogistic Sigmoid Function: One of the most frequently used activation functions is the Logistic Sigmoid Function. Its output ranges between 0 and 1 and is plotted as an ‘S’ shaped graph.Logistic Sigmoid FunctionThis is a nonlinear function and is characterised by a small change in x that would lead to large change in y. This activation function is generally used for binary classification where the expected output is 0 or 1. This activation function provides an output between 0 and 1 and a default threshold of 0.5 is considered to convert the continuous output to 0 or 1 for classifying the observationsAnother variation of the Logistic Sigmoid function is the Bipolar Sigmoid Function. This activation function is a rescaled version of the Logistic Sigmoid Function which provides an output in the range of -1 to +1.Bipolar Logistic FunctionHyperbolic Tangent Function: This activation function is quite similar to the sigmoid function. Its output ranges between -1 to +1.Hyperbolic Tangent FunctionRectified Linear Activation Function: This activation function, also known as ReLU, outputs the input if it is positive, else will return zero. That is to say, if the input is zero or less, this function will return 0 or will return the input itself. This function mostly behaves like a linear function because of which the computational simplicity is achieved.This activation function has become quite popular and is often used because of its computational efficiency compared to sigmoid and the hyperbolic tangent function that helps the model converge faster.  Another critical point to note is that while the sigmoid & the hyperbolic tangent function tries to approximate a zero value, the Rectified Linear Activation Functions can return true zero.Rectified Linear Units Activation FunctionOne disadvantage of ReLU is that when the inputs are close to zero or negative, the gradient of the function becomes zero. This causes a problem for the algorithm while performing back-propagation and in turn the model cannot converge. This is commonly termed as the “Dying” ReLU problem. There are a few variations of the ReLU activation function, such as, Noisy ReLU, Leaky ReLU, Parametric ReLU and Exponential Linear Units (ELU) Leaky ReLU which is a modified version of ReLU, helps solve the “Dying” ReLU problem. It helps perform back-propagation even when the inputs are negative. Leaky ReLU, unlike ReLU, defines a small linear component of x when x is a negative value. With this change in leaky ReLU, the gradient can be of non-zero value instead of zero thus avoiding dead neurons. However, this might also bring in a challenge with Leaky ReLU when it comes to predicting negative values.  Exponential Linear Unit (ELU) is another variant of ReLU, which unlike ReLU and leaky ReLU, uses a log curve instead of a straight line to define the negative values. Swish Activation Function: Swish is a new activation function that has been proposed by Google Brain. While ReLU returns zero for negative values, Swish doesn’t return a zero for negative inputs. 
Swish is a self-gating technique which implies that while normal gates require multiple scalar inputs, self-gating technique requires a single input only. Swish has certain properties - Unlike ReLU, Swish is a smooth and non-monotonic function which makes it more acceptable compared to ReLU. Swish is unbounded above and bounded below.  Swish is represented as x · σ(βx), where σ(z) = (1 + exp(−z))−1 is the sigmoid function and β is a constant or a trainable parameter.  Activation functions in deep learning and the vanishing gradient descent problem Gradient based methods are used by various algorithms to train the models. Neural networks algorithm uses stochastic gradient descent method to train the model. A neural network algorithm randomly assigns weights to the layers and once the output is predicted, it calculates the prediction errors. It uses these errors to estimate a gradient that can be used to update the weights in the network. This is done in order to reduce the prediction errors. The error gradient is updated backward from the output layer to the input layer.  It is preferred to build a neural network model with a larger number of hidden layers. With more hidden layers, the neural network model can achieve enhanced capability to perform more accurately.  One problem with too many layers is that the gradient diminishes pretty fast as it moves from the output layer to the input layer, i.e. during the back propagation. By the time it reaches the other end backward, it is quite possible that the error might get too small to make any effect on the model performance improvement. Basically, this is a situation where some difficulty is faced while training a neural network model using gradient based methods.  This is known as the vanishing gradient descent problem. Gradient based methods might face this challenge when certain activation functions are used in the network.  In deep neural networks, various activations functions are used. However when training deep neural network models, the vanishing gradient descent problems can demonstrate unstable behavior.  Various workaround solutions have been proposed to solve this problem. The most commonly used activation function is the ReLU activation function that has proven to perform way better than any other previously existing activation functions like sigmoid or hyperbolic tangent. As mentioned above, Swish improves upon ReLU being a smooth and non-monotonic function. However, though the vanishing gradient descent problem is much less severe in Swish, it does not completely avoid the vanishing gradient descent problem. To tackle this problem, a new activation function has been proposed. “The activation function in the neural network is one of the important aspects which facilitates the deep training by introducing the nonlinearity into the learning process. However, because of zero-hard rectification, some of the existing activation functions such as ReLU and Swish miss to utilize the large negative input values and may suffer from the dying gradient problem. Thus, it is important to look for a better activation function which is free from such problems.... 
The proposed LiSHT activation function is an attempt to scale the non-linear Hyperbolic Tangent (Tanh) function by a linear function and tackle the dying gradient problem… A very promising performance improvement is observed on three different types of neural networks including Multi-layer Perceptron (MLP), Convolutional Neural Network (CNN) and Recurrent Neural Network like Long-short term memory (LSTM).“   - Swalpa Kumar Roy, Suvojit Manna, et al, Jan 2019 In a paper published here, Swalpa Kumar Roy, Suvojit Manna, et al proposes a new non-parametric activation function - the Linearly Scaled Hyperbolic Tangent (LiSHT) - for Neural Networks that attempts to tackle the vanishing gradient descent problem. 
Rated 4.0/5 based on 15 customer reviews
8480
Activation Functions for Deep Neural Networks

The Universal Approximation Theorem Any predictiv... Read More

Data Science: Correlation vs Regression in Statistics

In this article, we will understand the key differences between correlation and regression, and their significance. Correlation and regression are two different types of analyses that are performed on multi-variate distributions of data. They are mathematical concepts that help in understanding the extent of the relation between two variables: and the nature of the relationship between the two variables respectively. Correlation Correlation, as the name suggests is a word formed by combining ‘co’ and ‘relation’. It refers to the analysis of the relationship that is established between two variables in a given dataset. It helps in understanding (or measuring) the linear relationship between two variables.  Two variables are said to be correlated when a change in the value of one variable results in a corresponding change in the value of the other variable. This could be a direct or an indirect change in the value of variables. This indicates a relationship between both the variables.  Correlation is a statistical measure that deals with the strength of the relation between the two variables in question.  Correlation can be a positive or negative value. Positive Correlation Two variables are considered to be positively correlated when the value of one variable increases or decreases following an increase or decrease in the value of the other variable respectively.  Let us understand this better with the help of an example: Suppose you start saving your money in a bank, and they offer some amount of interest on the amount you save in the bank. The more the amount you store in the bank, the more interest you get on your money. This way, the money stored in a bank and the interest obtained on it are positively correlated. Let us take another example: While investing in stocks, it is usually said that higher the risk while investing in a stock, higher is the rate of returns on such stocks.  This shows a direct inverse relationship between the two variables since both of them increase/decrease when the other variable increases/decreases respectively. Negative Correlation Two variables are considered to be negatively correlated when the value of one variable increases following a decrease in the value of the other variable. Let us understand this with an example: Suppose a person is looking to lose weight. The one basic idea behind weight loss is reducing the number of calorie intake. When fewer calories are consumed and a significant number of calories are burnt, the rate of weight loss is quicker. This means when the amount of junk food eaten is decreased, weight loss increases. Let us take another example: Suppose a popular non-essential product that is being sold faces an increase in the price. When this happens, the number of people who purchase it will reduce and the demand would also reduce. This means, when the popularity and price of the product increases, the demand for the product reduces. An inverse proportion relationship is observed between the two variables since one value increases and the other value decreases or one value decreases and the other value increases.  Zero Correlation This indicates that there is no relationship between two variables. It is also known as a zero correlation. This is when a change in one variable doesn't affect the other variable in any way. Let us understand this with the help of an example: When the increase in height of our friend/neighbour doesn’t affect our height, since our height is independent of our friend’s height.  
Correlation is used when there is a requirement to see whether the two variables being worked upon are related to each other, and if they are, what the extent of this relationship is and whether the values are positively or negatively correlated. Pearson's correlation coefficient is a popular measure used to quantify the correlation between two variables.

Regression

Regression is the type of analysis that helps in the prediction of a dependent value when the value of the independent variable is given. For example, given a dataset that contains two variables (or columns, if visualized as a table), values for both variables would be available for most rows, while some values of one of the variables would be missing and need to be found. One of the variables depends on the other, thereby forming an equation that represents the relationship between the two. Regression helps in predicting the missing values.

Note: The idea behind any regression technique is to ensure that the difference between the predicted and the actual value is minimal, thereby reducing the error that occurs during the prediction of the dependent variable with the help of the independent variable.

There are different types of regression, and some of them are listed below:

Linear Regression

This is one of the most basic kinds of regression. It usually involves two variables, where one variable is known as the 'dependent' variable and the other is known as the 'independent' variable. Given a dataset, a pattern (a linear equation) has to be formed with the help of these two variables, and this equation is used to fit the given data to a straight line. This straight line is then used to predict the value of the dependent variable for a given input. The predicted values are usually continuous.

Logistic Regression

Logistic regression is used when the variable to be predicted is categorical rather than continuous. There are different types of logistic regression:

Binary logistic regression is a regression technique wherein only two outcome categories are possible, i.e. 0 or 1, yes or no, true or false, and so on.

Multinomial logistic regression helps predict an output wherein the outcome belongs to one of more than two classes or categories. In other words, this algorithm is used to predict a nominal dependent variable.

Ordinal logistic regression deals with dependent variables whose categories are ordered (ranked), and predicts them with the help of independent variables.

Ridge Regression

Ridge regression is also known as L2 regularization. It is a regression technique that helps in finding the best coefficients for a linear regression model with the help of an estimator known as the ridge estimator. It is used as an alternative to the popular ordinary least squares method, since ridge estimates have lower variance and hence can produce better coefficients. It doesn't eliminate coefficients, and therefore doesn't produce sparse, simple models.

Lasso Regression

LASSO is an acronym that stands for 'Least Absolute Shrinkage and Selection Operator'. It is a type of linear regression that uses the concept of 'shrinkage'. Shrinkage is a process by which values in a dataset are reduced (shrunk) towards a central point, such as the mean. It helps in creating simple, easy to understand, sparse models, i.e. models that have fewer parameters to deal with. Lasso regression is well suited for models that show high levels of collinearity, and for situations where parts of the modelling process, such as variable selection or parameter elimination, need to be automated.
Lasso regression is used to perform L1 regularization. L1 regularization is a technique that adds a penalty based on the absolute values of the coefficients in the equation. This results in simple, easy to use, sparse models that contain fewer coefficients; some of these coefficients can be shrunk to exactly 0 and hence eliminated from the model altogether, making the model simpler. It is often said that lasso regression is easier to work with and understand in comparison to ridge regression.

There are significant differences between these two statistical concepts.

Difference between Correlation and Regression

Let us summarize the difference between correlation and regression:

Correlation: There are two variables, and their relationship is understood and measured.
Regression: The two variables are represented as 'dependent' and 'independent' variables, and the dependent variable is predicted.

Correlation: The relationship between the two variables is analysed.
Regression: This concept tells how one variable affects the other and tries to predict the dependent variable.

Correlation: The relationship between two variables (say 'x' and 'y') is the same whether it is expressed as 'x is related to y' or as 'y is related to x'.
Regression: There is a significant difference between saying 'x depends on y' and 'y depends on x', because the independent and dependent variables change.

Correlation: The correlation between two variables can be expressed visually as a scatter of points on a graph.
Regression: A line or a curve is fitted to the given data and extrapolated to predict new values, while making sure the line or curve fits the data on the graph.

Correlation: It is a numerical value that tells about the strength of the relation between two variables.
Regression: It predicts one variable based on the independent variables (the predicted value can be continuous or discrete, depending on the type of regression) by fitting a line to the data.

Conclusion

In this article, we understood the significant differences between two statistical techniques, namely correlation and regression, with the help of examples. Correlation establishes a relationship between two variables, whereas regression deals with the prediction of values and curve fitting.