Machine learning metrics are used to understand how well a model performed on the data supplied to it. Based on these metrics, the model's performance can be improved by tuning its hyperparameters or tweaking features of the input dataset. The main goal of a learning model is to generalize well to never-before-seen data, and performance metrics help determine how well the model generalizes to new data.
Specific metrics apply to specific kinds of learning models; not every metric can be used with every model. Typically a single metric, or a small set of metrics, is taken as the point of reference and improved upon.
Accuracy is the proportion of predictions that a classification model got right. When a model classifies the given data items into two classes, the task is known as binary classification, and accuracy can be defined as:
Accuracy = (True positives +True negatives)/(Total number of data items)
When a classification model assigns data items to more than two classes, the task is known as multi-class classification. Accuracy can then be defined as:
Accuracy = correctly predicted number of data items/ total number of data items
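The two formulas above agree: counting correctly predicted items covers both the binary and the multi-class case. A minimal sketch in Python (the function name and sample labels are illustrative, not from any particular library):

```python
# Accuracy as the fraction of correct predictions.
# Works the same way for binary and multi-class labels.
def accuracy(y_true, y_pred):
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

# Example with three classes: 4 of 5 predictions are correct.
y_true = [0, 1, 2, 2, 1]
y_pred = [0, 1, 1, 2, 1]
print(accuracy(y_true, y_pred))  # 0.8
```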
Precision is the fraction of the data points predicted as positive that were actually positive:
Precision = (True positive) / (True Positive + False Positive)
Recall is the fraction of the actually positive data points in the dataset that the model correctly predicted as positive:
Recall = (True positive) / (True Positive + False Negative)
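Both definitions share the same numerator (true positives) but differ in the denominator. A small sketch that derives both from predictions, assuming positives are labeled 1 and negatives 0 (names are illustrative):

```python
# Precision and recall from binary labels (1 = positive, 0 = negative).
def precision_recall(y_true, y_pred):
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return tp / (tp + fp), tp / (tp + fn)

# tp = 2, fp = 1, fn = 1: precision = 2/3, recall = 2/3
y_true = [1, 1, 1, 0, 0, 0]
y_pred = [1, 1, 0, 1, 0, 0]
p, r = precision_recall(y_true, y_pred)
```

Note how a model that predicts positive for everything gets perfect recall but poor precision, which is why the two are usually reported together.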
Mean squared error (MSE) measures the average squared difference between the predicted values and the actual values in the training data. It is computed by dividing the sum of squared losses by the total number of examples in the training dataset:

MSE = (Sum of squared losses) / (Total number of examples)
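As a quick sketch (sample values chosen for illustration):

```python
# Mean squared error: average of the squared prediction errors.
def mean_squared_error(y_true, y_pred):
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

# Errors are 1, -1, 2; squared: 1, 1, 4; mean: 2.0
print(mean_squared_error([3, 5, 2], [2, 6, 0]))  # 2.0
```

Squaring the errors makes large mistakes count disproportionately and keeps positive and negative errors from canceling out.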
F1 score is the harmonic mean of the precision and recall values. It balances the two metrics in a single number and is commonly used to evaluate the model on the test dataset:

F1 = 2 * (1 / ((1/Precision) + (1/Recall)))
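The formula above simplifies to 2 * precision * recall / (precision + recall). A minimal sketch:

```python
# F1: harmonic mean of precision and recall.
def f1_score(precision, recall):
    return 2 * precision * recall / (precision + recall)

# The harmonic mean is pulled toward the smaller value:
# precision 0.5 and recall 1.0 give F1 of about 0.667, not 0.75.
print(f1_score(0.5, 1.0))
```

Because the harmonic mean is dominated by the smaller of the two inputs, a model cannot achieve a high F1 by excelling at only one of precision or recall.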
AUC-ROC is defined for binary classification problems. It measures the area under the ROC curve.
ROC stands for Receiver Operating Characteristic curve, which is a visual way of assessing a binary classifier's performance.
The ROC curve plots the True positive rate (also known as recall) on the y-axis against the False positive rate on the x-axis, with each point on the curve corresponding to a different classification threshold.
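One way to see how the curve arises is to sweep a decision threshold over the classifier's scores and record the (FPR, TPR) point at each threshold. A rough sketch, with hypothetical labels and scores:

```python
# Trace ROC points by sweeping a decision threshold over scores.
def roc_points(y_true, scores):
    pos = sum(y_true)              # number of actual positives
    neg = len(y_true) - pos        # number of actual negatives
    points = []
    for thr in sorted(set(scores), reverse=True):
        tp = sum(s >= thr and t == 1 for s, t in zip(scores, y_true))
        fp = sum(s >= thr and t == 0 for s, t in zip(scores, y_true))
        points.append((fp / neg, tp / pos))  # (FPR, TPR)
    return points

# Lowering the threshold moves the point up and to the right.
print(roc_points([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))
```

The area under the resulting curve (AUC) is 1.0 for a perfect ranking of positives above negatives and 0.5 for random scoring.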
True positive rate, which is also known as sensitivity or recall, can be defined as the ratio of true positives to the sum of true positives and false negatives:
TPR = True positives/ (True positives + False negatives)
True negative rate, which is also known as specificity or selectivity, can be defined as the ratio of true negatives to the sum of true negatives and false positives:
TNR = True negatives/ (True negatives + False positives)
False positive rate, which is also known as fall-out, can be defined as the ratio of false positives to the sum of false positives and true negatives:
FPR = False positives / (False positives + True negatives)
False negative rate, which is also known as miss rate, can be defined as the ratio of false negatives to the sum of false negatives and true positives:
FNR = False negatives / (False negatives + True positives)
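All four rates come from the same confusion-matrix counts, and they pair up: TPR + FNR = 1 and TNR + FPR = 1. A small sketch with hypothetical counts:

```python
# All four rates from the confusion-matrix counts.
def rates(tp, fn, fp, tn):
    return {
        "TPR": tp / (tp + fn),  # sensitivity / recall
        "TNR": tn / (tn + fp),  # specificity
        "FPR": fp / (fp + tn),  # fall-out
        "FNR": fn / (fn + tp),  # miss rate
    }

# e.g. 40 true positives, 10 false negatives, 5 false positives, 45 true negatives
print(rates(tp=40, fn=10, fp=5, tn=45))
```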
In this post, we covered the various performance metrics used to evaluate machine learning models. The goal when training these models is to improve the value of the chosen performance metrics so that predictions on new data are good.