2022-05-24

Classification Metrics

Introduction

Performance metrics are essential for evaluating and comparing machine learning models. They provide an objective way to determine the effectiveness of a model in predicting outcomes based on input data. These metrics not only allow us to identify the strengths and weaknesses of different algorithms but also guide us in choosing the most suitable model for a specific task. Furthermore, performance metrics help in model selection, hyperparameter tuning, and diagnosing potential issues in the training process.

Machine learning problems can be broadly classified into two categories: regression and classification. Regression problems involve predicting continuous values, while classification problems involve predicting discrete labels or categories.

The performance metrics for regression and classification problems differ because of the nature of their respective predictions. Regression metrics focus on the difference between the predicted and actual values, while classification metrics assess how well the model can correctly classify the input data into predefined categories.

In this article, I will walk through the most common performance metrics for classification problems.

Binary Classification Metrics

Classification problems involve predicting discrete labels or categories based on input data. In this section, I will discuss the most commonly used performance metrics for binary classification tasks and how they help evaluate the effectiveness of machine learning models.

Confusion Matrix

A confusion matrix is a table that summarizes the performance of a classification model by comparing its predicted labels with the actual labels. The rows of the matrix represent the actual classes, and the columns represent the predicted classes. The four main elements of a binary confusion matrix, illustrated in the code sketch after this list, are:

  • True Positives (TP)
    correctly predicted positive instances
  • True Negatives (TN)
    correctly predicted negative instances
  • False Positives (FP)
    negative instances incorrectly predicted as positive
  • False Negatives (FN)
    positive instances incorrectly predicted as negative
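
To make this concrete, here is a minimal sketch (assuming scikit-learn is installed and using small, made-up binary labels) that builds the matrix and extracts its four elements:

from sklearn.metrics import confusion_matrix

# Hypothetical ground-truth and predicted binary labels (1 = positive, 0 = negative)
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

# Rows are actual classes, columns are predicted classes
cm = confusion_matrix(y_true, y_pred)
print(cm)

# For binary labels, ravel() returns the counts in the order TN, FP, FN, TP
tn, fp, fn, tp = cm.ravel()
print(tp, tn, fp, fn)  # 3, 3, 1, 1 for this toy data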

Accuracy

Accuracy is the proportion of correctly classified instances out of the total instances. It is a widely used metric for classification problems, but it can be misleading when the data is imbalanced. The equation for accuracy is:

Accuracy = \frac{TP + TN}{TP + TN + FP + FN}
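
Continuing the hypothetical example above, accuracy can be computed either directly from the confusion-matrix counts or with scikit-learn's accuracy_score (a sketch, not a full evaluation pipeline):

from sklearn.metrics import accuracy_score, confusion_matrix

y_true = [1, 0, 1, 1, 0, 0, 1, 0]  # hypothetical labels
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
accuracy_manual = (tp + tn) / (tp + tn + fp + fn)  # (TP + TN) / (TP + TN + FP + FN)
accuracy_sklearn = accuracy_score(y_true, y_pred)
print(accuracy_manual, accuracy_sklearn)  # both are 0.75 for this toy data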

Precision

Precision, also known as positive predictive value, measures the proportion of true positive instances among the instances predicted as positive. It indicates the model's ability to correctly identify positive instances. The equation for precision is:

Precision = \frac{TP}{TP + FP}

Recall

Recall, also known as sensitivity or true positive rate, measures the proportion of true positive instances among the actual positive instances. It indicates the model's ability to identify all the positive instances. The equation for recall is:

Recall = \frac{TP}{TP + FN}

F1 Score

The F1 score is the harmonic mean of precision and recall, providing a balance between them. It is particularly useful when dealing with imbalanced datasets. The equation for the F1 score is:

F1 = 2 \times \frac{Precision \times Recall}{Precision + Recall}
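
Precision, recall, and the F1 score can be computed together in one short sketch (again with the hypothetical binary labels used earlier; for multi-class data the averaging options discussed later are needed):

from sklearn.metrics import precision_score, recall_score, f1_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0]  # hypothetical labels
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

precision = precision_score(y_true, y_pred)  # TP / (TP + FP)
recall = recall_score(y_true, y_pred)        # TP / (TP + FN)
f1 = f1_score(y_true, y_pred)                # harmonic mean of precision and recall
print(precision, recall, f1)  # 0.75, 0.75, 0.75 for this toy data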

ROC-AUC

The ROC (Receiver Operating Characteristic) curve is a graphical representation of the trade-off between the true positive rate (sensitivity) and the false positive rate (1 - specificity) at various classification thresholds. The ROC-AUC score, the area under this curve, measures the overall performance of the classifier, with higher values indicating better performance. The ROC-AUC score ranges from 0 to 1, with 0.5 representing a random classifier.
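
Because the ROC curve is traced out over many thresholds, the ROC-AUC is computed from predicted probabilities (or decision scores) rather than hard labels. A minimal sketch with scikit-learn and made-up scores:

from sklearn.metrics import roc_auc_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0]                    # hypothetical labels
y_score = [0.9, 0.2, 0.4, 0.8, 0.3, 0.6, 0.7, 0.1]   # hypothetical positive-class probabilities

roc_auc = roc_auc_score(y_true, y_score)
print(roc_auc)  # 0.9375 for this toy data; 1.0 is a perfect ranking, 0.5 a random one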

PR-AUC

The PR (Precision-Recall) curve is a graphical representation of the trade-off between precision and recall at various classification thresholds. The PR-AUC score, the area under this curve, summarizes the performance of the classifier and is particularly informative when the data is imbalanced or when false positives and false negatives have different costs. Like the ROC-AUC, the PR-AUC score ranges from 0 to 1, with higher values indicating better performance.
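
A sketch of computing the area under the PR curve with scikit-learn, using the same hypothetical scores as above; average_precision_score is a closely related step-wise summary of the same curve that is often reported instead:

from sklearn.metrics import auc, average_precision_score, precision_recall_curve

y_true = [1, 0, 1, 1, 0, 0, 1, 0]                    # hypothetical labels
y_score = [0.9, 0.2, 0.4, 0.8, 0.3, 0.6, 0.7, 0.1]   # hypothetical positive-class probabilities

precision, recall, thresholds = precision_recall_curve(y_true, y_score)
pr_auc = auc(recall, precision)                # area under the PR curve (trapezoidal rule)
ap = average_precision_score(y_true, y_score)  # average precision, a step-wise estimate of the same area
print(pr_auc, ap)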

Matthews Correlation Coefficient (MCC)

The MCC is a balanced measure that takes into account true and false positives and negatives, providing an overall assessment of the classification model. The MCC ranges from -1 to 1, where 1 indicates perfect prediction, 0 indicates performance no better than random guessing, and -1 indicates total disagreement between predictions and actual labels. The equation for the MCC is:

MCC = \frac{(TP \times TN) - (FP \times FN)}{\sqrt{(TP + FP)(TP + FN)(TN + FP)(TN + FN)}}
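
scikit-learn exposes this metric directly; a minimal sketch with the hypothetical labels from earlier:

from sklearn.metrics import matthews_corrcoef

y_true = [1, 0, 1, 1, 0, 0, 1, 0]  # hypothetical labels
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

mcc = matthews_corrcoef(y_true, y_pred)
print(mcc)  # 0.5 for this toy data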

Cohen's Kappa

Cohen's Kappa is a measure of agreement between the predicted and actual labels, taking into account the agreement that could occur by chance. It ranges from -1 to 1, with higher values indicating better agreement between the model's predictions and the actual labels. The equation for Cohen's Kappa is:

Kappa = \frac{p_o - p_e}{1 - p_e}

where p_o represents the observed agreement (accuracy), and p_e represents the expected agreement by chance.
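
A sketch using scikit-learn's cohen_kappa_score, which computes this statistic from the predicted and actual labels:

from sklearn.metrics import cohen_kappa_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0]  # hypothetical labels
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

kappa = cohen_kappa_score(y_true, y_pred)
print(kappa)  # 0.5 for this toy data (p_o = 0.75, p_e = 0.5)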

Multi-Class Classification Metrics

Multi-class classification problems involve predicting one of multiple discrete labels or categories based on input data. In this section, I will discuss the most commonly used ways to aggregate the metrics above across classes and how they help evaluate the effectiveness of machine learning models.

Micro-Averaging

Micro-averaging is a technique for aggregating the performance of a multi-class classifier across all classes by first summing the true positives, false positives, and false negatives over all classes, and then computing the metrics from these totals. This method gives equal weight to each instance, making it suitable for imbalanced datasets. The equations for micro-averaged precision, recall, and F1 score are:

Precision_{micro} = \frac{\sum_{i=1}^{k} TP_i}{\sum_{i=1}^{k} (TP_i + FP_i)}
Recall_{micro} = \frac{\sum_{i=1}^{k} TP_i}{\sum_{i=1}^{k} (TP_i + FN_i)}
F1_{micro} = 2 \times \frac{Precision_{micro} \times Recall_{micro}}{Precision_{micro} + Recall_{micro}}

where k is the number of classes, and TP_i, FP_i, and FN_i are the true positives, false positives, and false negatives for class i, respectively.
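
A minimal sketch on a small, hypothetical, imbalanced three-class problem, using scikit-learn's average="micro" option:

from sklearn.metrics import precision_score, recall_score, f1_score

# Hypothetical three-class labels with unequal class sizes (five 0s, two 1s, three 2s)
y_true = [0, 0, 0, 0, 0, 1, 1, 2, 2, 2]
y_pred = [0, 0, 0, 0, 1, 1, 0, 2, 2, 1]

precision_micro = precision_score(y_true, y_pred, average="micro")
recall_micro = recall_score(y_true, y_pred, average="micro")
f1_micro = f1_score(y_true, y_pred, average="micro")
print(precision_micro, recall_micro, f1_micro)  # all 0.7 for this toy data

Note that in single-label multi-class problems every false positive for one class is a false negative for another, so micro-averaged precision, recall, and F1 all reduce to overall accuracy.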

Macro-Averaging

Macro-averaging is another technique for aggregating the performance of a multi-class classifier across all classes by first computing the metrics for each class separately, and then calculating the average of these metrics. This method gives equal weight to each class, making it suitable for balanced datasets. The equations for macro-averaged precision, recall, and F1 score are:

Precision_{macro} = \frac{1}{k} \sum_{i=1}^{k} \frac{TP_i}{TP_i + FP_i}
Recall_{macro} = \frac{1}{k} \sum_{i=1}^{k} \frac{TP_i}{TP_i + FN_i}
F1_{macro} = \frac{1}{k} \sum_{i=1}^{k} 2 \times \frac{Precision_i \times Recall_i}{Precision_i + Recall_i}
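
The same hypothetical data with average="macro" computes the unweighted mean of the per-class metrics:

from sklearn.metrics import precision_score, recall_score, f1_score

y_true = [0, 0, 0, 0, 0, 1, 1, 2, 2, 2]  # hypothetical imbalanced three-class labels
y_pred = [0, 0, 0, 0, 1, 1, 0, 2, 2, 1]

precision_macro = precision_score(y_true, y_pred, average="macro")
recall_macro = recall_score(y_true, y_pred, average="macro")
f1_macro = f1_score(y_true, y_pred, average="macro")
print(precision_macro, recall_macro, f1_macro)  # each class counts equally, however few instances it has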

Weighted Averaging

Weighted averaging is a technique for aggregating the performance of a multi-class classifier across all classes by first computing the metrics for each class separately, and then calculating the weighted average of these metrics based on the number of instances in each class. This method accounts for class imbalance. The equations for weighted-averaged precision, recall, and F1 score are:

Precision_{weighted} = \frac{1}{n} \sum_{i=1}^{k} (n_i \times \frac{TP_i}{TP_i + FP_i})
Recall_{weighted} = \frac{1}{n} \sum_{i=1}^{k} (n_i \times \frac{TP_i}{TP_i + FN_i})
F1_{weighted} = \frac{1}{n} \sum_{i=1}^{k} (n_i \times 2 \times \frac{Precision_i \times Recall_i}{Precision_i + Recall_i})

where n is the total number of instances, and n_i is the number of instances in class i.
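
Finally, average="weighted" weights each per-class metric by that class's number of true instances, matching the formulas above:

from sklearn.metrics import precision_score, recall_score, f1_score

y_true = [0, 0, 0, 0, 0, 1, 1, 2, 2, 2]  # hypothetical imbalanced three-class labels
y_pred = [0, 0, 0, 0, 1, 1, 0, 2, 2, 1]

precision_weighted = precision_score(y_true, y_pred, average="weighted")
recall_weighted = recall_score(y_true, y_pred, average="weighted")
f1_weighted = f1_score(y_true, y_pred, average="weighted")
print(precision_weighted, recall_weighted, f1_weighted)  # larger classes contribute more than in the macro average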


Ryusei Kakujo
