Precision-Recall Curves: How to Easily Evaluate Machine Learning Models in No Time


Precision-Recall curves are a great way to visualize how your model predicts the positive class. In this article, you’ll learn how they work in-depth and go through hands-on examples.

As the name suggests, you can use precision-recall curves to visualize the relationship between precision and recall. The relationship is plotted across different probability thresholds, most often to compare several models on the same chart.

A perfect model is shown at the point (1, 1), indicating perfect scores for both precision and recall. You’ll usually end up with a model that bows towards the mentioned point but isn’t quite there.

Here’s what the article covers:

  • Reviewing the confusion matrix, precision, and recall
  • Dataset loading and preparation
  • Comparing Precision-Recall curves
  • Conclusion

Reviewing Confusion matrix, Precision, and Recall

Before diving deep into precision, recall, and their relationship, let’s make a quick refresher on the confusion matrix. Here’s its most general version:

Image 1 – Confusion matrix (image by author)

That’s great, but let’s make it a bit less abstract by plugging in actual values:

Image 2 – Confusion matrix with real data (image by author)

You can calculate dozens of different metrics from here, precision and recall being two of them. 

Precision

Precision is a metric that tells you what proportion of positive predictions is actually correct. It is calculated as the number of true positives divided by the sum of true positives and false positives:

Image 3 – Precision formula (image by author)

Two terms to clarify:

  • True positive – an instance that was positive and classified as positive (good wine classified as a good wine)
  • False positive – an instance that was negative but classified as positive (bad wine classified as good)

You can now easily calculate the precision score from the confusion matrix shown in Image 2. Here’s the procedure:

Image 4 – Precision calculation (image by author)

Both precision and recall range between 0 and 1 (higher is better), so 0.84 isn’t too bad.

A high precision value means your model doesn’t produce many false positives.
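
If you want to verify the math yourself, the formula is a one-liner in Python. The counts below are made up for illustration and aren’t the ones shown in Image 2:

```python
# Hypothetical counts, purely for illustration (not the values from Image 2)
true_positives = 42
false_positives = 8

precision = true_positives / (true_positives + false_positives)
print(precision)  # 0.84
```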

Recall

Recall is the most useful metric for many classification problems. It reports the proportion of actual positive instances the model classified correctly: the number of true positives divided by the sum of true positives and false negatives. You can calculate it with the following formula:

Image 5 – Recall formula (image by author)

Two terms to clarify:

  • True positive – an instance that was positive and classified as positive (good wine classified as a good wine)
  • False negative – an instance that was positive but classified as negative (good wine classified as bad)

Sure, it’s all fun and games when classifying wines, but the cost of misclassification can be measured in human lives: a patient has cancer, but the model says they don’t. It’s the same principle as with wines, only far more costly.

You can calculate the recall score from the formula mentioned above. Here’s a complete walkthrough:

Image 6 – Recall calculation (image by author)

Just like precision, recall ranges between 0 and 1 (higher is better), so 0.61 isn’t that great.

A low recall value means your model produces a lot of false negatives.
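
If you’d rather not count true positives and false negatives by hand, scikit-learn computes both metrics straight from the labels. Here’s a tiny made-up example (the labels below have nothing to do with the wine dataset):

```python
from sklearn.metrics import precision_score, recall_score

# Tiny made-up labels: 1 = good wine, 0 = bad wine
y_true = [1, 1, 1, 0, 0, 1, 0, 1]
y_pred = [1, 0, 1, 0, 1, 1, 0, 0]

print(precision_score(y_true, y_pred))  # 0.75 -> 3 TP / (3 TP + 1 FP)
print(recall_score(y_true, y_pred))     # 0.6  -> 3 TP / (3 TP + 2 FN)
```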

You now know how both of these metrics work independently. Let’s connect them to a single visualization next.

Dataset loading and preparation

You’ll use the White wine quality dataset for the practical part. Here’s how to load it with Python:
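
The snippet below is a minimal sketch that assumes the dataset is available as a semicolon-separated CSV on the UCI Machine Learning Repository; swap the URL for a local path if you have the file downloaded:

```python
import pandas as pd

# Assumed source: the white wine subset of the UCI Wine Quality dataset.
# The file is semicolon-separated, hence sep=';'.
url = (
    'https://archive.ics.uci.edu/ml/machine-learning-databases/'
    'wine-quality/winequality-white.csv'
)
df = pd.read_csv(url, sep=';')
df.head()
```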

Here’s what the first couple of rows look like:

Image 7 – White wine dataset head (image by author)

As you can see from the quality column, this is not a binary classification problem – so you’ll turn it into one. Let’s say the wine is Good if the quality is 7 or above, and Bad otherwise:
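
A minimal sketch of that conversion; the new target column name is_good_wine is my choice for this walkthrough, not necessarily what the original code used:

```python
# Good = quality of 7 or above, Bad = everything else
df['is_good_wine'] = ['Good' if q >= 7 else 'Bad' for q in df['quality']]

# The original quality column is no longer needed
df.drop('quality', axis=1, inplace=True)
```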

Next, let’s visualize the target variable distribution. Here’s the code:
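
Something along these lines should do, assuming the is_good_wine column from the previous step (the styling is arbitrary and won’t match Image 8 exactly):

```python
import matplotlib.pyplot as plt

# Bar chart of the class counts
df['is_good_wine'].value_counts().plot(kind='bar', color='#087E8B')
plt.title('Class distribution of the target variable')
plt.xlabel('Wine quality')
plt.ylabel('Count')
plt.show()
```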

And here’s the visualization:

Image 8 – Class distribution of the target variable (image by author)

Roughly a 4:1 ratio, indicating a skew in the target variable. There are many more bad wines than good ones, meaning the model will learn to classify bad wines better. You could use oversampling/undersampling techniques to overcome this issue, but that’s beyond the scope of this article.

You can make a train/test split next:
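
A sketch of the split; the 80:20 ratio and the random seed are assumptions, and stratify=y keeps the 4:1 class ratio intact in both sets:

```python
from sklearn.model_selection import train_test_split

# Features = everything except the target column
X = df.drop('is_good_wine', axis=1)
y = df['is_good_wine']

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)
```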

And that’s it! You’ll train a couple of models and visualize precision-recall curves next.

Comparing Precision-Recall curves

The snippet below shows you how to train logistic regression, decision tree, random forests, and extreme gradient boosting models. It also shows you how to grab probabilities for the positive class:
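
Here’s a sketch of how that could look. The models use default hyperparameters, which may differ from whatever produced the curves in Image 9:

```python
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from xgboost import XGBClassifier

# scikit-learn models handle the string labels ('Good'/'Bad') directly
models = {
    'Logistic regression': LogisticRegression(max_iter=1000),
    'Decision tree': DecisionTreeClassifier(),
    'Random forests': RandomForestClassifier(),
}

probabilities = {}
for name, model in models.items():
    model.fit(X_train, y_train)
    # Keep only the column of probabilities that belongs to the 'Good' class
    good_idx = list(model.classes_).index('Good')
    probabilities[name] = model.predict_proba(X_test)[:, good_idx]

# Recent XGBoost versions insist on numeric labels, so map Good/Bad to 1/0
xgb = XGBClassifier()
xgb.fit(X_train, (y_train == 'Good').astype(int))
probabilities['XGBoost'] = xgb.predict_proba(X_test)[:, 1]
```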

You can obtain the values for precision, recall, and AUC (Area Under the Curve) for every model next. The only requirement is to remap the Good and Bad class names to 1 and 0, respectively:
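
One way to do it with scikit-learn, assuming the probabilities dictionary from the previous step; precision_recall_curve expects 0/1 labels, hence the remapping:

```python
from sklearn.metrics import precision_recall_curve, auc

# Remap the string labels: Good -> 1, Bad -> 0
y_test_binary = (y_test == 'Good').astype(int)

curves = {}
for name, probs in probabilities.items():
    precision, recall, _ = precision_recall_curve(y_test_binary, probs)
    curves[name] = (precision, recall, auc(recall, precision))
```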

Finally, you can visualize precision-recall curves:
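
A sketch of the plotting code, looping over the curves dictionary built above:

```python
import matplotlib.pyplot as plt

plt.figure(figsize=(10, 6))
for name, (precision, recall, pr_auc) in curves.items():
    plt.plot(recall, precision, label=f'{name} (AUC = {pr_auc:.2f})')

plt.title('Precision-Recall curves')
plt.xlabel('Recall')
plt.ylabel('Precision')
plt.legend(loc='lower left')
plt.show()
```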

Here’s the corresponding visualization:

Image 9 – Precision-Recall curves for different machine learning models (image by author)

As you can see, none of the curves stretches to the (1, 1) point, but that’s expected. The AUC value is an excellent metric for comparing different models (higher is better). The random forest algorithm did best on this dataset, with an AUC score of 0.83.

Conclusion

To summarize, you should visualize precision-recall curves any time you want to visualize the tradeoff between false positives and false negatives. A high number of false positives leads to low precision, and a high number of false negatives leads to low recall.

You should aim for models with both high precision and high recall, but in reality one of the two usually matters more, so you can optimize for that one and then adjust the classification threshold accordingly.

What’s your approach to model selection? Let me know in the comment section.

Learn more 

Join my private email list for more helpful insights.
