Data Science

Machine learning & data science for beginners and experts alike.
joshuaburkhow
ACE Emeritus

Source: geo.tv

In this post, I am going to help you understand what a confusion matrix is, and also show you where and how you can use this powerful tool in Alteryx. Now, because this topic has confusion in the title, before you do anything you must be thoroughly confused. So read this:
 

“One morning I shot an elephant in my pajamas. How he got into my pajamas I’ll never know.” - Groucho Marx

Are you confused now? Great! Now, let’s get started…

 

Seriously, before we learn what a confusion matrix ACTUALLY is, I need you to understand that the only real thing that is confusing is why they named it a confusion matrix!?! If you trust me, I promise I’ll show you it’s not at all confusing. In my small and unimpressive brain, it’s actually a simple and straightforward tool with a lot of power that can give you useful information about how your predictive model is performing. The key here is knowing what it’s used for and what it tells you.

What is a Confusion Matrix?

 

A confusion matrix, in its simplest form, is a table of 4 values (in a binary classification model) that stems from the comparison of two things:

 

  1. What your model predicted to be True vs False
  2. What ACTUALLY was True or False

 

The 4 values that come out of this cross-section (aka the matrix) are important because they tell you these four specific things:

 

  1. True Positives (TP) - Your model predicted that the image was of Keanu Reeves AND the image is actually Keanu Reeves.
  2. True Negatives (TN) - Your model predicted that the image was not Laurence Fishburne AND the image is actually truly NOT Laurence Fishburne, it’s Samuel Jackson. Sidenote: This actually happens all the time to Sam Jackson in real life.
  3. False Positives (FP) - Your model predicted that the image was of Keanu Reeves but it was wrong, it’s just a 40lb cat that happens to look like Keanu.
  4. False Negatives (FN) - Your model predicted that the image was NOT an image of Keanu Reeves, but it actually truly was Keanu. No, really, it was.

 

The whole reason why it’s called a “confusion” matrix … wait for it… is because 2 of the 4 values (False Positives and False Negatives) show us how the model is “confusing”, or mislabeling, its predictions, i.e. getting them wrong! Saying it’s Keanu when it’s not, or saying it’s NOT Keanu when in reality it is. When you are building a model and see those 2 values get lower (and in turn, TPs and TNs get bigger), you’ll know that your model is getting better! We want to be able to mark every image as Keanu when it really is Keanu and say that it’s not when it really isn’t. Easy, right?
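If you’d like to see those four values fall out of actual predictions, here is a minimal sketch in Python using scikit-learn (the label vectors below are made up purely for illustration):

```python
# A toy "Is this Keanu?" evaluation; 1 means "it's Keanu", 0 means "it's not".
from sklearn.metrics import confusion_matrix

actual    = [1, 0, 1, 1, 0, 0, 1, 0]  # what the images really were
predicted = [1, 0, 0, 1, 1, 0, 1, 0]  # what the model said

# For binary 0/1 labels, ravel() unpacks the 2x2 matrix in this order:
tn, fp, fn, tp = confusion_matrix(actual, predicted).ravel()
print(f"TP={tp}  TN={tn}  FP={fp}  FN={fn}")
# TP=3  TN=3  FP=1  FN=1
```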

 

What does a Confusion Matrix look like?

If you google confusion matrix you’ll find something like this:


 

Source: medium.com

All this table shows is the same four values I explained above: True Positives, False Positives, False Negatives, and True Negatives. Very simple. When you are new to this, you can get a little confused yourself (pun intended for sure!) because there are numerous variations of this table, like:

 

  • Actual Values vs Predicted Values reversed on the axes
  • Other calculations added to the matrix
  • The whole thing referred to as an “Error Matrix”
  • All 4 values placed in a single row
  • The values (e.g. False Positives) or their abbreviations (e.g. FP or TP) given without any table or matrix at all

 

It’s all the same! It’s the values themselves that we care about.

 

Now, I have only referred to the simplest form of a Confusion Matrix so far, but that doesn’t mean it is restricted to just 4 values. If you are working with something other than a binary (i.e. yes or no) classification model - say, classifying flower types into 3 or more classes, which we call a multinomial classification - the matrix grows to one row and one column per class. That would look something like this:

 

[Image: a multinomial confusion matrix for flower classification]
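To make the multinomial case concrete, here is a small Python sketch that builds a 3x3 confusion matrix on scikit-learn’s built-in iris flower dataset (the random forest here is just a stand-in model for illustration):

```python
# A 3-class (multinomial) confusion matrix for iris flower types.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# A forest model, loosely mirroring the Alteryx Forest Model tool
model = RandomForestClassifier(random_state=42).fit(X_train, y_train)

# One row and one column per class: rows = actual, columns = predicted
print(confusion_matrix(y_test, model.predict(X_test)))
```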




 

When would I use a Confusion Matrix?

 

There are two main reasons for you to look at a confusion matrix.

 

  • When you want to assess the performance of a single classification model you are building. If, for example, you built a model to classify whether images are of dogs, or to predict whether a person would make a purchase, you’ll want a confusion matrix to tell you how many of those predictions were True Positive, True Negative, False Positive, and False Negative.

 

  • When you want to compare the performance of multiple models. By comparing each model’s matrix against the others, you can determine which model is performing best, as sketched below.
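Here is a rough sketch of that comparison in code (Python again, with made-up prediction vectors for two hypothetical models):

```python
from sklearn.metrics import confusion_matrix

# Hypothetical ground truth and two competing models' predictions
actual  = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
model_a = [1, 0, 1, 0, 0, 1, 1, 0, 1, 1]
model_b = [1, 0, 0, 0, 0, 0, 1, 0, 1, 0]

for name, preds in [("Model A", model_a), ("Model B", model_b)]:
    tn, fp, fn, tp = confusion_matrix(actual, preds).ravel()
    # Fewer FPs and FNs (less "confusion") points to the better model
    print(f"{name}: TP={tp} TN={tn} FP={fp} FN={fn}")
```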

 

Since we are all interested in how we’d use this in the context of Alteryx, let’s take a look at the confusion matrix within Alteryx.


Where can I find the Confusion Matrix in Alteryx?

 

You have many opportunities to view a Confusion Matrix within a workflow. As a simple example, with a Forest Model you can add a Browse tool to the “R” output (aka the Report output) and view the Confusion Matrix in the report.


[Image: a Forest Model with a Browse tool attached to the R output, showing the Confusion Matrix in the report]

 

 

You could also use the Model Comparison tool to compare many models at once, as shown in this example.



 

[Image: a workflow using the Model Comparison tool to compare several models]



 

If you would like to learn more about the Model Comparison tool, you can check the help article.



Just Scratching the Surface

We have covered the basics of one way to start understanding the performance of a model, but to be clear, this is just the beginning. The values that form the confusion matrix are a foundational component of many other, more involved ratios that give us deeper insight into the overall performance of a model. These ratios/metrics, which I will talk about in a future post, will give us better criteria for deciding whether the model we built is good enough to use in whatever situation we want to make predictions in.
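As a quick taste of what I mean (saving the details for that future post), here is how a couple of those ratios fall straight out of the four values, using hypothetical counts:

```python
# Hypothetical counts read off a confusion matrix
tp, tn, fp, fn = 50, 40, 5, 5

# Two of the many metrics built from these four values
accuracy  = (tp + tn) / (tp + tn + fp + fn)  # share of all predictions that were right
precision = tp / (tp + fp)                   # share of positive calls that were right
print(f"accuracy={accuracy:.2f}, precision={precision:.2f}")  # accuracy=0.90, precision=0.91
```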

 

It happens all the time that, in the process of building a model and seeing its performance, we end up deciding for one reason or another that the model is not worth using. It could be that we don’t have enough data, that the data quality is not reliable, that we aren’t looking at the right predictors, or a hundred other reasons. Not all models are useful.

 

I hope this post is useful and helps get you started - especially those of you new to predictive modeling in Alteryx. Please leave comments below or let me know if there is something about Alteryx you’d love to understand better!

 

 

 
