I'm also seeing the same problem. I use the Model Comparison tool to evaluate performance (tip from another Alteryx community user, gotta give credit where it's due). You can download it from the Gallery.
One thing to note: you will need to right-click on your predictive macro (this applies to Linear Regression, Decision Tree, and Logistic Regression) and choose version 1.0 in order to take advantage of the Model Comparison tool. You don't need to do this with Boosted (I haven't tried any others yet).
But the discrepancy between the Interactive output of Decision Tree and the Model Comparison tool is striking. In my case, the Interactive output showed 81% accuracy while the Model Comparison showed 60%.
You aren't missing anything; the calculations they display in their summary are completely incorrect except for Accuracy.
They are posting the True Negative % as the Recall, and what appears to be the Specificity as the Precision. Their F-Measure is also incorrect as a consequence, since it is derived from the wrong Recall and Precision. I suggest looking only at the confusion matrix (in the Misclassifications section) and doing the calculations manually until they address this issue.
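If you want to sanity-check the summary yourself, here is a minimal Python sketch of those manual calculations. The TP/FP/TN/FN counts are hypothetical placeholders; substitute the values from your own Misclassifications matrix.

```python
# Hypothetical counts pulled from the Misclassifications confusion matrix.
tp, fp, tn, fn = 40, 10, 35, 15

accuracy    = (tp + tn) / (tp + tn + fp + fn)
recall      = tp / (tp + fn)          # a.k.a. sensitivity / true positive rate
precision   = tp / (tp + fp)
specificity = tn / (tn + fp)          # true negative rate
f_measure   = 2 * precision * recall / (precision + recall)

print(f"Accuracy:    {accuracy:.3f}")
print(f"Recall:      {recall:.3f}")
print(f"Precision:   {precision:.3f}")
print(f"Specificity: {specificity:.3f}")
print(f"F-measure:   {f_measure:.3f}")
```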
After some more testing, it appears the Model Comparison tool automatically sets the scoring threshold at 0.5 (for binary classification). Since you didn't specify which type of classification you were running, this could have something to do with the accuracy difference you are seeing.
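To illustrate why the threshold matters, here is a quick sketch showing that the same predicted probabilities can yield very different accuracies at different cutoffs. The probabilities and labels are made-up data, not from any real model.

```python
# Made-up predicted probabilities and actual labels for illustration only.
probs  = [0.92, 0.85, 0.70, 0.55, 0.48, 0.40, 0.30, 0.15]
actual = [1,    1,    1,    0,    1,    0,    0,    0]

def accuracy_at(threshold):
    # Classify as positive when the predicted probability meets the cutoff.
    preds = [1 if p >= threshold else 0 for p in probs]
    return sum(p == a for p, a in zip(preds, actual)) / len(actual)

for t in (0.3, 0.5, 0.7):
    print(f"threshold {t}: accuracy {accuracy_at(t):.2f}")
# Prints 0.62, 0.75, and 0.88 respectively: the cutoff alone moves accuracy.
```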
I used it for binary classification with the default of 0.5...
When I built the confusion matrix from scratch using the Formula tool and compared it back to the Model Comparison output, it was off... but that was also a few weeks back. I want to say that in the past few days the accuracy may have improved; I'm not sure whether an update was made to the macro recently.
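For reference, here is roughly what that from-scratch rebuild looks like in Python rather than a Formula tool: bucket each scored record by actual vs. predicted class, then tally. The field names and records are hypothetical.

```python
from collections import Counter

# Hypothetical scored records: actual class plus the model's score for "Yes".
records = [
    {"actual": "Yes", "score_yes": 0.81},
    {"actual": "No",  "score_yes": 0.62},
    {"actual": "Yes", "score_yes": 0.44},
    {"actual": "No",  "score_yes": 0.23},
]

cells = Counter()
for r in records:
    # Same 0.5 default cutoff the Model Comparison tool appears to use.
    predicted = "Yes" if r["score_yes"] >= 0.5 else "No"
    cells[(r["actual"], predicted)] += 1

# Print the confusion matrix cell counts for comparison against the tool.
for (actual, predicted), n in sorted(cells.items()):
    print(f"actual={actual} predicted={predicted}: {n}")
```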