AUC value in ROC Chart
Hi Experts,
When I use the Logistic Regression tool in Alteryx, the results include an ROC chart.
Since I need the AUC (Area Under the Curve) to assess model performance, how can I get the AUC value from the ROC chart?
In R, it's very simple to get that value. Please find the R code below and the attached graph for your reference:
# Load the example admissions data (binary outcome: admit)
mydata <- read.csv("https://stats.idre.ucla.edu/stat/data/binary.csv")

# Fit a logistic regression of admit on GRE score
mylogit <- glm(admit ~ gre, data = mydata, family = "binomial")
summary(mylogit)

# Attach the predicted probabilities to the data
mydata$prob <- predict(mylogit, type = "response")

# Build the ROC curve and plot it with the AUC printed on the chart
library(pROC)
g <- roc(admit ~ prob, data = mydata)
plot(g, print.auc = TRUE)
Your advice would be much appreciated! Many thanks.
Regards,
Derek
I'm also interested in how best to do this. It seems like the AUC should be included in the interactive report output of the Logistic Regression tool (either in the summary or on the ROC chart itself).
It is shown when using the Model Comparison tool, but that tool doesn't seem to work with the new version of the Logistic Regression tool (i.e., v1.1). You could obviously calculate it using the R tool itself, but I'm sure you were looking for an easier solution.
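In case it helps, here is a minimal sketch of that R-tool approach. It assumes the stream coming into input anchor #1 already carries the actual response in a field named admit and the scored probability in a field named prob (those field names are my assumptions; swap in your own):

# Read the scored data from the R tool's first input anchor
library(pROC)
df <- read.Alteryx("#1", mode = "data.frame")

# Compute the AUC from the actual response and the predicted probability
# (field names `admit` and `prob` are assumptions -- match your own data)
auc_value <- as.numeric(auc(roc(df$admit, df$prob)))

# Push a one-row table containing the AUC out of output anchor 1
write.Alteryx(data.frame(AUC = auc_value), 1)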
There is a known issue where the Model Comparison tool does not currently work with the Linear Regression, Logistic Regression, and Decision Tree tool versions released with Alteryx 11.0.
As a workaround, change the tool version to 1.0:
- Right-click a Linear Regression, Logistic Regression, or Decision Tree tool.
- Point to Choose Tool Version and select version 1.0.
Senior Solutions Architect
Alteryx, Inc.

Does the Model Comparison tool include the AUC value in the ROC chart if version 1.0 of the Logistic Regression tool is applied?
If yes, I would like to try it out. Thank you.
@derekt - yes, the AUC value is included in the output of the Model Comparison tool.
Senior Solutions Architect
Alteryx, Inc.

Many thanks, Sophia~
Any idea why I would get the following error: "Error: Model Comparison (5): Tool #3: Error in c_names[apply(all_proba, 1, which.max)] : "?
It's a simple logistic regression: I have the model (O anchor of the Logistic Regression tool) connected to the M anchor of the Model Comparison tool, and the V anchor of the Create Samples tool going to the D anchor of the Model Comparison tool. I left the positive class option blank in the Model Comparison tool. I can't figure out why it's not working. I'm still on 10.5, so that's not the issue either.
FYI, I'm using Alteryx v11.3 and Logistic Regression v1.1 for this.
I am seeing the same error as mbarone. Has there been a solution for this?
Jamie - after much research, playing around, and digging... for me, this happened to be (ugh, hate to say it) user error.
What was happening is this:
Some of the levels of the categorical variables in the Validation data set were NOT in the Evaluation data set I used to build the model. So when you use the Model Comparison tool, it finds values in the Validation set that it can't trace back to the Eval set the model was built on.
For example:
The variable "Business Type" in the Evaluation set has the levels "Pizza Shop, Auto Repair, Glass Cleaning". But in the Validation set, the levels are "Pizza Shop, Auto Repair, Glass Cleaning, Car Wash".
When it goes to do the Model Comparison, it looks at all the levels and sees that it can't trace all the ones in the Validation set back to the model itself, which was built using the Eval set.
I've now built into my model workflows a step that checks the categorical variable levels in the Eval set against those in the Validation set (see the sketch below). If there are mismatches, I force some observations in so that all levels in the Eval and Validation sets are accounted for.
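For anyone who wants to automate that check, here is a minimal R sketch. The data frame names (eval_df, valid_df) and the column-by-column comparison are my own illustration, not from the original post; adapt them to your own data:

# Report categorical levels that appear in the validation set
# but not in the evaluation set used to build the model.
# `eval_df` and `valid_df` are hypothetical data frames with the same columns.
check_levels <- function(eval_df, valid_df) {
  cat_cols <- names(eval_df)[sapply(eval_df, function(x) is.factor(x) || is.character(x))]
  for (col in cat_cols) {
    missing <- setdiff(unique(valid_df[[col]]), unique(eval_df[[col]]))
    if (length(missing) > 0) {
      cat(sprintf("Column '%s': levels only in validation set: %s\n",
                  col, paste(missing, collapse = ", ")))
    }
  }
}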
