
SOLVED

AUC value in ROC Chart

derekt
5 - Atom

Hi Experts,

 

When I use the Logistic Regression tool in Alteryx, the report output includes an ROC chart.

 

Since I need the AUC (Area Under the Curve) to evaluate model performance, may I know how to get the AUC value shown in the ROC chart?

 

In R, it's very simple to get that value. Please find the R code below and the attached graph for your reference:

library(pROC)

# Fit a logistic regression on the UCLA admissions sample data
mydata <- read.csv("https://stats.idre.ucla.edu/stat/data/binary.csv")
mylogit <- glm(admit ~ gre, data = mydata, family = "binomial")
summary(mylogit)

# Score the training data, then plot the ROC curve with the AUC printed on it
mydata$prob <- predict(mylogit, type = "response")
g <- roc(admit ~ prob, data = mydata)
plot(g, print.auc = TRUE)
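
If you just need the number rather than the plot, pROC also returns it directly from the same roc object:

auc(g)  # extracts the AUC from the fitted roc object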

 

I'd much appreciate your advice! Many thanks.

 

Regards,
Derek

14 REPLIES
klonergan
8 - Asteroid

I'm also interested in how best to do this. It does seem like the AUC should be included in the interactive report output of the Logistic Regression tool (either in the summary or on the ROC chart itself).

It is shown when using the Model Comparison tool, but that tool doesn't seem to work with the new version of the Logistic Regression tool (i.e., v1.1). You could obviously calculate it using the R tool itself, but I'm sure you were looking for an easier solution.
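
For anyone who goes the R-tool route, here is a minimal sketch. It assumes the scored records arrive on the R tool's first input anchor with an actuals field called admit and a predicted-probability field called prob; both names are placeholders for whatever your stream actually carries:

# Minimal sketch for an Alteryx R tool: compute the AUC from scored records.
# Field names "admit" and "prob" are placeholders - rename to match your data.
library(pROC)

scored <- read.Alteryx("#1", mode = "data.frame")
g <- roc(scored$admit, scored$prob)

# Send the single AUC value out of output anchor 1
write.Alteryx(data.frame(AUC = as.numeric(auc(g))), 1)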

SophiaF
Alteryx

There is a known issue where the Model Comparison tool does not currently work with the Linear Regression, Logistic Regression, and Decision Tree tool versions released with Alteryx 11.0.

 

As a workaround, change the tool version to 1.0:

  1. Right-click a Linear Regression, Logistic Regression, or Decision Tree tool.
  2. Point to Choose Tool Version and select version 1.0.
Sophia Fraticelli
Senior Solutions Architect
Alteryx, Inc.
derekt
5 - Atom

Does the Model Comparison tool include the AUC value in the ROC chart if logistic regression tool v1.0 is applied?

If yes, I would like to try it out. Thank you.

SophiaF
Alteryx

@derekt - yes, the AUC value is included in the output of the Model Comparison tool.

Sophia Fraticelli
Senior Solutions Architect
Alteryx, Inc.
derekt
5 - Atom

Many thanks, Sophia~

mbarone
16 - Nebula

Any idea why I would get the following error: "Error: Model Comparison (5): Tool #3: Error in c_names[apply(all_proba, 1, which.max)] : "?

 

Simple logistic regression: I have the model (O anchor of the Logistic Regression tool) connected to the M anchor of the Model Comparison tool, and the V anchor of the Create Samples tool going to the D anchor of the Model Comparison tool. I left the positive class option blank in the Model Comparison tool. I can't figure out why it's not working. I'm still on 10.5, so that's not the issue either.

derekt
5 - Atom

FYI, I'm using Alteryx v11.3 and the logistic regression tool v1.1 for this.

jamie1
5 - Atom

I am seeing the same error as mbarone. Has there been a solution for this?

mbarone
16 - Nebula

Jamie - after much research and playing around and digging... for me, this happened to be (ugh, hate to say it) user error.

 

What was happening is this...

 

In the data sets I used to build the model, some of the levels of the categorical variables that were in the Validation data set were NOT in the Evaluation data set. So when you use the Model Comparison tool, it encounters values in the Validation set that it can't find in the Eval set.

 

For example:

 

Variable "Business Type" in the Evaluation set has levels of "Pizza Shop, Auto Repair, Glass Cleaning".  But in the Validation set, the levels are "Pizza Shop, Auto Repair, Glass Cleaning, Car Wash".

 

When it goes to do the Model Comparison, it looks at all the levels and sees that it can't trace all the ones in the Validation set back to the Model itself, which was built using the Eval set.

 

I've built into my model-building process a step where I check the Eval set's categorical variable levels against the Validation set's categorical variable levels. If there are mismatches, I force some observations in so all levels in both the Eval and Validation sets are accounted for.
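
For anyone who wants to automate that check, here is a minimal sketch in R; eval_set and valid_set are placeholder data frames standing in for your Evaluation and Validation samples:

# Minimal sketch: report categorical levels present in the Validation set
# but missing from the Evaluation (training) set.
# eval_set / valid_set are placeholders - substitute your own data frames.
cat_cols <- names(eval_set)[sapply(eval_set, function(x) is.factor(x) || is.character(x))]

for (col in cat_cols) {
  missing_levels <- setdiff(unique(valid_set[[col]]), unique(eval_set[[col]]))
  if (length(missing_levels) > 0) {
    cat("Column", col, "has Validation levels missing from Eval:",
        paste(missing_levels, collapse = ", "), "\n")
  }
}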

 
