I'm also interested in how best to do this. It does seem like the AuROC should be included in the interactive report output of the logistic regression tool (either on the summary or on the ROC chart itself).
It is shown when using the Model Comparison tool, but that tool doesn't seem to work with the new version of the Logistic Regression tool (i.e., V1.1). You could obviously calculate it using the R tool itself, but I'm sure you were looking for an easier solution.
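If you do end up computing AuROC yourself (in the R tool, or in Alteryx's Python tool), the calculation itself is small. Here is a dependency-free sketch in Python using the rank-sum (Mann-Whitney U) identity; the function name and inputs are my own for illustration, not anything from an Alteryx macro:

```python
def auroc(labels, scores):
    """AuROC via the rank-sum (Mann-Whitney U) identity.

    labels: iterable of 0/1 actuals (1 = positive class)
    scores: predicted probabilities for the positive class
    """
    pairs = sorted(zip(scores, labels))
    n_pos = sum(labels)
    n_neg = len(pairs) - n_pos
    # Sum the (1-based) ranks of the positive cases; tied scores share
    # the average rank of their run, as in the standard U statistic.
    rank_sum = 0.0
    i = 0
    while i < len(pairs):
        j = i
        while j < len(pairs) and pairs[j][0] == pairs[i][0]:
            j += 1
        avg_rank = (i + 1 + j) / 2  # average of ranks i+1 .. j
        rank_sum += avg_rank * sum(lbl for _, lbl in pairs[i:j])
        i = j
    # U = rank_sum - n_pos*(n_pos+1)/2; AuROC = U / (n_pos * n_neg)
    return (rank_sum - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```

For example, `auroc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])` returns 0.75, matching what packages like scikit-learn's `roc_auc_score` report for the same inputs.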
Any idea why I would get the following error: "Error: Model Comparison (5): Tool #3: Error in c_names[apply(all_proba, 1, which.max)] : "?
It's a simple logistic regression: I have the model (O anchor of the Logistic Regression tool) connected to the M anchor of the Model Comparison tool, and the V anchor of the Create Samples tool going to the D anchor of the Model Comparison tool. I left the positive class option blank in the Model Comparison tool. I can't figure out why it's not working. I'm still on 10.5, so that's not the issue either.
Jamie - after much research, playing around, and digging... for me, this happened to be (ugh, hate to say it) user error.
What was happening is this:
In the data I used, some levels of the categorical variables in the Validation data set were NOT in the Evaluation data set I used to build the model. So when the Model Comparison tool scores the Validation set, it runs into levels that the model, built on the Eval set, has never seen.
For example: the variable "Business Type" in the Evaluation set has the levels "Pizza Shop", "Auto Repair", and "Glass Cleaning". But in the Validation set, the levels are "Pizza Shop", "Auto Repair", "Glass Cleaning", and "Car Wash".
When it goes to do the Model Comparison, it looks at all the levels and sees that it can't trace all the ones in the Validation set back to the Model itself, which was built using the Eval set.
I've now built into my model workflows a step that checks the categorical variable levels in the Eval set against those in the Validation set. If there are mismatches, I force some observations in so that every level is accounted for in both the Eval and Validation sets.
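That level check is easy to automate before the data ever reaches the Model Comparison tool. Here is a minimal sketch in Python (the function name, row format, and field names are my own for illustration): it reports any levels that appear in the scoring/validation rows but not in the training rows, per categorical field.

```python
def unseen_levels(train_rows, score_rows, cat_fields):
    """Find categorical levels present in score_rows but absent from train_rows.

    train_rows, score_rows: lists of dicts (one dict per record)
    cat_fields: names of the categorical fields to check
    Returns {field: set of unseen levels}, empty if everything matches.
    """
    problems = {}
    for field in cat_fields:
        train_levels = {row[field] for row in train_rows}
        extra = {row[field] for row in score_rows} - train_levels
        if extra:
            problems[field] = extra
    return problems

# The "Business Type" mismatch described above:
eval_set = [{"Business Type": v}
            for v in ["Pizza Shop", "Auto Repair", "Glass Cleaning"]]
valid_set = [{"Business Type": v}
             for v in ["Pizza Shop", "Auto Repair", "Glass Cleaning", "Car Wash"]]

print(unseen_levels(eval_set, valid_set, ["Business Type"]))
# Flags "Car Wash" as a level the model was never trained on.
```

If the returned dict is non-empty, you know the comparison (or any scoring step) will choke, and you can either move a few of those records into the training sample or filter them out of the validation sample.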