The CrossValidation tool in Alteryx requires that, if a union of models is passed in, all of the models being compared were induced on the same set of predictors. Why is that necessary -- isn't the tool only comparing prediction performance for the plots, while making the predictions separately per model? The tool runs fine when I remove that requirement. In theory, models built on different predictor sets can be compared fairly with nested cross-validation: an inner level chooses each model's predictor set, and an outer level assesses the resulting models (see the sketch below). So I don't immediately see an argument for enforcing this requirement.
This is the code in question:
```r
if (!areIdentical(mvars1, mvars2)) {
  errorMsg <- paste("Models", modelNames[i], "and", modelNames[i + 1],
                    "were created using different predictor variables.")
  stopMsg <- "Please ensure all models were created using the same predictors."
}
```
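To illustrate the nested cross-validation point, here is a minimal sketch in base R. Everything in it is my own illustration, not the tool's internals: the dataset (`mtcars`), the `glm` models, the candidate predictor sets, the fold counts, and the log-loss scorer are all assumptions chosen for brevity.

```r
# Sketch: nested CV that selects a predictor set in the inner loop and
# assesses the chosen model in the outer loop. Illustrative only --
# dataset, predictor sets, and fold counts are assumptions, not the
# CrossValidation tool's code.
set.seed(42)

dat <- mtcars
dat$am <- factor(dat$am)  # binary target: transmission type

# Candidate predictor sets -- the point is that they can differ per model
candidate_sets <- list(
  c("mpg", "wt"),
  c("hp", "drat", "qsec")
)

k_outer <- 5
k_inner <- 3
outer_folds <- sample(rep(1:k_outer, length.out = nrow(dat)))

log_loss <- function(y, p) {
  p <- pmin(pmax(p, 1e-12), 1 - 1e-12)
  -mean(ifelse(y == levels(y)[2], log(p), log(1 - p)))
}

outer_scores <- numeric(k_outer)
for (i in 1:k_outer) {
  train <- dat[outer_folds != i, ]
  test  <- dat[outer_folds == i, ]

  # Inner loop: pick the best predictor set using only outer-training data.
  # (glm may warn about separation on tiny folds; harmless here.)
  inner_folds <- sample(rep(1:k_inner, length.out = nrow(train)))
  inner_scores <- sapply(candidate_sets, function(vars) {
    mean(sapply(1:k_inner, function(j) {
      fit <- glm(reformulate(vars, "am"),
                 data = train[inner_folds != j, ], family = binomial)
      p <- predict(fit, newdata = train[inner_folds == j, ], type = "response")
      log_loss(train$am[inner_folds == j], p)
    }))
  })
  best_vars <- candidate_sets[[which.min(inner_scores)]]

  # Outer loop: assess the chosen model on held-out data
  fit <- glm(reformulate(best_vars, "am"), data = train, family = binomial)
  p <- predict(fit, newdata = test, type = "response")
  outer_scores[i] <- log_loss(test$am, p)
}

mean(outer_scores)  # honest performance estimate despite predictor selection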
As an aside, why does the CV tool still require Logistic Regression v1.0 instead of v1.1?
And please, please, please can we get the Model Comparison tool built into Alteryx, and upgraded to accept v1.1 Logistic Regression and other models that don't pass `the.formula`. This is essential for teaching predictive analytics with Alteryx.