Is the comparison from Assisted Modeling correct?
Greetings All,
I am building a predictive model using the Assisted Modeling tool, and I am comparing the models (XGBoost, random forest, decision tree, logistic regression) using the graphs and confusion matrices in the tool's report. When I finished the project, my professor said the comparison should be based on the testing data, not the training data I used in the tool. My question is: does Alteryx do something that makes this comparison valid, so that I can convince my professor that the conclusions I drew from it are sound?
Hey, @Mujahed,
Great question; I struggled to find good information on this myself. AIS (Alteryx Intelligence Suite) automatically applies k-fold cross-validation. That means that when you compare your models in the Assisted Modeling window, you are comparing their test-data results (averaged over several folds), which is a fair, and arguably the best, way to compare models. At no point does Assisted Modeling compare training-data accuracy.
I would assume the confusion matrix shows the combined results from the held-out test folds.
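To make the idea concrete, here is a minimal sketch of the same kind of comparison done with k-fold cross-validation in scikit-learn. This is only an illustration of the concept, not Alteryx's internal code; the toy dataset, fold count, and model settings are assumptions for the example.

```python
# Illustrative sketch: comparing classifiers with k-fold cross-validation,
# the same principle Assisted Modeling applies behind the scenes.
# The dataset and parameters here are made up for demonstration.
from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

# Hypothetical dataset standing in for your own data.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

models = {
    "decision_tree": DecisionTreeClassifier(random_state=0),
    "random_forest": RandomForestClassifier(random_state=0),
    "logistic_regression": LogisticRegression(max_iter=1000),
}

# 3-fold cross-validation: each fold is held out once as test data.
cv = StratifiedKFold(n_splits=3, shuffle=True, random_state=0)

for name, model in models.items():
    # Each score comes from data the model never saw during training,
    # so the comparison reflects test performance, not training fit.
    scores = cross_val_score(model, X, y, cv=cv, scoring="accuracy")
    print(f"{name}: mean test accuracy {scores.mean():.3f} over {cv.get_n_splits()} folds")
```

The key point for your professor is the same either way: every figure used in the comparison is computed on held-out folds, never on the data the model was fitted to.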
Thank you so much for sharing this information with me. It was really helpful.
