If I am not mistaken, the Expert Exam is also only two hours.
After spending more than double that time on this one question, I either need to really step up my Alteryx skills, or seriously lower my expectations of passing the Expert exam.
So in the end, all this seems pretty simple and straightforward, but only if you know this stuff deeply and are a statistician (which I am definitely not).
I have tried to widen my horizons here and, as I mentioned, spent a lot of hours on this, mostly reading up on all the links I could find, but it sure is heavy stuff.
Take the decrease in the Gini coefficient, for instance. From what I have read, this is a way of determining how good the splits in the decision tree are, and the lower the value, the better (purer) your model is. If I have understood that correctly, then I find it odd that we should look at the ones with the highest value 😕
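To make the "highest value" point concrete: a low Gini value means a pure node, but the mean *decrease* in Gini measures how much impurity a split removes, so the variable with the highest decrease is the one producing the purest child nodes. A minimal sketch with made-up counts (none of these numbers come from the challenge data):

```python
def gini(counts):
    """Gini impurity of a node: 1 - sum(p_i^2) over class proportions.
    0 means the node is pure (a single class); 0.5 is the worst case
    for two classes."""
    total = sum(counts)
    return 1 - sum((c / total) ** 2 for c in counts)

def gini_decrease(parent, children):
    """Impurity decrease of a split: parent Gini minus the
    size-weighted average Gini of the child nodes."""
    n = sum(parent)
    weighted = sum(sum(ch) / n * gini(ch) for ch in children)
    return gini(parent) - weighted

# 20 rows, 10 of each class; this split separates them fairly well
parent = [10, 10]                       # Gini = 0.5 (maximally impure)
children = [[8, 2], [2, 8]]             # each child Gini = 0.32
print(gini_decrease(parent, children))  # ≈ 0.18: the bigger, the better the split
```

So "lower Gini is better" and "look at the highest decrease" are the same statement seen from two sides.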
Still Climbing
/Verakso
This was an interesting question.
-The H values were all binary categorical variables, whereas the other variables were linear.
-I used Logistic Regression and then Stepwise to eliminate variables, then further used a Spline Model to see which of the linear variables were of greatest importance, as well as running a model without the categorical variables and examining the importance, AND using Stepwise to see which were eliminated.
-Very interesting Challenge.
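The Stepwise step described above can be sketched as a greedy backward-elimination loop: repeatedly drop the variable whose removal hurts the fit the least, until every remaining variable matters. The feature names and contribution numbers below are hypothetical, and the toy `score` stands in for a real fit statistic such as model likelihood or AIC:

```python
def backward_eliminate(features, score, threshold):
    """Greedy backward elimination: drop the feature whose removal
    costs the least in score, as long as that cost stays below
    `threshold` (a schematic stand-in for a significance test)."""
    kept = list(features)
    while len(kept) > 1:
        base = score(kept)
        # cost of removing each feature = how much the score drops
        costs = {f: base - score([g for g in kept if g != f]) for f in kept}
        cheapest = min(costs, key=costs.get)
        if costs[cheapest] < threshold:
            kept.remove(cheapest)
        else:
            break
    return kept

# toy model: each feature adds a fixed amount to the score (made-up numbers)
contrib = {"F_38": 0.30, "F_2": 0.01, "H_1": 0.25, "H_7": 0.002}
score = lambda fs: sum(contrib[f] for f in fs)
print(backward_eliminate(list(contrib), score, threshold=0.05))
```

With these numbers the weak contributors F_2 and H_7 get eliminated, and the loop keeps F_38 and H_1.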
Matt
Hi Verakso,
I will attempt to share some explanation for this one, hoping that if any of it doesn't make sense or is incorrect, someone more knowledgeable will come to the rescue.
The way I approach understanding the Gini Importance is to work "backwards", starting from the surface level of interpreting the result:
Regarding the Nested Test: it essentially uses a likelihood-ratio test (LR test) to compare the goodness of fit of two models (https://en.wikipedia.org/wiki/Likelihood-ratio_test). The null hypothesis is that the full model is no better than the reduced model. When the chi-squared value is large enough (compared to the threshold), which also means the p-value is small enough, we can reject the null hypothesis and conclude that the full model is better than the reduced one. In this case, since the chi-squared value is quite large and the p-value is very small (the typical threshold is 0.05), we can say that the effect of removing F_38 from the full model is SIGNIFICANT. The table below shows how to get from the chi-squared value to the p-value (range), but Alteryx / R already gives you the p-value, so you don't need to worry about the conversion.
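As a concrete check of that chi-squared-to-p-value conversion: when the reduced model drops exactly one variable (as with F_38 here), the test has one degree of freedom, and the p-value can be computed from the standard library alone. This is a sketch for the df = 1 case only; for other degrees of freedom you would reach for something like `scipy.stats.chi2.sf`:

```python
from math import erfc, sqrt

def chi2_pvalue_df1(stat):
    """Upper-tail p-value of a chi-squared statistic with 1 degree of
    freedom (one parameter's difference between the nested models).
    For df = 1: P(X > x) = erfc(sqrt(x / 2))."""
    return erfc(sqrt(stat / 2))

# The df=1 critical value at the usual 0.05 threshold is about 3.84
print(chi2_pvalue_df1(3.84))        # ≈ 0.05, right on the boundary
print(chi2_pvalue_df1(20.0) < 0.001)  # a large chi-sq is highly significant
```

A chi-squared value well above 3.84 therefore gives a p-value well below 0.05, which is exactly the "large chi-sq, small p, reject the null" reading above.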
I hope this helps, and please let me know if anyone spots anything faulty in what I said...
Bingqian
Good fun reading around all this, and this wouldn't have been my immediate choice for a Friday!
Wish I had known then what I know now. I skipped over this in my first attempt at the Expert exam. Took me 9 minutes, start to finish, for the Challenge, and that included some reading of documentation.