Alteryx Designer Discussions

Find answers, ask questions, and share expertise about Alteryx Designer.
SOLVED

Interpretation of the Nested Test Tool Output

Robinvm
8 - Asteroid

Hello community,

 

I am just trying to better understand the results of the Nested Test Tool.

 

I understand from the Alteryx documentation that the hypotheses can be stated as follows:

 

Null hypothesis: The full model and the reduced model are statistically equivalent in terms of their predictive capability.

Alternative hypothesis: The full model and the reduced model are not statistically equivalent in terms of their predictive capability.

 

The lower the p-value, the stronger the evidence against the null hypothesis, i.e., the more justified one is in rejecting it in favor of the alternative hypothesis.

 

My understanding is that the null hypothesis corresponds to the reduced model. If one cannot reject the null hypothesis, then one takes the more parsimonious model. However, if the two models differ significantly, it is better to use the full model.

 

From this I conclude:

  • p <= 0.05 ==> Take the full model
  • p > 0.05 ==> Take the more parsimonious model
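In code, that decision rule would be a one-liner (a sketch; the 0.05 threshold is just the conventional significance level, not something the tool enforces):

```python
def choose_model(p_value: float, alpha: float = 0.05) -> str:
    """Reject H0 (models fit equally well) when p <= alpha -> keep the full model;
    otherwise keep the more parsimonious (reduced) model."""
    return "full" if p_value <= alpha else "reduced"

print(choose_model(0.01))  # full
print(choose_model(0.30))  # reduced
```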

 

Am I drawing the right conclusion?

 

Thank you in advance.

2 REPLIES
dana_mcdonough
Alteryx

Hi @Robinvm see my research below with reference. I believe you have the correct understanding. Lots of fun double-negative logic! :)

How to Analyze Nested Models

To determine if a nested model is significantly different than a “full” model, we typically perform a likelihood ratio test which uses the following null and alternative hypotheses:

H0: The full model and the nested [reduced] model fit the data equally well. Thus, you should use the nested [reduced] model.

HA: The full model fits the data significantly better than the nested [reduced] model. Thus, you should use the full model.

Reference: https://www.statology.org/nested-model/

LiuZhang
9 - Comet

In hypothesis testing, the null hypothesis is always that there is no difference; the alternative is that there is a difference.

As stated at the end of the documentation, the tool performs an F-test (an ANOVA, I believe) on the regression models. This is a common way to compare two models. Additionally, I would suggest checking the adjusted R-squared of both models; it similarly tells you whether the variables you dropped matter.
