I can't believe I'm not finding the easy solution here. I have three groups: a control group, a group that got product A, and a group that got product B. There has to be a way to test the differences across all groups at once rather than running separate t-tests (which inflates the Type I error rate with each additional test). My outcome is the percent of people who were contacted, and I want to see whether that percent differs across the groups.
Control Group % who were contacted: 10%
Product A group % who were contacted: 25%
Product B group % who were contacted: 33%
I shouldn't have to run a t-test comparing control to A, then another comparing control to B, and then a third comparing A to B. I know the method is pairwise comparisons, but I can't find how to do this in Alteryx. I've looked on the Community and, surprisingly, the answer seems to be "you can't", but this is not a rare statistical test!
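Outside of Alteryx, the overall test I'm describing is just a chi-square test of independence on the group-by-outcome counts. Here's a quick Python sketch to show what I mean; the counts assume a hypothetical 100 people per group, since I only listed percentages above:

```python
# Minimal sketch of the omnibus test: chi-square on a 3x2 contingency
# table (group x contacted/not contacted). Sample sizes are hypothetical.
from scipy.stats import chi2_contingency

observed = [
    [10, 90],   # Control: contacted, not contacted (10%)
    [25, 75],   # Product A (25%)
    [33, 67],   # Product B (33%)
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.4f}")
# A small p-value says at least one group's contact rate differs;
# pairwise follow-up tests (with a correction) identify which ones.
```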
This macro is specifically oriented to categorical outcomes. The right way to address continuous outcomes would be to modify the current Test of Means tool to correct the p-values for multiple comparisons. Since that involves modifying an existing tool, it makes sense to put in a feature request for it.
Dan
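To illustrate the correction Dan describes (the statistical idea, not the macro itself), here is a minimal Python sketch that runs the pairwise proportion tests for the three groups in the original question and then adjusts the p-values for multiple comparisons; the 100-per-group counts are hypothetical:

```python
# Pairwise proportion tests with a family-wise error correction.
from itertools import combinations
from statsmodels.stats.proportion import proportions_ztest
from statsmodels.stats.multitest import multipletests

# Hypothetical counts: (successes, sample size) per group.
groups = {"Control": (10, 100), "Product A": (25, 100), "Product B": (33, 100)}

pairs, raw_p = [], []
for (name1, (s1, n1)), (name2, (s2, n2)) in combinations(groups.items(), 2):
    _, p = proportions_ztest([s1, s2], [n1, n2])
    pairs.append(f"{name1} vs {name2}")
    raw_p.append(p)

# Holm correction keeps the overall Type I error rate at 0.05.
reject, adj_p, _, _ = multipletests(raw_p, alpha=0.05, method="holm")
for pair, p, adj, sig in zip(pairs, raw_p, adj_p, reject):
    print(f"{pair}: raw p = {p:.4f}, adjusted p = {adj:.4f}, significant = {sig}")
```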
Here's an example solution that doesn't involve a macro.
Pretend you're managing a telephone marketing campaign. For each lead to be called you have their wireless phone number and their wireline phone number. You want to know whether a higher proportion of calls ends in a sale on wireless vs. wireline numbers, and how confident you can be that any difference is statistically significant.
All you need to run the workflow are a binary categorical variable (in my case, wireline vs. wireless contact number) and a Boolean response code (in my case, whether a sale was made on the call).
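In code terms, that workflow boils down to cross-tabulating the line type against the sale flag and testing the two proportions. Here is a minimal Python sketch; the call records and column names are made-up stand-ins for the real data:

```python
# 2x2 chi-square test: line type (wireless/wireline) vs. sale made (yes/no).
import pandas as pd
from scipy.stats import chi2_contingency

# Hypothetical call outcomes: 200 calls per line type.
calls = pd.DataFrame({
    "line_type": ["wireless"] * 200 + ["wireline"] * 200,
    "sale_made": [True] * 46 + [False] * 154 + [True] * 30 + [False] * 170,
})

# Contingency table of line type by whether a sale was made.
table = pd.crosstab(calls["line_type"], calls["sale_made"])
chi2, p_value, dof, expected = chi2_contingency(table)
print(table)
print(f"p = {p_value:.4f}")  # small p -> sale rates differ by line type
```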