Alteryx Designer Discussions

Find answers, ask questions, and share expertise about Alteryx Designer.


Defining cutoff for binary outcome in model comparison



I'm trying to compare models with the Model Comparison tool.

What I don't know is what cutoff the tool uses to decide whether a prediction is 1 or 0.

How do we define the cutoff, and is there any tool to find the best cutoff?

Any help is highly appreciated.

Thank you,


Alteryx Alumni (Retired)

Hi @irambach


I'm not exactly sure what you mean by the cutoff. The data you feed into your various models should have a binary target variable with 1 or 0 in the data itself. The Model Comparison tool is just asking which value, 1 or 0, is the positive class in your case. So if 1 means "Yes" for you, then you would select 1.


If I'm understanding you correctly, there is no cutoff in the models: if your data is 80% 1s and 20% 0s, the model is trained on those proportions. If you want something different, you can use the Oversample Tool to change the percentages so it's an even 50/50 distribution.
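The 50/50 rebalancing described above can be sketched in plain Python. This is illustrative only: the function name and the down-sampling strategy are assumptions, not the Oversample Tool's actual implementation.

```python
import random

def balance_50_50(records, target_key, seed=42):
    """Rebalance to an even 50/50 split by down-sampling the
    majority class (one common strategy; the Oversample Tool's
    exact sampling may differ)."""
    rng = random.Random(seed)
    ones = [r for r in records if r[target_key] == 1]
    zeros = [r for r in records if r[target_key] == 0]
    minority, majority = (ones, zeros) if len(ones) <= len(zeros) else (zeros, ones)
    balanced = minority + rng.sample(majority, len(minority))
    rng.shuffle(balanced)
    return balanced

# 80% ones / 20% zeros, as in the example above
data = [{"y": 1}] * 80 + [{"y": 0}] * 20
even = balance_50_50(data, "y")  # 20 ones and 20 zeros
```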





Thank you @MichaelF for your reply!


My question was about how to choose the cutoff after the model produces the probability.

Does the model automatically predict the outcome with the higher probability?

For example, if a logistic regression model produces 0.45 for 1 and 0.55 for 0, can I make it predict 1 instead of 0?

When I build a model in base R, I can program it to choose the cutoff that gives the best results (confusion matrix, AUC, etc.). Is there any way to do that in any of the Alteryx macro tools?
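The R approach you describe, sweeping candidate cutoffs and keeping the one that scores best against the confusion matrix, can be sketched in Python. This is a standalone illustration, not an Alteryx API; Youden's J is used here as the selection criterion, but accuracy or a cost-weighted error would slot in the same way.

```python
def confusion(probs, labels, cutoff):
    """Confusion-matrix counts when predicting 1 for prob >= cutoff."""
    tp = fp = tn = fn = 0
    for p, y in zip(probs, labels):
        pred = 1 if p >= cutoff else 0
        if pred == 1 and y == 1:
            tp += 1
        elif pred == 1 and y == 0:
            fp += 1
        elif pred == 0 and y == 0:
            tn += 1
        else:
            fn += 1
    return tp, fp, tn, fn

def best_cutoff(probs, labels, steps=100):
    """Sweep cutoffs 0.00..1.00 and return the one maximizing
    Youden's J = sensitivity + specificity - 1."""
    best_c, best_j = 0.5, float("-inf")
    for i in range(steps + 1):
        c = i / steps
        tp, fp, tn, fn = confusion(probs, labels, c)
        sens = tp / (tp + fn) if (tp + fn) else 0.0
        spec = tn / (tn + fp) if (tn + fp) else 0.0
        j = sens + spec - 1
        if j > best_j:
            best_c, best_j = c, j
    return best_c

# Scores that separate the classes perfectly: any cutoff in
# (0.3, 0.7] yields a perfect confusion matrix.
probs = [0.9, 0.8, 0.7, 0.3, 0.2, 0.1]
labels = [1, 1, 1, 0, 0, 0]
cut = best_cutoff(probs, labels)
```

The same sweep works over any metric you can compute from the four confusion counts.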




Alteryx Alumni (Retired)

Hi @irambach


So the cutoff isn't hard-coded in the tools themselves, but rather embedded in the underlying libraries. For instance, logistic regression uses ROC. If you wanted to adjust it, you'd have to change how the tool makes its predictions by hard-coding the cutoff in the tool itself.
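Short of editing the macro, one workaround is to keep the predicted probabilities from scoring and apply your own cutoff downstream. A minimal sketch, assuming a probability-of-class-1 column named Score_1 (the actual field name in your scored output may differ):

```python
def apply_cutoff(scored_rows, cutoff=0.45, score_field="Score_1"):
    """Turn predicted probabilities into 0/1 at a custom cutoff
    instead of the default 0.5."""
    return [1 if row[score_field] >= cutoff else 0 for row in scored_rows]

rows = [{"Score_1": 0.55}, {"Score_1": 0.45}, {"Score_1": 0.30}]
preds = apply_cutoff(rows, cutoff=0.45)  # [1, 1, 0]
```

In Designer itself, the same thresholding can be done with a Formula tool applied to the scored output.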