"So you have constructed the most bad-**bleep** predictive model based on the painstakingly prepared data set.
Just like with anything else in our lives (unfortunately) nothing lasts forever though.
The predictive ability of your model decays over time. How can you approach this problem and fix it with Alteryx Server & Promote?”
I would like to share this short write-up, based on FAQs I get asked relatively often by customers when talking about Promote.
This time, a brief look at Alteryx Promote vs. concept drift, data drift, model decay, and model retraining.
Note to self: I love sharing stuff with the Alteryx Community. But I must also admit that over time I have grown to enjoy beating my team lead @ShaanM in the number of posts.
Predictive modeling is about building models from historical data and then using those models to make predictions on new data.
This could, for instance, mean building a classification model to predict which customers are likely to respond to our future marketing campaigns, so we can make our targeting more effective, cut expenses, and increase the bottom line.
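As a minimal illustration of that kind of response-propensity model (synthetic data and hypothetical feature names, nothing Alteryx-specific), a tiny stdlib-only logistic regression might look like this:

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(X, y, lr=0.1, epochs=500):
    """Fit a tiny logistic-regression model with batch gradient descent."""
    n_features = len(X[0])
    w = [0.0] * n_features
    b = 0.0
    for _ in range(epochs):
        grad_w = [0.0] * n_features
        grad_b = 0.0
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi
            for j in range(n_features):
                grad_w[j] += err * xi[j]
            grad_b += err
        w = [wj - lr * gj / len(X) for wj, gj in zip(w, grad_w)]
        b -= lr * grad_b / len(X)
    return w, b

def predict_proba(w, b, x):
    """Probability that a customer responds to the campaign."""
    return sigmoid(sum(wj * xj for wj, xj in zip(w, x)) + b)

# Synthetic historical data: [recency_score, past_purchases] -> responded?
random.seed(42)
X = [[random.random(), random.random()] for _ in range(200)]
y = [1 if x[0] + x[1] > 1.0 else 0 for x in X]

w, b = train_logistic(X, y)
# Score a new customer with high recency and purchase history
print(round(predict_proba(w, b, [0.9, 0.8]), 2))
```

In practice you would of course build this in the Alteryx predictive tools and deploy it to Promote; the sketch only shows the train-then-score pattern the article is about.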
Data changes over time
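One common way to quantify how much the incoming data has drifted away from the training data is the Population Stability Index (PSI). Here is a stdlib-only sketch; the bin count and the "PSI > 0.2" retrain rule of thumb are illustrative assumptions, not Promote defaults:

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline (training-time)
    sample and a current (scoring-time) sample of one feature."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0

    def frac(sample, i):
        left, right = lo + i * width, lo + (i + 1) * width
        count = sum(left <= v < right or (i == bins - 1 and v == hi)
                    for v in sample)
        return max(count / len(sample), 1e-6)  # avoid log(0) on empty bins

    return sum((frac(actual, i) - frac(expected, i)) *
               math.log(frac(actual, i) / frac(expected, i))
               for i in range(bins))

baseline = [i / 100 for i in range(100)]          # uniform on [0, 1)
shifted  = [0.5 + i / 200 for i in range(100)]    # mass moved toward [0.5, 1)

# Rule of thumb (assumption): PSI > 0.2 suggests significant drift -> retrain
print("PSI baseline vs itself:", psi(baseline, baseline))
print("PSI baseline vs shifted:", round(psi(baseline, shifted), 2))
```

Running a check like this on each scored batch gives you a concrete, monitorable signal that the data feeding your deployed model no longer looks like the data it was trained on.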
Thanks @DavidM, I had this marked to read for a while and it is really slick. I especially like the last part where you can essentially automate that entire check, retrain, and redeploy cycle WHILE also having visibility over time into where it drifts and gets corrected 🙂
Question: is it likely or common that when a model gets retrained it gets better to the point where we have to move the least acceptable score upward? How would we know?
@joshuaburkhow thanks for the very positive feedback, much appreciated.
I think the least acceptable score would typically be something agreed on by the wider team, including the business side, who would decide what it should look like.
It is certainly possible the model gets better and better, and at some point you may well raise those thresholds.
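The gating logic discussed in this thread can be sketched in a few lines; the function name and the 0.75 floor are purely hypothetical placeholders for whatever score your team agrees on:

```python
def should_redeploy(new_score, current_score, least_acceptable=0.75):
    """Promote the retrained model only if it clears the agreed floor
    AND does not underperform the currently deployed model."""
    return new_score >= least_acceptable and new_score >= current_score

# Retrained model beats both the floor and the live model -> redeploy
print(should_redeploy(new_score=0.82, current_score=0.78))
# Retrained model fell below the floor -> keep the live model
print(should_redeploy(new_score=0.70, current_score=0.78))
```

Comparing against the currently deployed model's score, not just the fixed floor, is one way to notice that retraining keeps improving things and that the floor itself may be due for a raise.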