"So you have constructed the most bad-**bleep** predictive model based on a painstakingly prepared data set. Unfortunately, just like with anything else in our lives, nothing lasts forever. The predictive ability of your model decays over time. How can you approach this problem and fix it with Alteryx Server & Promote?"
I would like to share this short write-up based on questions customers ask me relatively often when talking about Promote.
This time, a short piece on Alteryx Promote vs. concept drift, data drift, model decay, and model retraining.
Note to self: I love sharing stuff with the Alteryx Community. But I must also admit that over time I have grown to enjoy beating my team lead @ShaanM to the number of posts.
Predictive modeling is about building models from historical datasets and then using those models to make predictions on new data. This could, for instance, mean building a classification model to predict which customers are likely to respond to our future marketing campaigns, so we can increase the effectiveness of the marketing targeting, decrease expenses, and improve the bottom line.
Data change over time
The data you work with and feed to your predictive model can change over time. As a result, predictions from the same model on such data may drive increasingly suboptimal business decisions.
Back to the marketing campaign example from above: the marketing team has been using the model to drive its decisions for quite a while now, and customers' purchasing patterns and marketing response behavior may have changed in the meantime. The model may no longer capture the new reality.
Drift
Drift in data science refers to a model losing its predictive ability.
Technically speaking, changes in the relationships between input and output data occur over time.
There are actually two types of drift: data drift and concept drift. Generally, these are also the reasons for model decay.
With data drift, the collected data evolve over time, potentially introducing unseen patterns and variations. With concept drift, the interpretation of the data changes over time even though its distribution does not.
"In most challenging data analysis applications, data evolve over time and must be analyzed in near real time. Patterns and relations in such data often evolve over time, thus, models built for analyzing such data quickly become obsolete over time. In machine learning and data mining, this phenomenon is referred to as concept drift." Source: whitepaper on drift by a department of Computer Science, Finland, 2014
What to do about drift?
To fix data drift, new data needs to be labeled to introduce new classes, and the model retrained. To fix concept drift, the old data needs to be re-labeled and the model retrained.
Continuous re-labeling of old data and retraining of models can be an expensive exercise, and you don't want to guess when the model goes stale. Instead, you want to be able to track/monitor concept drift and act on it as needed.
How to monitor (concept) drift and retrain your model?
Let's assume you built a model on labeled data and obtained its performance metrics, say the F1 score on a test set. As part of your business decision, you defined your least acceptable F1 score to be 0.925. Now, you get another set of labeled data to check how your model performs.
The new test set is scored with the latest model at hand and the predictions are compared with the labels. When the F1 score of the sample falls below the threshold (0.925 here), we trigger a re-label/re-train task: the model needs to be re-trained using the updated labels so that it recovers its predictive ability.
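The threshold check above can be sketched in a few lines of plain Python. This is a minimal illustration, not Promote's own mechanism; the 0.925 threshold comes from the business decision described above, and the function names are mine.

```python
# Minimal sketch: compute the F1 score of a scored test set and decide
# whether a re-label/re-train task should be triggered.
# The 0.925 threshold is the business decision from the text;
# function names are illustrative.

F1_THRESHOLD = 0.925  # least acceptable F1 score

def f1_score(y_true, y_pred, positive=1):
    """Binary F1: harmonic mean of precision and recall."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def needs_retrain(y_true, y_pred, threshold=F1_THRESHOLD):
    """True when the sample's F1 falls below the acceptable threshold."""
    return f1_score(y_true, y_pred) < threshold
```

In practice the labels and predictions would come from your scored test set; the decision logic stays the same regardless of how the scoring is done.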
How to approach this with Promote & Server
Let's say that you are planning to deploy your models using Alteryx Promote, and you want to be able to track drift of your models. Based on the previous paragraph, detecting drift comes down to having your model score a (random) test set of labeled data and checking how the model performs.
Getting the performance metrics of your model
This could easily be done, for instance, by using Alteryx Designer to send, say, 1,000 records of your test set against the Promote model with the Score tool. It is, of course, up to you to decide what your performance metrics are; this may be, for instance, the F1 score, AUC, etc. You will need to calculate these metrics yourself, as every customer and model will have different needs.
Storing the performance metrics over time
The calculated results can then be pushed to your database or some other type of persistence store.
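As a sketch of such a persistence store, the metrics could be appended to a small SQL table. The table and column names below are assumptions for illustration, not a prescribed schema; any database reachable from your workflow would do.

```python
import sqlite3
from datetime import datetime, timezone

# Illustrative persistence store for model-performance metrics over time.
# Table/column names are assumptions; swap ":memory:" for a real database.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE IF NOT EXISTS model_metrics (
        model_name  TEXT NOT NULL,
        metric_name TEXT NOT NULL,
        value       REAL NOT NULL,
        measured_at TEXT NOT NULL
    )""")

def record_metric(model_name, metric_name, value):
    """Append one timestamped metric reading."""
    conn.execute(
        "INSERT INTO model_metrics VALUES (?, ?, ?, ?)",
        (model_name, metric_name, value,
         datetime.now(timezone.utc).isoformat()),
    )
    conn.commit()

def metric_history(model_name, metric_name):
    """Time-ordered series, ready to plot on a dashboard."""
    rows = conn.execute(
        "SELECT measured_at, value FROM model_metrics "
        "WHERE model_name = ? AND metric_name = ? "
        "ORDER BY measured_at, rowid",
        (model_name, metric_name),
    )
    return rows.fetchall()
```

A scheduled run would call `record_metric` once per check; the history query is what a dashboard on top of this data would read.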
Moreover, together with Alteryx Server, you could schedule your workflow to check the performance of your models daily (or at any interval you choose).
This all then lets you use the visualytics tool of your choice (Alteryx, Tableau, Power BI, …) to plot the performance metrics of your model over time.
You could then have your data science team monitor model performance and drift/decay centrally from dashboards built on top of this data.
Simple example of getting the performance metrics
This is obviously an extreme oversimplification, but here is an indicative workflow of how this could work:
Model rebuild and redeploy with Server
Also, in combination with Alteryx Server, if you get to the point where your least acceptable model performance criteria are not met, you could trigger a model rebuild and redeploy. This would, of course, depend on whether your model is built in Alteryx, Python, or R, but for all of these you should be able to achieve it without too much effort.
Simple example of rebuild/redeploy
Again, a very simple indicative workflow could be built using runner tools.
The workflow on the far left would retrieve your model's performance results, compare them against your minimum acceptable values and, if they fall short, trigger a model retrain and redeploy (which will differ based on the approach used to create the model: Designer vs. R vs. Python).
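The driver logic behind that runner-tool workflow can be sketched as follows. `retrain` and `redeploy` here are hypothetical hooks of my own naming; in a real setup they would kick off the Designer, R, or Python build and push the refreshed model back to Promote.

```python
# Sketch of the decision logic in the rebuild/redeploy workflow:
# read the latest stored metric, compare it to the minimum acceptable
# value, and trigger a retrain + redeploy when performance has decayed.
# `retrain`/`redeploy` are hypothetical hooks, not Promote API calls.

MIN_ACCEPTABLE_F1 = 0.925

def check_and_retrain(latest_f1, retrain, redeploy,
                      threshold=MIN_ACCEPTABLE_F1):
    """Return True if a retrain/redeploy was triggered."""
    if latest_f1 >= threshold:
        return False          # model still performs acceptably
    model = retrain()         # e.g. run the training workflow
    redeploy(model)           # e.g. push the new model to Promote
    return True
```

Keeping the check separate from the retrain/redeploy hooks mirrors the workflow layout in the screenshot: one piece reads and compares the metrics, the runner tools do the heavy lifting.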