Hi,
I am looking for some guidance on comparing forecast errors using the Predictive Analytics tools in Alteryx.
Background
I have set up a forecast of the ordered units that will be cancelled, compared to the initial order book. For that I have set up several forecast models (among them an ETS model and ARIMA models with covariates), both for the cancellation percentage and for the number of units, and compared them with the TS Compare tool. The idea is to take the most meaningful forecast model.
Results
I modified the results for readability in the overview below, which is the output of two TS Compare tools. For the forecast in percentage, the ETS model scores best on all error measures; for the forecast in units, it is the ARIMA covariate 2 model. I am also looking to optimize my forecast by combining the best two models, which should reduce bias and variance. For example, this paper discusses how combining forecasts improves accuracy: http://repository.upenn.edu/cgi/viewcontent.cgi?article=1005&context=marketing_papers
My questions:
1. Which error measures should I use to compare the forecast models, and how do I interpret them?
2. How can I explain the accuracy of the models to an audience without a statistics background?
3. Why does the ETS model return a flat forecast line?
4. What is a good way to combine the two best models?
Forecast in percentage:
[Table: Actual and Forecast Values]
[Table: Accuracy Measures]

Forecast in units:
[Table: Actual and Forecast Values]
[Table: Accuracy Measures]
Thanks for your help!
Regards,
Bart
Solution
Accuracy measures
I have been in contact with Alteryx and they confirmed my understanding of the error measures.
Additionally, Chapter 2, Section 5 of Hyndman and Athanasopoulos's online book Forecasting: Principles and Practice provides a good discussion of the measures used to assess forecast model accuracy (http://otexts.com/fpp/).
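For reference, here is a minimal sketch of those measures in Python/NumPy (the Alteryx tools run R under the hood, so this is for illustration only). The set of measures (ME, RMSE, MAE, MPE, MAPE, MASE) and the non-seasonal naive scaling for MASE follow that chapter; the function and variable names are my own.

```python
import numpy as np

def accuracy_measures(actual, forecast, train):
    """Standard accuracy measures for a holdout forecast.

    actual / forecast: holdout actuals and the model's forecasts.
    train: the training series, used only for the MASE scaling term.
    Assumes the actuals are non-zero (needed for the percentage measures).
    """
    actual, forecast, train = map(np.asarray, (actual, forecast, train))
    e = actual - forecast                         # forecast errors
    return {
        "ME":   e.mean(),                         # mean error (signed bias)
        "RMSE": np.sqrt((e ** 2).mean()),         # root mean squared error
        "MAE":  np.abs(e).mean(),                 # mean absolute error
        "MPE":  (e / actual).mean() * 100,        # mean percentage error
        "MAPE": np.abs(e / actual).mean() * 100,  # mean absolute % error
        # MASE: MAE scaled by the in-sample MAE of a one-step naive forecast
        "MASE": np.abs(e).mean() / np.abs(np.diff(train)).mean(),
    }
```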
Explaining accuracy to audience
Based on several e-learnings I went through, I will use the terms 'bias' and 'variance' to explain the accuracy of the forecast models to my audience: bias indicates the average distance of the forecast from the actuals, and variance indicates the spread of the predictions. I think this will create a better understanding, as they have no background in statistics.
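To make the two terms concrete, a small sketch of how I would compute them from the holdout errors; the numbers are made up.

```python
import numpy as np

def bias_and_variance(actual, forecast):
    """Audience-friendly accuracy summary.

    bias: the average signed error, i.e. how far the forecast is from
          the actuals on average (positive = forecasting too low).
    variance: the spread of the errors around that average, i.e. how
          consistent the misses are.
    """
    errors = np.asarray(actual, dtype=float) - np.asarray(forecast, dtype=float)
    return errors.mean(), errors.var()

# Illustrative numbers only:
bias, variance = bias_and_variance([100, 120, 90], [95, 125, 85])
print(f"bias={bias:.2f}, variance={variance:.2f}")  # bias=1.67, variance=22.22
```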
Forecast model outcomes
Alteryx also gave me feedback, suggesting a closer look at the ETS model:
"One thing I noticed in your results is that you have a single value for all observations in the ETS forecast. Typically this means that there's not enough "signal" in your data, so the tool is returning the average value. Perhaps check the decomposition plots for your ETS model - see if there's excessive noise causing that static result."
When asking for direction, they replied:
"As far as reducing noise goes, have you used the Data Investigation tools yet? From there, you can get a better understanding of your data and the metadata. And try different Model, Seasonal and Trend Types – additive, multiplicative, etc, as well as your Information criteria."
Something else I noticed was that my data was not sorted on date, but on another field. After sorting on date, the forecasting model became (more) meaningful and no longer produced a flat line. I did not realize the forecasting tools do not sort automatically based on the date field provided.
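For anyone hitting the same issue, the equivalent sort step looks like this in pandas; the field names are placeholders for my actual data.

```python
import pandas as pd

# The rows arrive sorted on another field; the time series tools assume
# chronological order, so sort explicitly on the date field first.
df = pd.DataFrame({
    "region": ["B", "A", "B", "A"],   # the field the data was sorted on
    "date": pd.to_datetime(["2016-02-01", "2016-01-01",
                            "2016-01-01", "2016-02-01"]),
    "cancel_pct": [4.2, 3.1, 3.8, 2.9],
})
df = df.sort_values("date").reset_index(drop=True)
```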
Combining forecasts
The link in my post above provides a good direction on combining forecasts.
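To close the topic, a minimal sketch of the two combination schemes I am considering: the simple equal-weight average that the combining literature generally recommends as a robust default, and error-based weights using holdout MAE values such as those from TS Compare. All numbers are made up.

```python
import numpy as np

# Made-up holdout forecasts from my two best models.
ets_fc = np.array([3.1, 3.4, 3.0, 3.6])    # ETS (percentage forecast)
arima_fc = np.array([2.8, 3.9, 3.2, 3.3])  # ARIMA covariate 2

# 1) Simple average: equal weights, the usual robust default.
combined = (ets_fc + arima_fc) / 2

# 2) Error-based weights: weight each model inversely to its holdout
#    MAE (values here are illustrative).
mae = np.array([0.40, 0.55])               # [ETS, ARIMA]
w = (1 / mae) / (1 / mae).sum()
combined_weighted = w[0] * ets_fc + w[1] * arima_fc

print(combined, combined_weighted)
```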