I had a question regarding the precision of the location optimizer.
I have a model that tests the outcome for a range of numbers of locations to add to a network (i.e. add 10, 20, 30, etc. locations and observe the impact on the optimization score). It uses the Location Optimizer macro inside a batch macro.
However, when I run this, I observe a few things that raise concerns:
The marginal improvement in score does not decrease as I add more locations, as you would expect, since the optimizer should pick the best locations first. (There are some interactions between locations, but they should always be detrimental, not synergistic.)
These problems persist even when I run multiple iterations of the same model (i.e. 10 in parallel, picking the best result at each stage).
At a particular N locations to add, the score suddenly jumps up significantly. This behavior recurs across multiple runs for the same N.
The optimization scores for the same number of locations added vary by 1-2% between runs, even though I'm running the model to converge within 0.15% over 400 generations.
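To make my first concern concrete, here is a toy sketch of why I expect diminishing returns. This is not the Location Optimizer's actual algorithm; it is a made-up coverage-style score (the `NO_SERVICE` cost, `dist2`, and the random points are all my own illustration). Scores like this are submodular, so each greedily added location helps less than the previous one:

```python
import random

random.seed(0)
demand = [(random.random(), random.random()) for _ in range(400)]
candidates = [(random.random(), random.random()) for _ in range(50)]

NO_SERVICE = 10.0  # assumed cost for a demand point served by no location

def dist2(a, b):
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

def score(chosen):
    # Total "savings" versus the no-service cost; higher is better.
    # Each demand point is served by its nearest chosen location.
    return sum(
        NO_SERVICE - min([NO_SERVICE] + [dist2(p, c) for c in chosen])
        for p in demand
    )

chosen, gains = [], []
for _ in range(10):
    # Greedily add the location that improves the score the most.
    pool = [c for c in candidates if c not in chosen]
    best = max(pool, key=lambda c: score(chosen + [c]))
    gains.append(score(chosen + [best]) - score(chosen))
    chosen.append(best)

# Greedy marginal gains on a submodular score are weakly decreasing
# (tolerance for floating-point noise).
print(all(g1 >= g2 - 1e-9 for g1, g2 in zip(gains, gains[1:])))
```

In my runs of the actual macro, the equivalent of `gains` does not decrease, which is what makes me suspect the optimizer is not converging.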
Unfortunately, the documentation for this macro is almost non-existent as far as I can find, and the two resources I have located have not shed much light on this.