I noticed that only a small percentage of the Pet and Owner records joined, so I wondered which would be better for run time: 1) using a Join Multiple to join all of the datasets together at once, or 2) filtering the rewards data first, to reduce the number of records in that join (since we only care about records where the reward is over the threshold). Option 2 is implemented above.
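For anyone who wants to see the two strategies side by side outside of Alteryx, here is a minimal pandas sketch. All names (columns, threshold, toy values) are hypothetical stand-ins for the real datasets; the point is only the ordering of filter vs. join:

```python
import pandas as pd

# Toy data standing in for the Pet, Owner, and reward datasets.
pets = pd.DataFrame({"animal_id": [1, 2, 3, 4], "pet": ["a", "b", "c", "d"]})
owners = pd.DataFrame({"animal_id": [1, 2, 3, 4], "owner": ["w", "x", "y", "z"]})
rewards = pd.DataFrame({"animal_id": [1, 2, 3, 4], "reward": [5, 50, 7, 80]})
THRESHOLD = 10  # assumed reward cutoff

# Option 1: join all three datasets at once, then filter on reward.
opt1 = (pets.merge(owners, on="animal_id")
            .merge(rewards, on="animal_id"))
opt1 = opt1[opt1["reward"] > THRESHOLD]

# Option 2: filter the rewards data first, so the join sees fewer records.
high_rewards = rewards[rewards["reward"] > THRESHOLD]
opt2 = (pets.merge(owners, on="animal_id")
            .merge(high_rewards, on="animal_id"))
```

Both orderings return the same rows; the question in the thread is purely about which one runs faster at scale.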
I made the datasets larger by generating additional Animal IDs, so each dataset had about 2.6 million records with approximately the same join rate.
With 20 runs each, option 1 averaged about 25% faster than option 2 (but not significantly). Option 1: mean = 8.9 s, stdev = 1.34 s. Option 2: mean = 12.2 s, stdev = 1.51 s.
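For reference, a mean/stdev summary like the one above can be collected with a short timing harness. This is a generic sketch, not the workflow I actually used; the workload passed in is a placeholder for however you trigger each option end to end:

```python
import statistics
import time

def time_runs(fn, runs=20):
    """Call fn() `runs` times and return (mean, stdev) of wall-clock seconds."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        samples.append(time.perf_counter() - start)
    return statistics.mean(samples), statistics.stdev(samples)

# Trivial placeholder workload; substitute a call that runs option 1 or 2.
mean_s, stdev_s = time_runs(lambda: sum(range(100_000)))
print(f"mean={mean_s:.3f} s, stdev={stdev_s:.3f} s")
```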
I went with a Trade Area tool with a 500-mile radius followed by a Spatial Match, which seems to be unique in this thread. Is there a particular reason to go with the Distance tool instead? A limitation I wasn't considering?