Hmm, this is a puzzler! The number of permutations you'll need to test through an iterative macro is potentially huge. Can you provide a slightly larger sample of the data, i.e. with more than one total to match?
Interesting problem. This could be one of the weekly challenges!
It's true that the number of permutations could get big very fast. Fortunately, what you are really looking for is the number of combinations (a much smaller number). This is defined by: n!/[r!(n-r)!].
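To get a feel for how much smaller that number is, here's a quick Python check using the standard library (the n and r values are just illustrative):

```python
from math import comb, perm

# Ordered arrangements (permutations) vs unordered selections
# (combinations) of r transactions drawn from n candidates.
n, r = 20, 5
print(perm(n, r))  # 1860480 ordered arrangements
print(comb(n, r))  # 15504 unordered selections -- n!/[r!(n-r)!]
```

Since only the sum of the selected transactions matters, order is irrelevant, and the search space shrinks by a factor of r! (here 5! = 120).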
The thing you have to determine beforehand is the maximum number of transactions that can add up to your totals. Here's my attempt with the sample data you provided:
Thanks to JSNelson08 for submitting this - he was helping me with this project internally.
I ended up resolving this with nested macros. The outer macro is a batch macro that does the following, grouping by a field called "contract type" to reduce the data volumes:
1) Add record IDs to the set of records
2) Check whether the sum of all prices x quantities for this batch matches the target total. If so, don't pass anything to the inner macro, since we already know the whole batch is matched.
3) Pass the unmatched items to the inner macro twice, since the macro takes one set of records as a constant and iterates on the other.
The inner macro is an iterative macro. It first checks whether any records on its Right input match off against the Left; if not, it does a Cartesian join of its Right input to itself (i.e. creating a matrix of [price x quantity] x [price x quantity] data), summing the [price x quantity] values and concatenating the record IDs with a comma delimiter. It then iterates on the joined output.
I had to put a couple of checks into the iterative macro to try to further reduce data volumes however I could. The big one was to take any [price x quantity] combinations that get matched out of the data set entirely, so that the iterative process wouldn't continue to balloon out on the records that already found their match.
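Here's a rough Python sketch of that inner iterative logic (my own translation, not the Alteryx macro itself): grow the pool of candidate combinations by "self-joining" disjoint combinations, sweep out anything that hits a target, and prune combinations that reuse an already-matched record so the pool doesn't balloon:

```python
def find_matches(items, targets, max_iterations=3):
    """items: list of (record_id, price_x_qty) tuples.
    targets: set of totals to match off."""
    pool = [(frozenset([rid]), amt) for rid, amt in items]
    matched = []
    remaining = set(targets)

    def sweep(pool):
        # Pull out any combination that hits a target, then drop
        # combinations that reuse an already-matched record ID.
        for ids, total in pool:
            if total in remaining:
                matched.append((sorted(ids), total))
                remaining.discard(total)
        used = {rid for ids, _ in matched for rid in ids}
        return [(ids, t) for ids, t in pool if not ids & used]

    pool = sweep(pool)                      # check single records first
    for _ in range(max_iterations):
        if not remaining:
            break
        # Cartesian "self-join": merge every pair of disjoint combinations,
        # summing totals and unioning the record-ID sets.
        joined = [(a | b, s + t)
                  for a, s in pool for b, t in pool if not a & b]
        # Keep earlier combinations too, de-duplicating by ID set
        # (A+B and B+A produce the same combination).
        pool = list({ids: (ids, t) for ids, t in pool + joined}.values())
        pool = sweep(pool)
    return matched
```

Note this greedy removal mirrors the workflow's trade-off: taking matched combinations out of the pool keeps volumes down, at the cost of occasionally committing to a match that blocks a different valid pairing later.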
Then the macros spit out the matched sets of record IDs and their prices x quantities, as well as anything that remains unmatched.
This was definitely a more difficult project than I anticipated when I was describing it to my office, and huge thanks to JSNelson08 for his help!

Obviously this workflow can really start to chug if the data set gets too large, so I set the maximum iterations down to 3. It doesn't always match everything off as a result, but it still does a pretty solid job in most situations. The only sets it can't match (assuming it SHOULD match, i.e. our data set is valid and matchable) are ones with a large number of records coming through within a single "contract type" batch.

I did a back-of-the-envelope calculation assuming each record in the iterative macro (price x quantity, record ID) was only 10 bytes, and for a 20-item set, the full Cartesian join is ~2.6MB. A 30- or 50-item batch would be too much for Alteryx to handle in a reasonable amount of time, so I just restricted the number of iterations, and we deal with the resulting occasional failures to match as a manual procedure on the back end. This still saves us a lot of time even with that limitation!
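For what it's worth, that ~2.6MB figure lines up with a reading where each of the 3 iterations doubles the maximum combination size (1 -> 2 -> 4 -> 8 records) while keeping earlier combinations in the pool, so the pool holds every combination of 1 to 8 records from the 20. A quick sanity check of that arithmetic (my interpretation, not the exact workflow):

```python
from math import comb

n = 20            # records in the batch
max_size = 8      # 3 doubling iterations: sizes 1 -> 2 -> 4 -> 8
bytes_per_rec = 10

combos = sum(comb(n, r) for r in range(1, max_size + 1))
print(combos)                   # 263949 candidate combinations
print(combos * bytes_per_rec)   # 2639490 bytes, i.e. ~2.6MB
```

Running the same sum for a 30-item batch gives roughly 22 billion combinations, which makes it clear why capping iterations (and accepting some manual cleanup) was the practical choice.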