
Alteryx Designer Desktop Discussions

SOLVED

Is there an easier way to iteratively remove rows and re-query the same data set?

jdsimcock
6 - Meteoroid

Apologies if this is a little convoluted. I am optimistic that there is a simpler way; I just don't know what it is, hence the request for help...

 

I have created a workflow and related batch macro to first identify the first occurrence of a duplicate in a data set and then remove both related rows (the pair). I then re-query the same (now updated) data set for the next possible first occurrence of a duplicate (the next pair), and so on until all the pairs have been removed. I am doing it this way because one entry in each pair is a negative amount and the other is a positive amount, so they cancel each other out and can be removed. There can be more than one positive match for each negative, but obviously I can't remove all the matches at once because that would not sum to zero, so I have to remove them in pairs and then re-query.

 

To achieve the above, I have found myself (being relatively new to Alteryx) writing/appending the pairs found to an external file, which I then read on the next iteration as items to ignore, so with each pass the list of items to ignore grows until no more matches are found. This iterative process may or may not be inevitable, but what troubles me most of all is that writing/appending to a temp file is very slow, and I am almost sure that there is a better, faster way to do this, hence the call for help.
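For reference, the same iterative idea can be kept entirely in memory instead of round-tripping through a temp file. This is only a rough Python/pandas sketch of the logic described above, not the Alteryx workflow itself; the column name "Amount" is an assumption and should be adjusted to your data.

# Minimal sketch, assuming a pandas DataFrame with a numeric "Amount" column.
import pandas as pd

def remove_contra_pairs_iteratively(df: pd.DataFrame, amount_col: str = "Amount") -> pd.DataFrame:
    df = df.copy()
    while True:
        removed = False
        for idx, row in df.iterrows():
            if row[amount_col] >= 0:
                continue  # only start a pair from a negative amount
            # first positive row that exactly cancels this negative
            matches = df.index[df[amount_col] == -row[amount_col]]
            if len(matches) > 0:
                df = df.drop([idx, matches[0]])  # remove the pair
                removed = True
                break  # re-scan the updated data set for the next pair
        if not removed:
            return df  # no more pairs left to remove

The repeated re-scan is still quadratic in the worst case, which is why the rank-based approach discussed further down the thread scales better.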

 

[Screenshots: Remove Contra Entries workflow; Batch Macro]

12 REPLIES
DavidxL
9 - Comet

Agree that @jdunkerley79's approach scales best. You may want to adjust the Group By on the Multi-Row Formula tools (or alternatively use the Tile tool) to prevent edge cases where transaction amounts are coincidentally equal between transactions of different groups.
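For anyone reading the thread without the attached workflow, one way to read the rank-matching idea (this is an interpretation, not the actual macro) is: within each group and absolute amount, number the positives and negatives separately, and drop any row whose number has a counterpart of the opposite sign. A rough pandas sketch follows; the "Amount" and "Account" column names are hypothetical stand-ins, and including extra key columns in group_cols is the equivalent of the Group By adjustment mentioned above.

# Rough sketch of the rank-matching idea, not the actual Alteryx macro.
# Column names "Amount" and "Account" are assumptions; adjust to your data.
import pandas as pd

def remove_contra_pairs(df: pd.DataFrame, amount_col: str = "Amount",
                        group_cols: tuple = ("Account",)) -> pd.DataFrame:
    df = df.copy()
    df["_abs"] = df[amount_col].abs()
    df["_pos"] = df[amount_col] > 0
    keys = list(group_cols) + ["_abs"]
    # number each row within its (group, |amount|, sign) bucket: 0, 1, 2, ...
    df["_rank"] = df.groupby(keys + ["_pos"]).cumcount()
    # a pair exists for every rank below the smaller of the two sign counts
    counts = (df.groupby(keys + ["_pos"]).size()
                .unstack("_pos", fill_value=0)
                .reindex(columns=[False, True], fill_value=0))
    df = df.join(counts.min(axis=1).rename("_pairs"), on=keys)
    kept = df[df["_rank"] >= df["_pairs"]]
    return kept.drop(columns=["_abs", "_pos", "_rank", "_pairs"])

Because everything is computed in one pass of group-by operations, there is no per-pair re-query of the data set, which is what makes this kind of approach so much faster than the iterative version.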

jdsimcock
6 - Meteoroid

Hi @jdunkerley79

 

This is fantastic: a very quick and elegant solution. I have tried it on my full dataset with great results so far. I am checking one final scenario today, but as with all the others so far I expect it to pass.

 

Many thanks for your (and everyone else's) response and for investing your time in helping me.

 

Jonathan.

Data_Alter
8 - Asteroid

Hi @Claje, is there a way to add logic to the same workflow so that "Document Number" is not the same in the exact matches? I am working on a use case where this would be very helpful.
