I have a requirement to read a bunch of .csv files at once, process them for some calculations, and then dump the original rows into another source table. For this, I tried to use the Block Until Done tool as shown below, where I expected the flow from output 1 to complete first and the data dump to happen from output 2. I observed that even when there is an error in the processing part, the data still reaches the source table, which is not correct.
Am I missing something here, or is there some other way to achieve this?
Also, I found a problem with how this works upon some more testing.
I was trying to move the source .csv files to another folder after processing the data (after writing to a snapshot table but before writing the data to the source table). I am doing this via a batch (.bat) file in the Events section of the workflow. So the flow, in a nutshell, is:
Read data from CSV files -- Process the data -- Write to a Final Table -- Move the files to some other archive folder -- Write the source data to a source table.
But I found that if this batch file gives an error (which is basically the end of stream 1), the data is still written to the source table. Is this happening because it is an event acting on an external file, so Alteryx does not consider it an error within the flow?
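For reference, here is a minimal POSIX-shell sketch of the archive step (the actual workflow uses a Windows batch file; the function name and folder arguments here are illustrative, not from the workflow). The point is that the script signals failure through a non-zero return code, so whatever invokes it at least has something to check:

```shell
# archive_csvs SRC_DIR ARCHIVE_DIR
# Moves every .csv file from SRC_DIR into ARCHIVE_DIR.
# Returns non-zero (the shell equivalent of a batch file's
# error exit code) if any move fails.
archive_csvs() {
    src="$1"
    dest="$2"
    mkdir -p "$dest" || return 1
    for f in "$src"/*.csv; do
        # If the glob matched nothing, $f is the literal pattern; skip it.
        [ -e "$f" ] || continue
        mv "$f" "$dest"/ || { echo "failed to move $f" >&2; return 1; }
    done
    return 0
}
```

Note that even a clean exit code only helps if the calling workflow actually inspects it; as the answer below explains, the tools upstream of the event never learn about this failure on their own.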
The Block Until Done tool does not wait for stream 1 to complete before processing stream 2; it just waits until it has finished feeding all the data out of stream 1 before starting to feed data out of stream 2. In general, tools in Alteryx know about upstream tools but not necessarily downstream tools.
Hence, if there is an error after one of the Block Until Done tools, the BUD tool doesn't care about it... or even know about it...
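A rough Python analogy of what Block Until Done guarantees, and what it doesn't (the function and names are illustrative, not Alteryx internals):

```python
def block_until_done(records, stream1, stream2):
    """Analogy of the Block Until Done tool: it finishes *emitting*
    every record to output 1 before it starts emitting to output 2,
    but it never learns whether the tools downstream of output 1
    ultimately succeeded or failed."""
    for rec in records:
        stream1(rec)   # feed all data out of output 1 first...
    for rec in records:
        stream2(rec)   # ...then output 2, regardless of what
                       # happened downstream of output 1

errors = []    # stands in for errors raised downstream of output 1
written = []   # stands in for rows landing in the source table

def failing_process(rec):
    # Downstream of output 1: every record "fails", but the BUD
    # tool has no channel through which to find that out.
    errors.append(f"error processing {rec}")

block_until_done([1, 2, 3], failing_process, written.append)
# written still receives all three records despite the errors
```

Running this, `written` ends up with all three records even though every record in stream 1 hit an error, which mirrors the behavior described in the question.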
That might help explain why you have to move the processing upstream, and why the parallel BUD tool is still such a valuable add-on.