My issue is that I want to be able to cache the data, which I can't do this way (since it's many GB, loading takes a long time). And I don't know which IDs to fetch for the lookup in the second part until the initial data has been loaded.
I don't think I can help yet, because I don't understand exactly what you're trying to do. In the workflow screenshot you posted, the red boxes don't make clear what you're trying to point out.
I'm guessing the workflow image shows that you're writing out to a YXDB file and then reading that file back in within the same workflow, but you don't want to do it this way because the data stream is very large?

I'm only making assumptions about the actual problem. Can you describe exactly what you're trying to do, step by step?
If you are trying to cache a large data stream, what testing have you done with the Block Until Done tool? Is the tool not working correctly for you?
Maybe the problem is that you're using the CReW macro "CReW_LabelledBlockUntilDone.yxmc"? That macro seems to send all records through output anchor 1, so I'm not sure how it's supposed to work.
Have you tried the native Alteryx Block Until Done macro instead?