I have several workflows that I need to run in steps. For instance, I need my updates table to update AFTER I have run a portion of the workflow that brings in the data and compares the old data to the new data. Here is an example from my smallest workflow:
1. I bring in my products from my ERP System along with the total inventory and the available to promise.
2. I then pull my product list from CRM (not all of our products are in CRM, just the project-based ones).
3. I compare the new data against the prior data and the product data to isolate the records that changed.
4. I create a new table containing only the updated records so that an MS Flow can run and update CRM with available-to-promise inventory.
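The steps above can be sketched in plain Python; the table and field names (`sku`, `atp`) are hypothetical stand-ins for the real ERP/CRM fields, and this is only an illustration of the compare-and-delta logic, not the actual workflow.

```python
# Minimal sketch of steps 1-4: take the new ERP extract, keep only
# products that exist in CRM, and emit the rows whose available-to-promise
# (ATP) value changed since the prior extract.

def find_atp_changes(erp_rows, crm_skus, prior_rows):
    """Return the delta table an MS Flow would push back to CRM."""
    prior_atp = {r["sku"]: r["atp"] for r in prior_rows}
    changed = []
    for row in erp_rows:
        if row["sku"] not in crm_skus:
            continue  # only project-based products live in CRM
        if prior_atp.get(row["sku"]) != row["atp"]:
            changed.append(row)  # new product, or ATP moved
    return changed


# Example: product B is new since the prior extract, so only B is emitted.
delta = find_atp_changes(
    erp_rows=[{"sku": "A", "atp": 5}, {"sku": "B", "atp": 3}],
    crm_skus={"A", "B"},
    prior_rows=[{"sku": "A", "atp": 5}],
)
# delta == [{"sku": "B", "atp": 3}]
```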
There are more steps here, but you get the point. In all, there are 10 steps in this workflow that I currently have to schedule as separate workflows. That isn't a problem by itself, except that I have 30 different jobs, each with 10-20 steps that have to run in sequence. This often causes data-availability issues in our company. Maybe I just don't know how to apply them properly, but the CReW Macros don't seem to provide the necessary functionality either. We are a mid-sized global manufacturing company, and I'm not sure many of our use cases were considered when Alteryx was developed. I love the tool and hope we can continue using it, but I have to solve these issues before renewal.
It seems to me that Alteryx could solve a lot of users' problems by simply allowing us to create containers around each portion of a workflow and apply a sequence number to each container.
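The suggested feature amounts to a simple scheduler: sort the containers by their sequence number and run them in that order. A minimal sketch, with purely illustrative names (nothing here is an Alteryx API):

```python
# Hypothetical sketch of "containers with sequence numbers": each container
# is a (sequence_number, callable) pair, executed in ascending order.

def run_in_sequence(containers):
    """Run each container's step in sequence-number order."""
    results = []
    for seq, step in sorted(containers, key=lambda c: c[0]):
        results.append(step())
    return results


# Example: the load step runs before the compare step even though it is
# listed second, because its sequence number is lower.
order = run_in_sequence([(2, lambda: "compare"), (1, lambda: "load")])
# order == ["load", "compare"]
```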
Inside a workflow, use the Block Until Done tool. It lets you enforce a dependency-based execution order within a single workflow. The CReW macros are helpful for chaining workflows and managing dependencies between them, but within one workflow, Block Until Done is the right tool.
To be frank, pulling data into memory should not require a dependency unless you are doing a dynamic input of some kind. If you know the table and the extraction is independent, you can bring it into memory at any point; it can sit there until you are ready to process it (for example, in step 2 or 3 inside the Block Until Done).
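The reply's point can be sketched as follows: the two extracts are independent and can load eagerly in either order, and only the processing step needs to wait for both, the way a Block Until Done gate would. `load_erp` and `load_crm` are hypothetical stand-ins for the real inputs.

```python
# Sketch: eager, order-independent loads; only the final step depends
# on both inputs being present in memory.

def load_erp():
    return [{"sku": "A", "atp": 5}]

def load_crm():
    return {"A"}

def run():
    erp = load_erp()   # independent extract; sits in memory until needed
    crm = load_crm()   # swapping these two loads changes nothing
    # this is the only step with a real dependency on both inputs
    return [r for r in erp if r["sku"] in crm]
```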