However, this only works when the Tool ID of the Test Tool is less than the Tool ID of the Output Tool (17 vs. 18 in the example).
I placed my Test Tool in my workflow after discovering a potential issue, so its Tool ID is the highest in the workflow (113). When I run, it's the last tool to execute, so it won't throw an error until after all files have already been written.
Block Until Done doesn't help as it only really blocks on output tools (maybe it needs to be re-named Excel Enabler Tool).
There has to be a better solution than to delete and re-insert all my other tools so that their Tool ID is higher than 113.
Edit: I added a small demo where the output is run (Tool ID = 3) before the Test Tool is run (Tool ID = 5).
Have you tried leveraging the "annotation" capability in the configuration? Click on the tool, say the Test Tool, then use the annotation to assign an ID number that makes your Test Tool higher than all the other tools, rather than deleting and re-inserting.
Otherwise, share a workflow example that we can play with...
Hi RN02, unfortunately that field is immutable (at least in 2018.4 with my permissions), and I would actually need the Test Tool's ID to be lower than my other tools in order for it to run first. That would require finding an ID that isn't being used by any tool in the workflow, which may not be possible. Either that, or increasing the Tool IDs of all the tools I want to run afterward, which is about as tedious as deleting and re-inserting.
Have you considered using the Message tool instead of the Test tool? It sits inline with your workflow and can be configured to error and stop passing records through the tool. You may need to do extra work to add the criteria to the data, though, since the option that would interest you is "Before rows where expression is true".
I will be conducting a session at Inspire about defensive configuration. I'll be discussing topics like this one, and I encourage you to join the session.
Stopping the workflow versus not outputting data might be the real question: can you prevent data from escaping the safety of the workflow, so you don't risk putting out bad data?
Ideally, an output configuration would block all output until your defense has had a chance to review all conditions before it. I have a trick for that: I count my error records and use an Append Fields tool in front of a Filter. Only zero-defect data (all or none) can pass; if a single defect is encountered, the filter passes no data. There are additional steps after the filter to clean up and define the output.
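Outside of Alteryx, the same all-or-none gate can be sketched in plain Python (the data and field names here are purely illustrative, not from the original workflow):

```python
# All-or-none gate: count the defect records, append that count to every row
# (the Append Fields step), then filter so rows pass only when the count is zero.

def zero_defect_gate(rows, is_defect):
    """Return all rows if none is defective, otherwise an empty list."""
    defect_count = sum(1 for r in rows if is_defect(r))  # Summarize: count errors
    # Append Fields: attach the total defect count to every record.
    appended = [dict(r, defect_count=defect_count) for r in rows]
    # Filter: only zero-defect data may escape the workflow.
    return [r for r in appended if r["defect_count"] == 0]

data = [{"id": 1, "amount": 10}, {"id": 2, "amount": -5}]
# One bad record blocks the entire output (prints an empty list).
print(zero_defect_gate(data, lambda r: r["amount"] < 0))
```

A single defective record zeroes out the whole output, which mirrors the "don't risk putting out bad data" goal above.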
Alteryx ACE & Top Community Contributor
That worked exactly as I was hoping the Test Tool would! Thanks so much!
Thanks for that suggestion, and that's a really clever solution with the Append Fields - I'll have to keep it in mind for the future. In my workflow, I'm not exactly trying to get rid of bad data; rather, I want to detect when multiple sets of data flow in from one source (each acceptable on its own, but not together) and let the user know to clean up their input before trying again. Maybe there's a way to do it with Append Fields and a Summarize with Count Distinct, but for now I'd rather just kill the workflow.
Unfortunately I'm going to miss Inspire this year. I hope you'll host that session at Inspire 2020 as well!