This question is highly relevant when you want to avoid Alteryx errors after hardcoding a workflow to expect certain field names. When processing multiple files with different field schemas, you don't want Alteryx to stop processing all the files because one file fails. Ideally, Alteryx should skip that file and carry on!
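The skip-and-continue idea can be sketched in plain Python: check each file's headers against the required schema and skip the ones that don't match, instead of aborting the whole run. The file names, headers, and required fields below are made up for illustration.

```python
# Required field headers the downstream process expects (hypothetical)
REQUIRED = {"Region", "Sales"}

# Stand-ins for the headers found in each input file (hypothetical)
files = {
    "jan.xlsx": ["Region", "Sales", "Units"],
    "feb.xlsx": ["Area", "Revenue"],  # wrong schema: should be skipped
}

processed, skipped = [], []
for path, headers in files.items():
    if not REQUIRED.issubset(headers):
        skipped.append(path)  # skip the file and carry on
        continue
    processed.append(path)    # only well-formed files reach the real work
```

This is the behaviour the batch macro below achieves inside Alteryx, without writing any code.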
Attached is a sample workflow that can help you answer the above question:
The workflow starts with a Directory tool pointed at the folder that contains all your input files
Because this example uses xlsx files, we use a Formula tool to append the sheet name to the full path
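The string the Formula tool builds can be sketched as follows. Alteryx reads xlsx inputs from a path of the form `<path>|||<sheet>`; the folder and sheet name here are assumptions for illustration.

```python
# Paths as the Directory tool might emit them (hypothetical examples)
paths = [r"C:\data\jan.xlsx", r"C:\data\feb.xlsx"]

# Append the sheet name, mirroring the Formula tool expression
full_paths = [p + "|||Sheet1" for p in paths]
```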
We then feed this data into a Batch Macro
The batch macro lets us process one file at a time, with each file going through the same process
The input file within the macro is replaced by the incoming file path
We then use a RecordID tool so we can preserve data integrity when we transform the data for the downstream filtering process
In the Transpose tool, we use the RecordID field as the key field to pivot on. We then transpose the remaining data fields to create a Name field (containing your field headers) and a Value field (containing your data). This step stays dynamic if new fields are added: because 'Dynamic or Unknown Fields' is checked, new fields fall into the Name field, which the Filter tool can then reference
Within the Filter tool, you can now add in the field variables you need for your workflow
On the True side of the filter you will have the fields and values that met your criteria; on the False side, the fields that did not
The ultimate goal, though, is to bring back the whole dataset if the file had fields that met your criteria
To do this, we use a Formula tool to add the file path, which we can use outside the macro to bring the whole dataset back together
The final tool is a Cross Tab tool, which orients the data back into its original format using the RecordID field as the group-by field.
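The macro's Transpose, Filter, and Cross Tab steps have close analogues in pandas (`melt`, boolean filtering, `pivot`), which may make the round trip easier to see. The sample data and required fields are hypothetical.

```python
import pandas as pd

# A toy input file after the RecordID tool has run (hypothetical data)
df = pd.DataFrame({"RecordID": [1, 2],
                   "Region": ["East", "West"],
                   "Sales": [100, 200]})

# Transpose: RecordID is the key; all other fields become Name/Value pairs
long = df.melt(id_vars=["RecordID"], var_name="Name", value_name="Value")

# Filter: keep only the field headers your workflow expects
required = {"Region", "Sales"}
true_side = long[long["Name"].isin(required)]

# Cross Tab: orient the data back to its original shape, grouped by RecordID
wide = true_side.pivot(index="RecordID", columns="Name",
                       values="Value").reset_index()
```

Because the filter operates on the Name column rather than on hardcoded field positions, new or unexpected fields never cause an error; they simply fall to the False side.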
Save the macro (File>>Save As)
Insert the macro into the workflow and connect it to the Formula tool. In the macro's configuration, choose 'Full Path' from the dropdown. This will update the Input tool and the Formula tools
The two outputs on the macro correspond to the True and False sides of the filter. You can now use a Join tool and connect the True and False outputs to its left and right inputs
The fields you will join on are the full path and RecordID
If a file met your condition in the filter, it will have values on both the left and right macro outputs. Therefore, in the 'J' output of the Join tool, you should see the data from your successful files
In the 'R' output, you should see all the data from files that did not meet your condition, since they have nothing to join to on the left (True) side of the filter
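The Join step can be sketched with a pandas `merge`: rows present on both sides land in 'J' (successful files), while right-only rows land in 'R' (files that failed the schema check). The file paths and field names below are hypothetical.

```python
import pandas as pd

# True output of the macro: fields that met the criteria (hypothetical)
true_out = pd.DataFrame({"FullPath": ["a.xlsx", "a.xlsx"],
                         "RecordID": [1, 2],
                         "Name": ["Sales", "Sales"],
                         "Value": [100, 200]})

# False output: fields that did not meet the criteria (hypothetical)
false_out = pd.DataFrame({"FullPath": ["a.xlsx", "a.xlsx", "b.xlsx"],
                          "RecordID": [1, 2, 1],
                          "Name": ["Extra", "Extra", "Extra"],
                          "Value": [5, 6, 7]})

# Join on full path + RecordID; the indicator column tells us which
# side(s) each row came from, mirroring the Join tool's J and R outputs
merged = true_out.merge(false_out, on=["FullPath", "RecordID"],
                        how="outer", indicator=True,
                        suffixes=("_true", "_false"))
j_output = merged[merged["_merge"] == "both"]        # successful files
r_output = merged[merged["_merge"] == "right_only"]  # failed files
```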
You can then connect your desired downstream workflow to the 'J' output of the Join tool and continue processing
This now allows only files with the desired field headers to pass through, preventing your workflow from breaking when a file with an incorrect field schema comes in.
Shout-out to Shaan Mistry, who brainstormed this workflow with me.