Fact: workflows are the best. Look it up. They’re all about getting things done and, with hundreds of tools and the ability to integrate external processes, there’s no shortage of things you can get done. We know that there are some areas of analytics that require a little extra firepower, however, and that’s why you can leverage your workflows in apps and macros for added functionality.
There’s a lot going on in the world of analytics. Endless data stores and insights are at the other end of an internet connection and, as analysts, we’re always in on the action. In the thick of the fray, with data whizzing by at lightning speed, being equipped with the right tools is a must. Like you, Alteryx likes to live dangerously, and we’re always ready for action.
Duplicate macros often appear when you import macros along with workflows, or when you install different versions of the same macro. Fortunately, there are a few steps you can take to clean up your tool palette.
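As a starting point for that cleanup, here is a minimal Python sketch that flags duplicate .yxmc files across macro folders. The folder paths below are placeholders, not your configuration; the search paths Designer actually uses are set under Options > User Settings.

from collections import defaultdict
from pathlib import Path

# Hypothetical macro search paths; substitute the folders configured
# in your own Designer user settings.
SEARCH_PATHS = [Path(r"C:\Users\me\Alteryx\Macros"), Path(r"C:\Shared\Macros")]

def find_duplicate_macros():
    """Group .yxmc files by name so duplicates across folders stand out."""
    seen = defaultdict(list)
    for root in SEARCH_PATHS:
        for macro in root.rglob("*.yxmc"):
            seen[macro.name.lower()].append(macro)
    return {name: paths for name, paths in seen.items() if len(paths) > 1}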
This question is highly relevant when you want to avoid Alteryx errors after hardcoding a workflow process to expect certain field names. When you are processing multiple files with different field schemas, you don't want Alteryx to stop processing all the files because one file fails. Ideally, you want Alteryx to skip that file and carry on!
Attached is a sample workflow that can help you answer the above question:
The workflow starts with a Directory tool pointed at the folder that contains all of your input files.
As this example uses .xlsx files, we use a Formula tool to append the sheet name to the full path.
We then feed this data into a batch macro.
The batch macro allows us to process one file at a time, with each file going through the same process.
The Input Data tool within the macro is replaced by the incoming file path.
We then use a RecordID tool so that we can preserve data integrity when we transpose the data for the filtering process downstream.
In the Transpose tool, we use the RecordID field as the key field to pivot on. We then transpose the rest of the data fields to create a Name field (which will hold your field headers) and a Value field (which will hold your data). This part stays dynamic if new fields are added: because 'Dynamic or Unknown Fields' is checked, new fields fall into the Name field, which the Filter tool references.
Within the Filter tool, you can now add the field names you need for your workflow.
On the True side of the filter you will have the fields and values that met your criteria; on the False side, you will have the fields that did not.
The ultimate goal, though, is to bring back the whole dataset if the file had fields that met your criteria.
To do this, we use a Formula tool to add the file path, which we can use outside of the macro to bring the whole dataset back together.
The final tool is a Cross Tab tool, which orients the data back into its original format using the RecordID field as the Group By field (a sketch of these macro internals follows).
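For readers who think in code, here is a rough pandas equivalent of what the macro does to each file. This is a sketch under assumptions, not the workflow itself: the required field names are invented for illustration, and the sheet argument stands in for the path-plus-sheet-name string the Formula tool builds (e.g. file.xlsx|||Sheet1).

from typing import Optional
import pandas as pd

REQUIRED_FIELDS = {"CustomerID", "Revenue"}  # hypothetical filter criteria

def process_one_file(path: str, sheet: str = "Sheet1") -> Optional[pd.DataFrame]:
    """Rough equivalent of the batch macro: RecordID, Transpose, Filter, Cross Tab."""
    df = pd.read_excel(path, sheet_name=sheet)
    df["RecordID"] = range(1, len(df) + 1)          # RecordID tool
    long = df.melt(id_vars="RecordID",              # Transpose tool
                   var_name="Name", value_name="Value")
    if not REQUIRED_FIELDS & set(long["Name"]):     # Filter tool: no field matched,
        return None                                 # so this file is skipped
    wide = long.pivot(index="RecordID",             # Cross Tab tool: back to the
                      columns="Name",               # original orientation
                      values="Value").reset_index()
    wide["FullPath"] = path                         # Formula tool: tag the source file
    return wide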
Save the macro (File >> Save As).
Insert the macro into the workflow and connect it to the Formula tool. In the macro's configuration, choose 'FullPath' from the dropdown. This will update the Input Data tool and the Formula tools.
The two outputs on the macro correspond to the True and False sides of the filter. You can now use a Join tool, connecting the True output to the Left input and the False output to the Right input.
The fields you will join on are the full path and RecordID.
Now, if a file met your condition in the filter, it should have values on both the left and right macro outputs. Therefore, in the 'J' output of the Join tool, you should see the data from your successful files.
In the 'R' output, you should see all the data from the files that did not meet your condition, as they have nothing to join to on the left (True) side of the filter.
You can then paste your desired workflow onto the 'J' output of the Join tool and continue your data process.
This now allows only files with the desired field headers to pass through, and it prevents your workflow from breaking when a file with the wrong field schema comes in (the outer loop and join are sketched below).
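Continuing the sketch above, the outer workflow (Directory tool, batch macro, join) might look like this in pandas; the input folder is a placeholder:

import glob
import pandas as pd

files = glob.glob(r"C:\Input\*.xlsx")            # Directory tool: list the input files
results = [process_one_file(f) for f in files]   # batch macro: one file at a time

# Join-tool equivalent: files whose schema matched flow on (the 'J' output);
# files that returned None are the non-matching ('R') side and are skipped.
matched = [df for df in results if df is not None]
combined = pd.concat(matched, ignore_index=True) if matched else pd.DataFrame()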
S/O to Shaan Mistry who brainstormed this workflow with me.
Macros can be designed to handle nearly any repeatable operation in Alteryx. If you’re working with formalized, established processes and prefer not to duplicate work, or you’ve just found a handful of useful macros on our Public Gallery, being able to maintain macros is a must.
You probably already know that you can output results to multiple sheets of an Excel file. If not, you should check out our resource that explains how to do that very thing. But what if you run that workflow every day, and you want to keep the outputs from days past?
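One common pattern (a sketch of the general idea, not necessarily the linked resource's method) is to date-stamp the output filename so each day's run lands in its own workbook. In Python, with a placeholder output folder:

from datetime import date
import pandas as pd

def write_daily_output(df: pd.DataFrame, sheet: str = "Results") -> str:
    """Write df to a date-stamped workbook so past days' outputs are preserved."""
    path = rf"C:\Output\results_{date.today():%Y-%m-%d}.xlsx"  # hypothetical folder
    df.to_excel(path, sheet_name=sheet, index=False)
    return path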
Some Excel files, particularly those created by third-party programs, are encoded differently than Alteryx expects. In many cases, such files can be opened in Excel and then saved, resulting in files that Alteryx can open.
Please refer to the Designer Knowledge Base article entitled "How to check for encoding or formatting issues with Excel worksheets" for diagnosis tips.
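When re-saving by hand doesn't scale, the same open-and-resave fix can be scripted. Here is a minimal sketch using openpyxl, assuming the third-party file is at least readable by that library:

from openpyxl import load_workbook

def resave_workbook(src: str, dst: str) -> None:
    """Open a third-party .xlsx and rewrite it as a conventionally encoded file."""
    wb = load_workbook(src)   # parse whatever the third-party program produced
    wb.save(dst)              # write a standard .xlsx that Alteryx can open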
Calgary is a list count data retrieval engine designed to perform analyses on large-scale databases containing millions of records. One use case for a Calgary database in an App or Macro is to offer users certain predetermined criteria (e.g. geography) to select and then return the corresponding data to them. A back-end process can combine multiple data sources, format and pre-filter as needed, and load them into a Calgary database that can be queried quickly by the app or macro. This way, the bulk of the processing happens outside of the app or macro, allowing data to be returned more quickly (a sketch of the pattern follows).
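Calgary's loader and query tools live inside Designer, so as a language-neutral illustration of the pattern only, here is a sketch with SQLite standing in for the indexed back-end store; the column names and pre-filter are invented:

import sqlite3
import pandas as pd

def build_store(sources, db_path="store.db"):
    """Back-end process: combine sources, pre-filter, and load an indexed store."""
    data = pd.concat((pd.read_csv(s) for s in sources), ignore_index=True)
    data = data[data["Country"] == "US"]              # hypothetical pre-filter
    with sqlite3.connect(db_path) as con:
        data.to_sql("records", con, if_exists="replace", index=False)
        con.execute("CREATE INDEX IF NOT EXISTS ix_state ON records (State)")

def query_geography(state, db_path="store.db"):
    """App/macro side: fast retrieval for a predetermined criterion (geography)."""
    with sqlite3.connect(db_path) as con:
        return pd.read_sql("SELECT * FROM records WHERE State = ?",
                           con, params=(state,))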