Is there a way to prevent the Cross Tab tool from sorting all of the input information alphabetically? Simply add a RecordID to your records and use the RecordID field as a grouping field in your Cross Tab tool to preserve the original order!
You may have noticed that the Cross Tab tool will insert underscores in your headers. There may be times when this is ok, but for those times when it’s not, how do you get the original format of the data back - without the underscores? Attached is an example that provides a solution. Feel free to bypass the rest of this posting and go directly to the example.
If you'd like a narrative of what is going on in the example, read on. The Text Input tool contains the following data:
We want to cross tab our data, so the values in Dept Name become our new headers, with Cost in Millions as our values.
So now our data looks like this:
where the headers have been changed to include underscores. Very uncool. So in a separate workflow stream, we're going to get the original format.
Attach a Select tool to the Text Input:
And select Program Type and Dept Name. We don't need Cost in Millions anymore. We're going to use the Dept Name field - which is in the original format - and eventually use it to replace the underscored headers.
After the select, our data looks like this.
However, we only need one row per Dept Name. You can see that Dept Name is repeated for each Program Type. So attach a Unique tool and check Dept Name in the configuration window.
Now we're left with the following data:
This is where it gets interesting. We need to make a copy of the Dept Name, but we want it to look exactly like the headers with the underscores in them. Why? We'll use this new field to align our data in a subsequent Union tool. In the meantime, back to creating the new field with underscores in the data.
This REGEX_Replace expression will take the current Dept Name field and replace the colons and spaces with underscores. In the real world, your actual data may contain other characters that were replaced with underscores by the Cross Tab tool. You'll need to modify this expression accordingly. In this example, the result of the REGEX_Replace statement looks like this:
where Dept Name2 looks like the headers from the Cross Tab tool.
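Outside of Designer, the underscore substitution that the REGEX_Replace expression mimics can be sketched in Python; the function name and sample value below are hypothetical, and your data may need more characters in the class:

```python
import re

def mimic_crosstab_header(name: str) -> str:
    # Replace the characters the Cross Tab tool rejects in headers
    # (here just spaces and colons) with underscores.
    return re.sub(r"[ :]", "_", name)

print(mimic_crosstab_header("Dept: Space Exploration"))
```

Running this on a value like "Dept: Space Exploration" produces the same underscored form the Cross Tab tool generates for its headers.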
So rather ironically we're going to cross tab this data. Connect a Cross Tab tool and in the configuration, we'll group by Program Type, make the new headers Dept Name2 and the values will be Dept Name:
The result looks like this:
We're ready to union our data at this point. Since the headers from both Cross Tab tools look the same, we can select 'Auto Configure by Name'.
Make sure the Cross Tab with the original format is attached to the Union Tool first.
By specifying the output order in the Union Tool configuration, we ensure the original field data is on top (rather than the bottom).
For the grand finale, we'll add a Dynamic Rename tool and configure the tool to 'Take Field Names from First Row of Data'. Notice that Program Type is not checked in the configuration. That's because this field is already in the format we want, so we unselect that field.
Now we have our data, cross-tabbed, with the original format. Our headers look the way we want them to!
You know that a huge part of conveying information to your audience is your visual presentation. Here's a way to increase the amount of information that can be shared with just a glance: Segment the data in a chart with color.
In a previous article, we showed you how to upload to FTP using the Download tool. With the release of Alteryx 10.5, the Download tool now supports uploading to SFTP. With this addition, we'll take the opportunity to show you some more examples of uploading data to SFTP/FTP and show you how seamless it can be.
So we’ve generated our list of Key Values (in Part 1) to be read by the Tree Interface tool and when we look at the Tree Interface it looks correct.
So how can we use these selected values in a workflow?
As a review, our original data consists of very pretend pet data.
Now we need to associate the Key Values to our data so that our data can be queried. This can easily be done by adding a couple of steps to the workflow we created in Part 1 (the updated workflow is attached).
By adding a Transpose tool, the table can be reshaped to get each possible Key Value for each record. Then a Select tool renames the Value field to Key. Those results are saved to a new file (Pet data with key values.yxdb) that will be used in the app.
When values are selected in a Tree, a list of the Key Values is returned with a line break between each value. In the above Tree Interface example (Figure 1), the Key Values 111 and 26 are returned as a string data type and would look like this:
Setting up the App
We now have all the parts we need to create the app: Pet Key values.yxdb (from Part 1) and Pet Data with Key Values.yxdb
The start of the app is a Tree Interface tool that points to the Pet Key Values.yxdb file, and an Input tool pointing to the Pet Data with Key Values.yxdb file.
We know the Tree Interface returns a list of Key Values, so we can then filter our pet data on the Key Values. A Filter tool is added with a template filter that will ultimately be updated by an Action tool, once we connect the Tree Interface to the Filter tool.
Now let’s connect the Tree tool to the Filter tool and configure the Action tool.
We are going to select the Action Type ‘Update Value with Formula,’ and update the full Expression. Now for the Formula: click on the ellipsis (…) to add this formula:
'[Key] in ("' + REGEX_Replace([#1], '\n', '","') + '")'
This expression will build an IN statement (Figure 5) based on the selected values from the Tree Interface. If the returned values from the Tree tool are, as in Figure 4:
The expression will build a string starting with: [Key] in ("
The REGEX expression looks for all line breaks (or newlines) in the connection 1 ([#1]) value, denoted by \n, and replaces each occurrence with: ","
And then finishes the expression with: ")
The final expression looks like this: [Key] in ("111","26")
This will replace our template expression when the app is run.
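The way the Action tool formula assembles the IN statement can be sketched in Python (the function name is hypothetical; the replacement mirrors the REGEX_Replace call above):

```python
import re

def build_filter_expression(tree_values: str) -> str:
    # The Tree Interface returns selected Key Values separated by
    # line breaks; swap each break for '","' and wrap the result.
    inner = re.sub(r"\n", '","', tree_values)
    return '[Key] in ("' + inner + '")'

print(build_filter_expression("111\n26"))  # [Key] in ("111","26")
```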
To test your Action tool expression, use the Interface Designer’s Debug tool. Open the Interface Designer from the View menu in Alteryx. This Community article goes into more detail on how to use the Debug tool.
With that, the tree tool is complete. The Key Values from the selected Tree Interface items are used to select the desired records from the dataset for further processing.
Add a Browse tool to the True output of your Filter to view the results when the app is run. Be sure to configure your app to show the results of the Browse when the app completes. This setting can be found in the Interface Designer, under Properties.
As with many things in Alteryx, there is usually more than one way to accomplish a task. In the attached App example workflow, there is an additional method to take the results from the Tree tool and query the pet dataset.
All related workflows are attached to the post and saved in Alteryx 10.0 format.
We frequently get questions about how to flag rows in a data set that are missing values in any column.
In the example below, we will demonstrate how to add a new column to the data that gives a count of null or empty values per row. The end result will give a count of 1 to each of the highlighted rows in the image. The new column can be used in a filter tool to isolate rows of data that have missing values.
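Outside of Designer, the same per-row count can be sketched in Python with pandas; the column names and sample values below are hypothetical:

```python
import pandas as pd

df = pd.DataFrame({
    "Name": ["Ann", None, "Cy"],
    "City": ["Denver", "Boulder", ""],
})

# Count null or empty cells in each row, mirroring the new column
# the workflow adds; rows with a count above 0 can then be filtered.
df["MissingCount"] = df.apply(
    lambda row: sum(pd.isna(v) or v == "" for v in row), axis=1
)
flagged = df[df["MissingCount"] > 0]
```

The `flagged` frame corresponds to the True output of the Filter tool that isolates rows with missing values.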
The RegEx tool is kind of like the Swiss Army Knife of parsing in Alteryx; there are a whole lot of ways you can use it to do things faster or more effectively, but even if you just use the blade it's still immensely useful. Sometimes that's all you need, but if you do take the time to figure out how to use a few other tools in that knife, you'll start to see that there isn't much you can't do with it .
The Transpose tool pivots data for all selected fields. The column headers are listed in the name field and the corresponding data items are listed in the value field. You can also select key fields which will remain unchanged through the transformation. The transpose tool is often used in conjunction with the Cross Tab tool, which essentially works in the opposite direction.
Occasionally you may see one of these errors from the Join Multiple tool. It is a result of Cartesian joins.
A Cartesian join is when you join every row of one table to every row of another table. You can also get one by joining every row of a table to every row of itself. A Cartesian join is very CPU intensive.
For example, if you have four files and each file has the same ID in it twice, that means it will join 2*2*2*2 times on the ID (the field on which you're joining is the key referenced in the error; in this example, it's Field1, and the duplicated value is 12345). The same can be caused by multiple nulls in each file.
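The arithmetic behind the record explosion is easy to check with a tiny sketch (the counts come from the example above):

```python
# Each of four files contains the same join key twice, so an equality
# join on that key yields 2 * 2 * 2 * 2 = 16 rows for that key alone.
rows_per_file = [2, 2, 2, 2]
combined = 1
for n in rows_per_file:
    combined *= n
print(combined)  # 16
```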
After your data prep and investigation, and when you know your data are correct, your choices on how to handle Cartesian joins include:
Allow multidimensional joins: The multidimensional join will occur with no error or warning reported.
Warn on multidimensional joins of more than 16 records: A warning will be reported in the Results window that a multidimensional join has occurred.
Error on multidimensional joins of more than 16 records: An error will be reported in the Results window that a multidimensional join has occurred and downstream processing will stop.
How To: Make Left, Right, and Full outer joins
I noticed that a lot of customers in the Alteryx forums were confused about how to make left, right, and full outer joins in Alteryx Designer. The goal of this KB entry is to make it easy and clear.
This is an aggregation in one place of information spread out in multiple sources: Designer help, various discussions on our forums, and public gallery.
First - what does the Join Tool do?
For now, the join tool does a simple inner join with an equal sign. That's it!
In particular:
• The R output anchor is NOT the result of a right outer join. The letter R may suggest it, but it is not.
• Similarly, the L output anchor is NOT a left outer join. That got me at first too!
See the table below that shows what each output anchor of the Join tool does:
Look at the online help to learn more: https://help.alteryx.com/current/Join.htm
Second - Can I do outer joins with the Join Tool then?
Yes! You have to combine it with a Union Tool:
- Join Tool + Union of L&J output anchors = Left outer Join,
- Join Tool + Union of R&J output anchors = Right outer Join,
- Join Tool + Union of R+L+J output anchors = Full Join.
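The same composition can be sketched outside Designer with pandas, where the J, L, and R anchors map to the inner match and the unmatched left/right rows (the frames below are hypothetical):

```python
import pandas as pd

left = pd.DataFrame({"id": [1, 2], "l": ["a", "b"]})
right = pd.DataFrame({"id": [2, 3], "r": ["x", "y"]})

inner = left.merge(right, on="id", how="inner")    # J anchor
left_only = left[~left["id"].isin(right["id"])]    # L anchor
right_only = right[~right["id"].isin(left["id"])]  # R anchor

# Join + Union of the anchors reproduces the outer joins:
left_outer = pd.concat([inner, left_only], ignore_index=True)
full_outer = pd.concat([inner, left_only, right_only], ignore_index=True)
```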
Look at the online help to learn more: https://help.alteryx.com/current/Join.htm
Third - Download "Advanced Join" tool from Public Gallery
Now, Alteryx is very flexible as you can create your own tools or macros. You can find a lot of these on the public gallery.
There is a tool called "Advanced Join" that Neil Ryan created that works well for more complex joins (including left, right, full/cross etc). You can also apply a filtering condition.
You can find it here: https://gallery.alteryx.com/#!app/Advanced-Join/547f8df96ac90f0f2ca5e439
Download it and follow the instructions to install it. It will show up first in the "Join" category of your Designer tool palette.
Fourth - Use "Multiple Join" for Full outer join
In Designer, click the "Open Example" link that appears when you select the "Multiple Join" tool for detailed examples.
As Alteryx improves the "Join Tool", we might see an easier way to do outer joins. But for now these are the few options you have.
You monitor the mileage of multiple trucks as they deliver shipments over the course of a week and record additional information regarding each truck in a file (Truck Metrics). Each truck’s cumulative mileage per day is recorded in a separate file (Truck Mileage). Your goal is to update the mileage-related fields in Truck Metrics with the values recorded in Truck Mileage. Note: today is Tuesday, so only the fields for Monday and Tuesday will be updated in the Truck Metrics file.
Manually Select Fields to Update
Whether the data is uniquely identified by a Truck Number (or Record ID) or identically ordered in both files, data from Truck Metrics (the table to be updated) and Truck Mileage (the data used for updating) can be Joined together. Then, using the Select functionality within the Join tool, I can manually choose the fields from Truck Mileage that I want to use to replace the fields that need to be updated (Figure 1).
Figure 1: Fields from Truck Mileage (Yellow) replace fields from Truck Metrics (Blue). Note that fields in yellow are selected while fields in blue have been deselected. Fields that need to be included from Truck Metrics (Red) remain selected.
Fantastic! A simple, straightforward way to update fields! But, as any analyst knows, working with data is rarely simple or straightforward. What if you’re dealing with 20 fields that need to be updated, not just 2? In that case, manually selecting fields to update is not only tedious but also error-prone. For these types of situations, I recommend a process that allows for a more dynamic approach.
'Verticalize' the Data to Dynamically Update Fields
Transposing, or ‘verticalizing’ data, allows for a more dynamic workflow when you have unknowns in your processing. In a scenario such as this one, you may have an unknown or changing number of fields that will need to be updated in Truck Metrics. Using this approach, we’ll first Transpose both data sets to put each Field Name and its associated value in a single row (Figure 2).
Figure 2: The data is transposed from Truck Mileage and Truck Metrics. The highlighted fields in Truck Metrics indicate which fields will be updated, as the same fields exist in Truck Mileage.
Then, we’ll Join our datasets on two fields: Truck Number AND Name. This ensures that fields in Truck Mileage match the correct fields in Truck Metrics, assuming the field names in both tables are the same. The only selecting we’ll have to do is to make sure all fields from the Right data source (in this case, Truck Metrics) are deselected (Figure 3). This way, wherever Truck Numbers and field names match, the updated values will be used.
Figure 3: The Joined fields indicate the fields that exist in both Truck Mileage and Truck Metrics. Fields in yellow (Truck Mileage) are selected to reflect updated values in downstream tools. Fields in blue (Truck Metrics) are deselected.
Note that any unmatched fields have fallen out of the Right side of the Join:
To add them back into the data stream, simply Union the Center and Right Joins together, setting the tool to “Auto Configure by Name”. Then, to rearrange the data into its original format, use the Cross Tab tool (Figure 4). And, voila! Your data is updated!
Figure 4: The updated fields are highlighted in yellow. All other fields originally included in Truck Metrics are included in the final results as well.
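Outside of Designer, the transpose-join-union-crosstab sequence can be sketched with pandas; the miniature Truck tables and field names below are hypothetical:

```python
import pandas as pd

# Truck Metrics (to be updated) and Truck Mileage (the new values).
metrics = pd.DataFrame({"Truck": [1, 2], "Mon": [10, 20],
                        "Tue": [0, 0], "Driver": ["Ann", "Bo"]})
mileage = pd.DataFrame({"Truck": [1, 2], "Mon": [11, 21], "Tue": [12, 22]})

# 'Verticalize' both tables: one row per (Truck, field name, value).
m_long = metrics.melt(id_vars="Truck", var_name="Name", value_name="Old")
u_long = mileage.melt(id_vars="Truck", var_name="Name", value_name="New")

# Join on Truck AND Name; unmatched fields (Driver) keep their old value,
# which stands in for unioning the Right join output back in.
joined = m_long.merge(u_long, on=["Truck", "Name"], how="left")
joined["Value"] = joined["New"].fillna(joined["Old"])

# Cross-tab back to the original wide layout.
updated = joined.pivot(index="Truck", columns="Name",
                       values="Value").reset_index()
```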
Recently a couple of questions came across the Customer Support desk asking how a fiscal calendar could be incorporated into a workflow. Alteryx doesn’t have a dedicated tool to create one, but it does have a number of tools that can be combined to build a fiscal calendar. Here is an example of how this can be done.
1. Determine the start and end dates of the fiscal calendar and enter them into Text input tool, each on an individual row
2. Connect a TS Filler tool to generate dates between the start and end dates
3. Add a Select tool to remove unnecessary fields
4. Add a Multi-Row Formula Tool to create a Day of Week field by assigning each row a day of the week from 1 to 7
5. Add another Multi-Row Formula Tool to calculate the fiscal week. Whenever the Day of Week is 1 add a value of 1 to the previous row’s fiscal week value. This will create a running week value for the entire year
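The running fiscal-week logic from steps 4 and 5 can be sketched in Python; the start and end dates are hypothetical, and each loop pass mirrors one row of the workflow:

```python
from datetime import date, timedelta

start, end = date(2024, 2, 5), date(2024, 2, 18)  # step 1: fiscal bounds

rows, day_of_week, fiscal_week = [], 0, 0
for i in range((end - start).days + 1):           # step 2: one row per date
    day_of_week = day_of_week % 7 + 1             # step 4: cycle 1..7
    if day_of_week == 1:                          # step 5: a new fiscal week
        fiscal_week += 1                          #   begins on each day 1
    rows.append((start + timedelta(days=i), day_of_week, fiscal_week))
```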
An example workflow is attached. It also shows how the fiscal month and week of month may be added. (The workflow is in Alteryx 10.6 format.)
The Report Map Tool (master it here) allows you to create thematic maps by selecting a "Theme" field on the Data tab and then further defining the theme on the Layers tab, for example:
The above example creates a map of Texas showing average annual rainfall totals where orange is the least rainfall and blue the most:
Pretty nice, right? But what if you want to change the map and instead of applying the theme to the fill color for the polygons, you want to apply the theme to the outline and just show that?
That is a little trickier because, while the Report Map Tool allows you to adjust the outline color and size of your polygons, it doesn't automatically apply the theme to them, so a workaround needs to be built.
You could feed in each polygon as an individual layer, but that is difficult to manage - keeping the color gradient consistent and making sure the layers are in the right order. And what if a new rainfall range is introduced? You might have to adjust several layers to account for it.
A better option is to turn the polygon outlines into polylines themselves. That allows you to apply a theme directly to the outlines.
In order to do this, we will use the following tools:
A RecordID is assigned so that we can pull the data apart and put it back together again.
The polygons are split into detailed regions using the Poly-Split tool and rows flagged as 'holes' are removed.
The polygons are split into points.
Those points are reassembled into a sequence polyline. To create the polyline, the data is grouped by the RecordID to keep each polyline separate. (A polyline contains multiple line segments; like a line, it has one start point and one end point, but it can have any number of points in between. A polyline can be quite complex, as in the case of road systems or rivers.)
The sequence polylines are joined back to the original data set.
The reporting tools are used to create the maps, with rainfall range as the thematic field.
With that workaround you can create a map that looks like this:
For details on tool configurations, see the attached workflow.
The Tile Tool is one of those Designer tools that appears to be nothing spectacular, but secretly simplifies a number of operations that would otherwise be labor intensive. The Tile Tool will never be as important as the Input Data Tool, as versatile as the Download Tool, make something as beautiful as the Render Tool, or be as totally rad as the Lift Chart (I mean just look at that icon, bro). Basically, if the Designer was the Beatles, the Tile Tool would be Ringo Starr. But hey, even though Ringo wasn’t the most important, handsome, or charismatic of the bunch, he still made an impact in the world of music. And that’s just what the Tile Tool is about – impact. Check out the ways the Tile Tool can impact your Designer use and significantly reduce the difficulty of some operations:
Unlike snowflakes, duplicates can actually exist when it comes to data. To distinguish whether a record in your data is unique or a duplicate, we have an awesome tool called the Unique Tool that will actually turn your data into a unique snowflake.
Thanks to everyone that joined us in Nashville for our 2019 Tips and Tricks "Country Edition" tour and special thanks to everyone who submitted tips for our Tips + Tricks Book.
Enclosed you will find 91 pages full of Tips + Tricks that will help you save clicks in Alteryx. The Tips and Tricks book covers our fan-favorite tips for saving time when developing workflows and our best tips for organization. Use the book to learn everything from workflow design to organization, optimization, and our best time savers.
We hope to see you all at our next Tips and Tricks session.
@MargaritaW , @JessicaS & @HenrietteH
Your Customer Support Engineering Team
How to check for encoding or formatting issues with Excel worksheets
When Excel worksheets are used in Alteryx Designer, sometimes Designer has difficulty reading the data. Viewing the worksheets in XML format is a good way to check for encoding or formatting issues as all of the encoding and formatting is shown.
Product - Alteryx Designer, Gallery, or Server and an Excel file used in Alteryx workflows or applications
To view the XML, open the Excel workbook in 7-Zip or another application used for .zip files. Navigate to C:\folder_path_of_the_workbook\Excel_file_name.xlsx\xl\worksheets. Next, click on the sheet name and the XML for the worksheet is shown.
Notice that the type of encoding is listed at the top, followed by the formatting schemas and then the formatting for the worksheet. If needed, you can change the extension of the worksheet from .xml to .txt, edit the encoding or formatting to match a working example, and then change the extension back to .xml.
To edit, right click on one of the sheets, select Rename, and change the file extension to .txt. Edit as a text file, and then Rename again to change the extension back to .xml.
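The same inspection can be scripted: an .xlsx file is a ZIP archive, so Python's zipfile module can read the worksheet XML directly. The snippet below builds a tiny stand-in archive in memory; for a real workbook, open the .xlsx path instead (the sheet content here is a minimal hypothetical example):

```python
import io
import zipfile

# Stand-in for a real workbook: an .xlsx is a ZIP with worksheet XML
# under xl/worksheets/. To inspect a real file, replace the buffer
# with zipfile.ZipFile("your_workbook.xlsx").
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr(
        "xl/worksheets/sheet1.xml",
        '<?xml version="1.0" encoding="UTF-8"?><worksheet/>',
    )

with zipfile.ZipFile(buf) as zf:
    for name in zf.namelist():
        if name.startswith("xl/worksheets/") and name.endswith(".xml"):
            xml = zf.read(name).decode("utf-8")
            print(name)  # the encoding declaration sits at the top
```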
If the Excel workbook is created automatically, such as being created by a script, information needed for Designer to open the worksheets may be missing. Excel can sometimes add this information when opening the worksheet, but Designer is not able to add default encoding and formatting if it is missing. If you come across an issue, it is a good idea to try adding example data directly in Excel and saving as Excel should automatically add the needed encoding and formatting. Afterward, you can compare this with the broken file, and update it to match the working example.
There are other possible reasons why Designer may not be able to open an Excel spreadsheet. However, checking for encoding and formatting issues in XML view will catch many of the problems with opening Excel data in Designer, so it is a good place to start.
Related reading: Character encoding, Microsoft Office XML formats, Structure of a spreadsheet
Ever wish you had an easy way to read in data to Alteryx Designer from your Quickbooks Online account? Well, now there's a tool for that.
This article will describe how to make an API call to QuickBooks Online through the QuickBooks Input Tool attached to this article.