Alteryx Designer

Definitive answers from Designer experts.
Tableau Function Translation Guide
View full article
This article covers how to use Run Command events to run a workflow.
View full article
Far more than just a window to your data, the Browse Tool has a catalog of features to help you view, investigate, and copy or save data at any checkpoint where you place it. That insight into your data at any point in your blending gives valuable feedback that often speeds workflow development and makes it easier to learn tools by readily visualizing their transformations. Be equipped, and browse through the catalog of useful applications below!
View full article
The Filter Tool, which is part of the Preparation tool category, separates your data into two output streams, True and False, based on a basic filter or a custom expression.
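For example, a basic filter tests a single column against a value, while a custom expression can combine several conditions. A minimal sketch of a custom Filter expression (the field names are illustrative, not from the article):

[Region] = "West" AND !IsNull([CustomerID])  // matching rows go to the True anchor, everything else to False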
View full article
Characters that are not on a standard English keyboard may need translation into Unicode or a language-specific code page for Designer and database drivers to read them correctly. Characters with incorrect encoding will often appear as boxes or question marks in the Designer Results screen and error messages. Unicode characters take more bytes than English ASCII characters. Changing the column type and increasing the column size may be needed. In Designer, the column size is the number of characters, not the number of bytes.
View full article
How To: Create a Calgary Database

A Calgary Database is a proprietary Alteryx format that lets users quickly query a file of millions of records without having to read in all the data. A Calgary Database is created with the Calgary Loader tool, which builds a database from any type of input while letting you select which fields to index. Calgary Databases are useful for running ad-hoc queries against a large dataset, e.g. ConsumerView data.

Prerequisites

Alteryx Designer (any version)

Procedure

1. Bring the data to be written to the Calgary Database into Alteryx and transform it until it is in the desired format, keeping in mind opportunities to standardize values so the indexes work better. For example, are all your ZIP codes properly and consistently formatted? Starting with Alteryx 5.0, Calgary indexes are no longer case sensitive, so "CALIFORNIA" and "California" are treated the same. However, if some of your data uses the full state name and some uses the state abbreviation, and you plan to use state as an index, pick one format and use it consistently. You might also want to add flags that other users might find useful for querying the data, for example a flag for the current month's (or quarter's or year's) data, or a custom region such as "NorthEast", "South", or "Midwest" (a sketch of such a flag appears after the Additional Resources list below).

2. In the Calgary Loader tool, set the location of the Calgary Database in the "Root File Name" box. The tool will create a .cydb file (the data file), multiple .cyidx files (the index files), and an __Indexes.xml file that contains the index values. Since it writes out multiple files, a best practice is to dedicate a folder to your Calgary Database. You cannot append to a Calgary Database; to add records, rebuild it.

3. Use the "Data" and "Index" columns to select which fields to include as data fields and which to index. Typically, all fields are included as data, but only certain ones are indexed, as each index takes time to create.

4. For index fields, select the index type. "High Selectivity" is used for data with many different possible values, such as ZIP codes. Select "Low Selectivity" for data with fewer unique values, such as State or Region; "Low Selectivity" also creates a drop-down option for the Calgary Input tool. By default, the index type is "Auto". In Auto mode, Alteryx looks at the first 1 million rows of data and decides whether the index should be high or low selectivity; any field with more than 550 unique values is set to high selectivity. If the data changes after the first 1 million rows, Alteryx might select the incorrect index type, and this option can also take longer to process since Alteryx has to examine 1 million rows of data for each index.

5. Use the Calgary Input tool to read in data from a Calgary Database as described here: Querying a Calgary DB / File to Select and Limit Input Records. Did you know that you can read the .cydb file with the regular Input Data tool, the same as a .yxdb file? You just won't be able to query any of the indexed fields.

Additional Resources

Using Custom Lists to query Calgary Indexes in Apps and Macros
Querying a Calgary DB / File to Select and Limit Input Records
Building a Calgary Database with "Searchable" Fields
Calgary Databases
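To illustrate the custom region flag mentioned in step 1, here is a minimal sketch of a Formula tool expression. The field name [State] and the (abbreviated) state groupings are illustrative assumptions, not from the article:

// Hypothetical Formula tool expression building a [Region] flag from a state abbreviation
IF [State] = "NY" OR [State] = "NJ" OR [State] = "PA" OR [State] = "CT" OR [State] = "MA" THEN "NorthEast"
ELSEIF [State] = "OH" OR [State] = "MI" OR [State] = "IL" OR [State] = "WI" OR [State] = "MN" THEN "Midwest"
ELSE "Other"
ENDIF

Because a flag like this has few unique values, it is a natural candidate for a "Low Selectivity" index, which also gives Calgary Input users a convenient drop-down to filter on.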
View full article
How to prepare two inputs for Fuzzy Match Merge Mode

Here is a quick reference guide that will help you prepare two different data sources for use with Fuzzy Match Merge Mode. This mode only compares data from different sources, and it is often used to merge new data back with the primary data source. The Fuzzy Match Tool accepts only one input, so you will need to prepare the data first.

1. Join the data together; results that match exactly will go through the J output anchor and do not need fuzzy matching.

2. The remaining data from the L and R output anchors will need record IDs for matching. Add record IDs for the first input.

3. Use the Formula Tool to create a new column for each input showing the source. A function with just a text string in quotes will add that text to each record. Using Merge Mode, the Fuzzy Match Tool will only compare records with a different source.

4. Ensure the correct alignment of the columns in the Union Tool by using the Manually Configure Fields option.

5. Sort by the Source column so that all the Input A records with a record ID are placed first in the list.

6. Record IDs for the second input are added after the join using an expression so that the IDs automatically start sequentially after the number of records in the first input. (A sketch of this expression, and of the source column from step 3, follows below.)

7. In the Fuzzy Match configuration screen, use the new Record ID and Source ID fields, as well as the fuzzy match fields that were stacked together with the Union Tool. Here you can see all fields needed for Fuzzy Match Merge Mode configuration are available.

The example workflow is attached. Once the Fuzzy Match merge is complete, there are many options for completing the workflow, such as adding a Unique Tool to remove duplicates and joining the matching records back with the original data. Please see the articles and training videos in the Additional Resources section for examples and more information.

Additional Resources

Tool Mastery Fuzzy Match
Fuzzy Match Tool Alteryx Help Page
Alteryx Academy video training session: Fuzzy Matching for Beginners
Alteryx Academy video training session: Fuzzy Matching Intermediate Users
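As a minimal sketch of the expressions described in steps 3 and 6 (field names such as [RecordID] and [CountInputA] are illustrative assumptions; the attached workflow may name them differently):

// Step 3 - Formula tool on each input: tag the source with a literal text string
"Input A"

// Step 6 - Formula tool on the second input: continue Record IDs after Input A's last ID,
// assuming the record count of Input A has been appended as [CountInputA]
[RecordID] + [CountInputA]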
View full article
The Select Tool within Alteryx Designer is the equivalent of your high school sweetheart: always there when you needed them, and they helped you find out more about yourself. The Select Tool does exactly this by showing you the data type and structure of your data, but it also gives you the flexibility to change aspects of your dataset.
View full article
Very large numbers don't behave the way users expect. Often, it's better practice to handle such a big number as a String.
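As a quick illustration of why: Double fields (and formula arithmetic) only hold about 15-16 significant digits, so a very long identifier can silently change in its final digits. A minimal sketch, with [AccountNumber] as an illustrative field name:

ToString([AccountNumber])  // keeping the value as text preserves every digit
ToNumber([AccountNumber])  // converting a 19-digit identifier to a number risks losing precision past ~15 digits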
View full article
Issue

Saving or running the workflow in the Designer causes the following error to occur:

An Unhandled Exception occurred. A previous action may not have completed successfully. Click OK to send the development team the error log so that we can fix this error in a subsequent release.

Environment

Alteryx Designer

Diagnosis

Checking the logs from %PROGRAMFILES%\Alteryx\ErrorLogs\AlteryxGUI shows the following error:

Alteryx Designer x64 - 2019.2.5.62427
Type: System.ArgumentException
Message: Cannot have ']]>' inside an XML CDATA block.
Source: System.Xml
OS Version: Microsoft Windows NT 6.2.9200.0
OS Is x64 Capable: True
Selected Plugin: LockInGui.LockInSelect.LockInSelect
Processor: Intel(R) Xeon(R) CPU E5-2676 v3 @ 2.40GHz
Private Memory: 380923904
--------------------------------------------
at System.Xml.XmlTextWriter.WriteCData(String text)
at System.Xml.XmlElement.WriteElementTo(XmlWriter writer, XmlElement e)
at System.Xml.XmlElement.WriteContentTo(XmlWriter w)

Cause

The use of "]]>" in the following tools (not an exhaustive list) causes the error:
Formula tool
Filter tool
Report text tool
R tool
Python tool

Solution

Avoid using "]]>" in the tool, or escape the ">" as "&gt;".
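For expression-based tools such as the Formula or Filter tool, one workaround (my own sketch, consistent with the solution above but not quoted from it) is to assemble the problem string by concatenation, so the literal three-character sequence never appears in the workflow's saved XML:

// Builds "]]>" at run time without storing the literal sequence in the saved expression
"]]" + ">"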
View full article
Scenario: You have multiple fields in your data that correspond to customer addresses. Some customers may have more than one address listed in their record. However, you want to whittle that list down to one address per customer: the first non-null address found when moving sequentially across a set of fields that contain address data.

For our example, we have three fields of data related to addresses: Address1, Address2 and Address3. The preferred field of data to use is Address1. However, if Address1 does not contain data, then we'll use the data in Address2. If both Address1 and Address2 do not have data, then we'll use the data in Address3. The final output should contain a single list of the addresses highlighted in yellow in Figure 1.

Figure 1: The final output will contain a single list of the data highlighted in yellow.

Method 1: Write a Conditional Statement

The most common approach to this type of problem is to craft a conditional statement that selects data based on a set of criteria (in this case, order). For our data, it would look something like this:

IF IsNull([Address1]) AND IsNull([Address2]) THEN [Address3]
ELSEIF IsNull([Address1]) THEN [Address2]
ELSE [Address1]
ENDIF

However, what if I had 20 fields of addresses instead of 3? Just getting that statement with three fields took me too long to write out! If you do have 20 fields, you might want to start typing that Formula now....

IF IsNull([Address1]) AND IsNull([Address2]) AND IsNull([Address3]) AND IsNull([Address4]) AND IsNull([Address5]) AND IsNull([Address6]) AND IsNull([Address7]) AND IsNull([Address8])...

You get the idea. And now you're thinking, "You're going to tell me there's a better way, right?!?" Well, yes...I am!

Method 2: Data Manipulation

An alternative method of solving this problem is to manipulate the data using the Transpose, Filter and Sample tools. I'll share some advice from @RodL here: "If you want to make something…truly dynamic, then the 'best practice' when you are dealing with an unknown number of columns is to 'verticalize' the data". In our case, we may know the total number of columns of address data we have; what we don't know is which column the data we want is actually in.

Following @RodL's suggestion, we'll 'verticalize' the addresses using the Transpose tool. This stacks the addresses for each customer in order of the fields in the table (Figure 2). We'll use the Client ID (or Record ID, if you've chosen to add one) as our Key Field and the fields that contain address data as our Data Fields.

Figure 2: All address fields per Record ID (or Client ID) are stacked vertically in order of the field sequence.

Since Null values are not usable records for our purposes, we'll use a Filter to keep only the usable data (non-Null values); see the sketch at the end of this article. Now that our usable data is stacked vertically in order of field selection, we can Sample the first record from each Record ID (or Client ID) group. We'll configure the Sample tool to select the First N (where N = 1) Records per group (Group by Record ID or Client ID).

Figure 3: Sample the first record from every Record ID or Client ID group.

After some data clean-up with a Select tool, we're left with a column of the selected address for each of our customers.
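As a minimal sketch of the Method 2 Filter step (the Transpose tool outputs Name and Value columns; the exact expression in the attached example may differ), the Filter tool's custom expression could be:

// Keep only transposed rows where the address Value actually contains data
!IsNull([Value]) AND Trim([Value]) != ""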
View full article
Scenario: You monitor the mileage of multiple trucks as they deliver shipments over the course of a week and record additional information about each truck in a file (Truck Metrics). Each truck's cumulative mileage per day is recorded in a separate file (Truck Mileage). Your goal is to update the mileage-related fields in Truck Metrics with the values recorded in Truck Mileage. Note: today is Tuesday, so only the fields for Monday and Tuesday will be updated in the Truck Metrics file.

Method 1: Manually Select Fields to Update

Whether the data is uniquely identified by a Truck Number (or Record ID) or identically ordered in both files, data from Truck Metrics (the table to be updated) and Truck Mileage (the data used for updating) can be Joined together. Then, using the Select functionality within the Join tool, I can manually choose the fields from Truck Mileage that I want to use to replace the fields that need to be updated (Figure 1).

Figure 1: Fields from Truck Mileage (yellow) replace fields from Truck Metrics (blue). Note that fields in yellow are selected while fields in blue have been deselected. Fields that need to be included from Truck Metrics (red) remain selected.

Fantastic! A simple, straightforward way to update fields! But, as any analyst knows, working with data is rarely simple or straightforward. What if you're dealing with 20 fields that need to be updated, not just 2? In that case, manually selecting fields to update is not only tedious but also error-prone. For these types of situations, I recommend a process that allows for a more dynamic approach.

Method 2: 'Verticalize' the Data to Dynamically Update Fields

Transposing, or 'verticalizing', the data allows for a more dynamic workflow when you have unknowns in your processing. In a scenario such as this one, you may have an unknown or changing number of fields that will need to be updated in Truck Metrics. Using this approach, we'll first Transpose both data sets so that each Field Name and its associated value sit in a single row (Figure 2).

Figure 2: The data is transposed from Truck Mileage and Truck Metrics. The highlighted fields in Truck Metrics indicate which fields will be updated, as the same field exists in Truck Mileage.

Then, we'll Join our datasets on two fields: Truck Number AND Name. This ensures that fields in Truck Mileage will match to the correct fields in Truck Metrics, assuming the field names in both tables are named the same way. The only selecting we'll have to do is to make sure all fields from the Right data source (in this case, Truck Metrics) are deselected (Figure 3). This ensures that, where Truck Numbers and field names match, the updated values will be used.

Figure 3: The Joined fields indicate the fields that exist in both Truck Mileage and Truck Metrics. Fields in yellow (Truck Mileage) are selected so that downstream tools reflect the updated values. Fields in blue (Truck Metrics) are deselected.

Note that any unmatched fields have fallen out of the Right side of the Join. To add them back into the data stream, simply Union the Center and Right Join outputs together, setting the tool to "Auto Configure by Name". Then, to rearrange the data into its original format, use the Cross Tab tool (Figure 4). And, voila! Your data is updated!

Figure 4: The updated fields are highlighted in yellow. All other fields originally included in Truck Metrics are included in the final results as well.
View full article
Recently a couple of questions came across the Customer Support desk asking how a fiscal calendar could be incorporated into a workflow. Natively, Alteryx doesn't have a tool to create one, but it does have a number of tools with which to build one. Here is an example of how this can be done.

Simple Calendar

1. Determine the start and end dates of the fiscal calendar and enter them into a Text Input tool, each on an individual row.

2. Connect a TS Filler tool to generate dates between the start and end dates.

3. Add a Select tool to remove unnecessary fields.

4. Add a Multi-Row Formula tool to create a Day of Week field by assigning each row a day of the week from 1 to 7.

5. Add another Multi-Row Formula tool to calculate the fiscal week: whenever the Day of Week is 1, add 1 to the previous row's fiscal week value. This creates a running week value for the entire year. (A sketch of both Multi-Row expressions follows below.)

That's it!

An example workflow is attached. The example workflow also shows how the fiscal month and week of month may be added. (Workflow is in Alteryx 10.6 version.)
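As a minimal sketch of steps 4 and 5 (the field names and exact expressions in the attached workflow may differ), the two Multi-Row Formula expressions could look like this, assuming "Values for Rows that don't Exist" is set to 0:

// Step 4 - [Day of Week]: cycle 1 through 7, restarting after 7
IF [Row-1:Day of Week] >= 7 THEN 1 ELSE [Row-1:Day of Week] + 1 ENDIF

// Step 5 - [Fiscal Week]: start a new week whenever Day of Week resets to 1
IF [Day of Week] = 1 THEN [Row-1:Fiscal Week] + 1 ELSE [Row-1:Fiscal Week] ENDIF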
View full article
Unlike a snowflake, it is actually possible for duplicates to exist when it comes to data. To distinguish whether a record in your data is unique or a duplicate, we have an awesome tool called the Unique Tool that will actually turn your data into a unique snowflake.
View full article
There are a couple of different whitespace situations you might get yourself into, but this article has you covered in all of them!
View full article
Have you ever wanted to do a Cross Tab, but needed the results in a particular order? You've probably discovered that Cross Tab will dutifully change your data to column headings and keep the rest of the data correlated, but that the output columns will be arranged in alphabetical order. It's as if a Select tool with the "Sort on Field Name" option had been thrown into your workflow without you putting it there.
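One common workaround (my own sketch; the full article may take a different approach) is to prefix the values that will become column headings with a zero-padded sort key before the Cross Tab, so that alphabetical order matches the order you want, and then strip the prefix afterwards with a Dynamic Rename tool. A minimal Formula tool sketch, assuming illustrative [SortOrder] and [Name] fields:

// Build a heading like "003_Quarter1" that sorts correctly even when ordered alphabetically
PadLeft(ToString([SortOrder]), 3, "0") + "_" + [Name]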
View full article
Be sure to first understand that different people and organizations have different definitions of what they mean by rounding. Once you are clear on what you need, you can use Alteryx to round your numbers.

Example 1: To round up to a given place, divide by that place, use CEIL to round up, then multiply back by the place. For example, to take 324 to 400:

CEIL([Number] / 100) * 100

Example 2: Ceiling and floor only go to the nearest integer. The ROUND(x, multiple) function allows you to specify the multiple to round to, very similar to Excel, but you can't specify the direction; for example, ROUND(3425.123, 1000) would return 3000.

Example 3: Please see the attached v10.0 workflow, which is more dynamic. The Text Input identifies the mapping for the conversion, joined with the length of the source input; then an expression is applied in the Formula tool.

The final output.

More on rounding.
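For completeness, a small sketch of the opposite direction, assuming the same [Number] field as Example 1: rounding down to a place works the same way with FLOOR.

FLOOR([Number] / 100) * 100  // takes 324 down to 300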
View full article
Let's take a look at this data set that needs to be combined into one field for reporting purposes:

But it shouldn't end up looking like this:

That is no better for reporting purposes than the data we started with, because we only want to see areas that need improvement, and we don't want to just combine values separated by commas.

So how can we get it to look like this?

A formula such as this one could work:

If Contains([ProdDessert], "Improve") then [ProdDessert] else "" endif + " & " +
If Contains([ProdSupplies], "Improve") then [ProdSupplies] else "" endif + " & " +
If Contains([ProdFood], "Improve") then [ProdFood] else "" endif + " & " +
If Contains([ProdFoodSvc], "Improve") then [ProdFoodSvc] else "" endif

But that's a lot of typing! And it will require cleaning up of "&" and spaces where they aren't needed.

So let's look at a dynamic way of doing this:

First, using a Transpose tool, the columns are turned into rows and the data is filtered for rows that contain the word "Improve" (a sketch of that Filter expression follows at the end of this article).

Then, using a Summarize tool, the remaining rows are concatenated together using a space as the delimiter.

Lastly, some cleanup is done in a Formula tool.

And the data is ready for reporting!

Also, take a look at the attached workflow that walks through these steps.
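As a minimal sketch of that Transpose-then-Filter step (the Transpose tool outputs Name and Value columns; the attached workflow may word this slightly differently), the Filter tool's custom expression could be:

// Keep only transposed rows whose value mentions improvement
Contains([Value], "Improve")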
View full article
A flat file was parsed into a number of fields, including one field that needs to be converted to a numeric data type.

It looks like this in the data:
000000437
000012587

The resulting number is supposed to have four decimal places, so just applying ToNumber([Field]) won't work: it will convert 000000437 to 437, but the required output is 0.0437.

As always, there are a couple of different ways of going about this:

1. Use the Substring(x, start, length) function, which returns the substring of [x] starting at [start] and stopping after [length]. This way, the string can be broken into the first 5 characters and the last 4 characters, with a decimal point inserted between the two. Then the ToNumber() function can be used as usual.

2. Use the ToNumber(x, bIgnoreErrors, keepNulls) function creatively. ToNumber will return 437 for the first row of data and 12587 for the second one. However, if the number is then divided by 10,000, it will return 0.0437, which is the correct result.

3. Use a Regular Expression. Regex can recognize a pattern and replace it with another value. In this case, regex can find the last 4 characters of the string, (\d{4})$, and replace them with a decimal point plus the pattern it matched: \.$1

Sketches of all three expressions follow below; also take a look at the attached workflow that illustrates the solutions.
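A minimal sketch of how the three approaches might look as Formula tool expressions (the field name [Field] is assumed; the attached workflow may differ):

// 1. Substring: split the 9 characters into 5 + 4 and insert the decimal point
ToNumber(Substring([Field], 0, 5) + "." + Substring([Field], 5, 4))

// 2. ToNumber, then shift four decimal places
ToNumber([Field]) / 10000

// 3. Regex: insert a decimal point before the last four digits, then convert
ToNumber(REGEX_Replace([Field], "(\d{4})$", ".$1"))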
View full article
Let's say you have a dataset with products that aren't sold very often (not every month), but you would like to create data points for every month and fill in quantity and amount as zero for reporting purposes.

This is the perfect time to use the Generate Rows tool. It will allow you to take the earliest month on the dataset and increment it by one month (generating a new row each time) until it has reached the latest month or the current month you are in.

To do so, first use a Summarize tool to identify both the min (earliest) and max (most recent) dates on your data. In this example, we created a field called [CurrentMonth] to be used as the max date, but you could certainly use a date provided on your data set.

Then, set up the Generate Rows tool as follows. We are creating a new field called [Date] that will hold all of the date values between the earliest date on the data set and the current month.

The Initialization Expression, [Min_Date], is the starting value that will be used for the first row. In our case, it is the earliest date on the data set as identified by the Summarize tool.

The Loop Expression is what is being used to generate new rows. In our case, it increments [Date] by one month until the Condition Expression is met. As long as the Condition Expression returns true, another row will be created. Once the Condition Expression returns false, no more rows will be created.

Once these rows are created, you will have to join them back to your dataset and fill in any missing columns appropriately. See the attached workflow for an example of how to do that.
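A minimal sketch of the three Generate Rows expressions described above (the field names [Min_Date], [CurrentMonth], and [Date] follow the example; the attached workflow may differ slightly):

// Initialization Expression: start at the earliest date found by the Summarize tool
[Min_Date]

// Loop Expression: add one month for each new row
DateTimeAdd([Date], 1, "months")

// Condition Expression: keep generating rows while the date is on or before the current month
[Date] <= [CurrentMonth]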
View full article
Sometimes, data will come with multiple header lines; for example, year and quarter might be broken into two rows, like this:

That is not very helpful when you are trying to analyze your data.

You could use a Select tool and type in new names, OR you could do a few transformations and use the Dynamic Rename tool to merge the rows dynamically, so you are prepared when 2017 rolls around the corner and they start adding new columns to your file!

To fix data that looks like the above, we start with the Sample tool to split the data into two data streams - one with just the values and one with the header information. Then, we transpose the data to let us fill in the missing years (a sketch of that fill-in step is at the end of this article) and use the Summarize tool to concatenate the resulting field values.

Now we can use the header lookup we created as the right input to a Dynamic Rename tool, using the "Take Field Names from Right Input of Rows" option and selecting "Name" as the old field name and Concat Value as the new combined field name.

See the attached workflow for details of the tool configurations.
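As a minimal sketch of that fill-in step (an assumption on my part; the attached workflow shows the exact configuration), a Multi-Row Formula tool on the transposed header rows could carry each year value down into the blank rows beneath it, assuming the transposed values land in a [Value] column:

// If this row's year is blank, reuse the year from the row above
IF IsEmpty([Value]) THEN [Row-1:Value] ELSE [Value] ENDIF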
View full article