Alteryx Designer Knowledge Base

Definitive answers from Designer experts.
Some time ago, there was a nice writeup: The Ultimate Alteryx Holiday Gift of 2015: Read ALL Excel Macro: Part 2. This amazing macro allowed me to read any Excel file, regardless of the number of tabs. That is, until I started working with users who use diacritics in their sheet names. If we try to use the mentioned macro on such a file, we receive an error like this: 'not a valid name'. I decided to approach this as a two-level macro and use the Directory functionality to read all possible xls files within a folder.

Level 1: Read xls Tabs

This macro reads all the tabs of a single xls file. I used an R tool that loads the readxl library, which allows us to read xls files. I used the excel_sheets function to extract each sheet name and compile it with the file path. You will receive a column per tab in the xls file. I cleaned these two values and passed them along as Path and Tab. This data is sent to the Read xls Files macro.

Level 2: Read xls Files

This macro assembles the full path (Path + Tab) using the structure xls files require, then uses the Dynamic Input tool to dynamically choose the data and output its contents.

Note: Update the XXXX placeholders with your corresponding paths, and don't forget to install the R library readxl under your %Program Files% path, e.g. C:\Program Files\Alteryx\R-3.5.3\library.
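As a rough illustration of the Level 1 logic, here is a minimal R sketch, assuming the readxl package is installed; the folder path is a placeholder:

# Minimal sketch of the Level 1 logic: list every sheet in each xls/xlsx
# file found in a folder, and emit one row per (Path, Tab) pair.
# Assumes readxl is installed; the folder path is a placeholder.
library(readxl)

folder <- "C:/Data/Excel"    # placeholder path
files  <- list.files(folder, pattern = "\\.xlsx?$", full.names = TRUE)

sheet_list <- do.call(rbind, lapply(files, function(f) {
  data.frame(Path = f,
             Tab  = excel_sheets(f),    # handles diacritics in sheet names
             stringsAsFactors = FALSE)
}))

print(sheet_list)

Each row of sheet_list corresponds to one Path/Tab pair, which is what the Level 2 macro consumes.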
View full article
So we’ve generated our list of Key Values (in Part 1) to be read by the Tree Interface tool, and when we look at the Tree Interface it looks correct (Figure 1). So how can we use these selected values in a workflow? As a review, our original data consists of very pretend pet data (Figure 2).

Now we need to associate the Key Values with our data so that the data can be queried. This can easily be done by adding a couple of steps to the workflow we created in Part 1 (the updated workflow is attached). By adding a Transpose tool, the table can be manipulated to get each possible Key Value for each record; then a Select tool renames the Value field to Key. Those results are saved to a new file (Pet data with key values.yxdb) that will be used in the app (Figure 3).

When values are selected in a Tree, a list of the Key Values is returned with a line break between each value. In the Tree Interface example above (Figure 1), the Key Values 111 and 26 are returned as a string data type (Figure 4).

Setting up the App

We now have all the parts we need to create the app: Pet Key Values.yxdb (from Part 1) and Pet Data with Key Values.yxdb. The start of the app is a Tree Interface tool that points to the Pet Key Values.yxdb file, and an Input tool pointing to the Pet Data with Key Values.yxdb file (Figure 5).

We know the Tree Interface returns a list of Key Values, so we can then filter our pet data on the Key Values. A Filter tool is added with a template filter that will ultimately be updated by an Action tool once we connect the Tree Interface to the Filter tool (Figure 6). Now let's connect the Tree tool to the Filter tool and configure the Action tool (Figure 7). We are going to select the Action Type 'Update Value with Formula' and update the full Expression. Now for the Formula: click on the ellipsis (...) to add this formula (Figure 8):

'[Key] in ("' + REGEX_Replace([#1], '\n', '","') + '")'

This expression will build an IN statement based on the values selected in the Tree Interface. If the values returned from the Tree tool are, as in Figure 4:

111
26

the expression will build a string starting with: [Key] in ("

The REGEX expression looks for all line breaks (or newlines) in the connection 1 ([#1]) value, denoted by the \n, and replaces each occurrence with: ","

And then finishes the expression with: ")

The final expression looks like this: [Key] in ("111","26")

This will replace our template expression when the app is run. To test your Action tool expression, use the Interface Designer's Debug tool. Open the Interface Designer from the View menu in Alteryx. This Community article goes into more detail on how to use the Debug tool.

With that, the Tree tool is complete. The Key Values from the selected Tree Interface items are used to select the desired records from the dataset for further processing. Add a Browse tool to the True output of your Filter to view the results when the app is run. Be sure to configure your app to show the results of the Browse when the app completes; this setting can be found in the Interface Designer, under Properties (Figure 9).

As with many things in Alteryx, there is usually more than one way to accomplish a task. In the attached App example workflow, there is an additional method to take the results from the Tree tool and query the pet dataset. All related workflows are attached to the post and saved in Alteryx 10.0 format.
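For comparison, here is the same string manipulation sketched in R; gsub() stands in for REGEX_Replace, and the sample selection string is illustrative:

# Build the IN statement the Action tool produces: replace each line
# break in the Tree selection with "," and wrap the whole thing.
tree_selection <- "111\n26"    # illustrative Tree output

filter_expr <- paste0('[Key] in ("',
                      gsub("\n", '","', tree_selection, fixed = TRUE),
                      '")')

print(filter_expr)    # [Key] in ("111","26")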
View full article
Sometimes, especially when interacting with tools or files outside Alteryx, you need to explicitly control the order of tool execution. The classic example is a process whereby you 1) INSERT to a SQL table where an auto-incrementing ID is applied, 2) read the records back in order to get the auto-incremented ID, and 3) insert the auto-incremented ID into a different table.

In Alteryx workflow streams that terminate (as in an Output or Render), you can't reliably use Block Until Done to control the stream. One way to reliably control the order of events is with batch macros.

Batch macros guarantee that 1) every iteration is complete before the next iteration begins, and 2) all iterations are complete before the macro reports itself complete. So if you place a process inside a batch macro and create an output stream from that macro, all processing will be done, and the macro can still exist in the middle of a workflow stream. You don't even need a Block Until Done as part of the process.

Please see the attached examples.
View full article
In SQL, you can join on a range, using code similar to the snippet below.

SELECT f.ColumnId, f.FruitName, f.StartDateTime, f.EndDateTime
FROM dbo.Fruit_List f
INNER JOIN dbo.Greek_Alphabet g
ON g.DateTime BETWEEN f.StartDateTime AND f.EndDateTime

Alteryx does not have a "join on range" command, but you can get the same result by appending all records and using a Filter tool.

For advanced joins, when a value from one file is between (< or >) a value from another file, visit and download the Advanced Join macro.
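For comparison, here is a minimal R sketch of the append-then-filter approach; merge with by = NULL performs the Cartesian "append all records" step, and the sample data is illustrative:

# Append-then-filter analog of a range join.
# merge(..., by = NULL) produces the Cartesian product; subset()
# plays the role of the Filter tool.
fruit <- data.frame(FruitName     = c("Apple", "Pear"),
                    StartDateTime = as.POSIXct(c("2016-01-01", "2016-03-01")),
                    EndDateTime   = as.POSIXct(c("2016-01-31", "2016-03-31")))
greek <- data.frame(Letter   = c("Alpha", "Beta"),
                    DateTime = as.POSIXct(c("2016-01-15", "2016-06-01")))

appended <- merge(fruit, greek, by = NULL)    # cross join (append all records)
matched  <- subset(appended,
                   DateTime >= StartDateTime & DateTime <= EndDateTime)
print(matched)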
View full article
Calgary is a list count data retrieval engine designed to perform analyses on large-scale databases containing millions of records. One use case for a Calgary database in an app or macro is to offer users certain predetermined criteria (e.g. geography) to select, and then return the corresponding data to them. A back-end process can combine multiple data sources, format and pre-filter as needed, and load them into a Calgary database that can be queried quickly by the app or macro. This way, the bulk of the processing happens outside of the app or macro, allowing data to be returned more quickly.
View full article
Sometimes large amounts of data can 'overwhelm' a tool or process in your workflow and make it appear stuck or frozen. This was the case recently when a user attempted to pass 7 million rows of data to a CASS tool. Splitting these records into smaller chunks makes the process run much more quickly.
View full article
In this posting, we'll take a look at Excel functions that return today's date and current time. Then we'll see how to use Alteryx to do the same thing. We'll take this a step further and show how Alteryx can be used to return a large amount of date-related information for any date using macros and apps.

To get today's date in Excel, you use the =TODAY() function, and the =NOW() function will return today's date and time. You can format date and time the way you'd like (e.g. Nov-10 or November 10, 2016 instead of 11/10/2016).

Alteryx also has a couple of ways to get today's date and time. The first is a macro available in the In/Out toolset called 'Date Time Now'. The tool's configuration provides many options for how you'd like to see the data, including date as well as date and time.

The other method is to use a tool (such as the Formula tool) where an expression can be used with the function DateTimeNow(). Use the same method if you want just the date or just the time. There is a similar function called DateTimeToday(), which returns the current date as of midnight (so the time comes back as 00:00:00).

What if you want information about a date other than today? I've written about calendar and date aggregation before and have made a calendar macro available for anyone to use. If you have a date in yyyy-mm-dd format, you join it to the Date field in the macro, which returns the following fields:

Date: yyyy-mm-dd format; includes every day from 2000-01-01 through 2099-12-31.
Year: yyyy format.
Quarter: numeric representation of quarter (1, 2, 3 or 4 rather than Q1, Q2, Q3 or Q4).
Month: numeric representation of month; NO leading zeros.
MonthName: January, February, March, April, May, June, July, August, September, October, November and December; completely spelled out rather than abbreviated.
WeekNumber: numeric representation of week; values generally range from 1-52, but occasionally a year will have a week 53; weeks 1 and 52 (or 53) may be partial weeks (i.e. fewer than seven days).
Day: corresponds to the calendar date with values from 1-31 (for months with 31 days); NO leading zeros.
DayName: Sunday, Monday, Tuesday, Wednesday, Thursday, Friday, Saturday; spelled out rather than abbreviated; Sunday is the beginning of the week.
DayYear: day of year; values range from 1-365, except for leap years, which have a day 366.
DayQuarter: day of quarter; values range from 1-92.
DayWeek: numeric representation of the day of week, where 1=Sunday, 2=Monday, 3=Tuesday, 4=Wednesday, 5=Thursday, 6=Friday and 7=Saturday.
Week StartDate: date in yyyy-mm-dd format and data type = Date; week begins on Sunday with the possible exception of week 1.
Week EndDate: date in yyyy-mm-dd format and data type = Date; week ends on Saturday with the possible exception of week 52 (or 53).
Month StartDate: date in yyyy-mm-dd format and data type = Date.
Month EndDate: date in yyyy-mm-dd format and data type = Date.
Quarter StartDate: date in yyyy-mm-dd format and data type = Date.
Quarter EndDate: date in yyyy-mm-dd format and data type = Date.

I've taken this a step further and created an app with the calendar macro embedded in it, which allows a user to select a date and the fields they want returned at run time.

I've made a couple of versions of the Calendar macro: one where the week begins on Sunday and the other where the week begins on Monday. In the attached app, the macro where the week begins on Sunday is used, but it can easily be replaced by the one beginning on Monday.
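For readers who prefer code, here are rough R equivalents of the functions discussed above, along with a few of the calendar fields derived directly; this is a minimal sketch, not the macro itself:

# R equivalents of the Alteryx functions, for comparison.
now   <- Sys.time()    # like DateTimeNow()
today <- Sys.Date()    # like DateTimeToday(), date only

format(now, "%Y-%m-%d %H:%M:%S")    # date and time
format(today, "%Y-%m-%d")           # just the date
format(now, "%H:%M:%S")             # just the time

# A few of the calendar fields the macro returns, derived directly:
format(today, "%Y")            # Year
quarters(today)                # Quarter (returned as "Q1" style, not numeric)
format(today, "%B")            # MonthName
weekdays(today)                # DayName
as.integer(format(today, "%j"))    # DayYear (day of year)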
View full article
This article is part of the CS Macro Development Series. The goal of this series is to communicate tips, tricks, and the thought process that goes into developing good, dynamic macros.

Many macros need to be especially dynamic to allow the user to select which field to use in a very complex process. Take the Make Grid tool, for example. It asks the user to specify a spatial object field, and only carries that field through its calculations. What comes out of the tool are two new fields, GridName and Grid, and none of the original fields at all.

I set out to build a macro just like this tool, except to generate a hexagonal grid. I started by building a normal workflow that could do this process, and when I was ready to convert it to a macro, I realized that I wasn't sure of the best way to enable it to choose this field dynamically. There are two main ways to get data into your macro. Here's a quick summary of how they work:

Field Maps

The Macro Input tool has a checkbox in its configuration that reads Show Field Map. If this is unchecked, then your macro won't do anything with the data; it will just stream in everything as-is and trust that the logic inside knows how to account for anything you throw at it. If it is checked, then your macro will create drop-down menus in its configuration window that ask for the fields present in the Template Input. These drop-down menus let you select which fields to stream into the macro in place of the ones in its template. The field map needs all those drop-downs to be filled out for it to do its thing, but if you want to make one of these inputs optional, just add (Optional) to the field name in your macro template.

Advantages:
1. Easy to set up! One checkbox and your template are all that's needed.
2. Makes sure only mapped fields enter the macro. This is good when converting a workflow to a macro, because you don't need to worry about every form the input data stream could be in. If your stream has other fields, they will get tacked on to what comes out of the macro.

Drop Down

Drop Down menus are an alternative way to bring fields into your macro that offers a bit more control over the process. They're particularly useful when connected to the anchor of a Macro Input tool. You can then update a Select tool with the selected field to choose which field is being passed along.

Advantages:
1. Allows you to specify which fields to show to the user, filtered by field type. (In this example, I am only allowing spatial objects.)
2. You can have a default selection populate the interface. (Here, any field starting with "SpatialObj" gets selected automatically in the configuration of the macro.)
3. If you want something to be optional, you can use the [None] option.

"The Select Tool Trick"

If you make use of the Drop Down tool to bring in your data, you'll need to update a Select tool. Here's a little trick that will make converting your workflows a lot easier. First, uncheck *Unknown in the Select tool, since this will bring in every field not explicitly unchecked here. Then, have only the field you're selecting for checked, and navigate over to your Action tool and point it at the selected field. Instead of repeating this for every tool using this field, just have the field renamed in the Select tool, and refer to it by that name in all your downstream tools.

This turned out to be just what I needed for the Make Hex Grid macro, where I have a ton of stuff happening downstream and I only wanted one field to get through my Select tool. Check out the example for a simplified version of this.
View full article
It's not uncommon to have a situation where you need to conditionally join one dataset to another. Perhaps the most common case is when you want to join one file to another where a date from the first file is between, greater than, or less than a date (or dates) on the second file. The default tools found in the Join section of the tool palette don't provide a simple way of doing this (it can be done, but you need to string several tools together to make it work. There is a better way! Read on...).

There is a great macro available in the public Alteryx Gallery called Advanced Join (find it here, but spoiler alert... you can download the attached workflow, which includes this macro, so you don't have to go to the Gallery to get it). The Advanced Join gives you greater latitude than the Join tool. Most notably, you can select records from file A that are unique to A AND intersect with file B. Now you may be thinking, “I can do that by unioning the records from an inner join with records from a left join,” and you would be correct. But it takes two tools to do what one Advanced Join does. More importantly, the Advanced Join allows you to put a conditional statement on your join, which is something you can't do with the Join tool. And it's this feature, the ability to use conditional statements in a join, which we will focus on here.

Let's get into some examples. I have a file, 'Fruit List', which contains data about various fruits: a Column Id, a Fruit Name, a Start DateTime and an End DateTime. I have a second file, 'Greek Alphabet', which contains a Column Id, a Greek letter and a DateTime.

I want to join the two files on ColumnId where the DateTime from Greek Alphabet (file B) is BETWEEN the Start DateTime and End DateTime from Fruit List (file A). Here's the workflow and a screenshot of how to configure the Advanced Join. In the results, only one record from Greek Alphabet matched one from Fruit List on ColumnId where Greek Alphabet's DateTime was between Fruit List's Start DateTime and End DateTime.

In the next example, I have the same Fruit List file and want to join it to another file, Greek Alphabet, that contains just one datetime field. The first thing to note is that both files have a field called 'DateTime.' We'll want to give these unique names to avoid ambiguity when we write our conditional statement in the Advanced Join configuration. I want to join both files on ColumnId, but only when the DateTime from Fruit List is LESS THAN the DateTime from Greek Alphabet.

Let's look at one last example. This time, I'm going to use the Fruit List and Greek Alphabet files from the first example (Fruit List has a Start DateTime and an End DateTime). I'm interested in matching records where the DateTime from Greek Alphabet is BETWEEN the Start DateTime and End DateTime from Fruit List. I'm not matching on ColumnId this time.

For the Advanced Join configuration, I'm going to cross join my files. (CAUTION: the resulting join could contain as many rows as the product of the number of rows of the incoming datasets - a.k.a. a Cartesian join - depending on how restrictive your conditional is. This means if you're joining two datasets that contain a million records each, the resulting dataset could contain as many as one trillion records!) If I had wanted to match on ColumnId, I would have had to do that separately using a Join tool. The cross join option only allows you to apply a conditional statement. In the results from this third example, notice how 10 records from Greek Alphabet were joined to just one record from Fruit List.

The Advanced Join tool can save you time and a lot of headaches when you want to join files using a conditional statement. It has some limitations - you can only join two datasets and include one conditional statement per tool, plus the cross join caution mentioned above - but the Advanced Join provides greater capability and flexibility than the standard Join tool.
View full article
Date aggregation has been discussed before, and macros have been developed to assist with aggregating dates at various levels (e.g. month, week or year). Some of these macros even allow you to find the sum, mean, median or mode of dates. See an example here.

Yet I've found the data to be incomplete, or to require additional processing to get it the way I wanted. Rather than copying and pasting this process from workflow to workflow, I decided to create a macro (one that doesn't require any configuration!) and make it available to a wider audience, with the hope that it will save time and energy by eliminating the need to recreate the process of translating a date into a week number, quarter, etc.

Data Format and Layout:

Essentially, it works like an input file. For all practical purposes, you can think of it as a date table with Date being the primary key. As long as you have a date in yyyy-mm-dd format in another file, you can match to this calendar table.

Field Definitions & Notes:

Date: yyyy-mm-dd format; includes every day from 2000-01-01 through 2099-12-31.
Year: yyyy format.
Quarter: numeric representation of quarter (1, 2, 3 or 4 rather than Q1, Q2, Q3 or Q4).
Month: numeric representation of month; NO leading zeros.
MonthName: January, February, March, April, May, June, July, August, September, October, November and December; completely spelled out rather than abbreviated.
WeekNumber: numeric representation of week; values generally range from 1-52, but occasionally a year will have a week 53; weeks 1 and 52 (or 53) may be partial weeks (i.e. fewer than seven days).
Day: corresponds to the calendar date with values from 1-31 (for months with 31 days); NO leading zeros.
DayName: Sunday, Monday, Tuesday, Wednesday, Thursday, Friday, Saturday; spelled out rather than abbreviated; Sunday is the beginning of the week.
DayYear: day of year; values range from 1-365, except for leap years, which have a day 366.
DayQuarter: day of quarter; values range from 1-92.
DayWeek: numeric representation of the day of week, where 1=Sunday, 2=Monday, 3=Tuesday, 4=Wednesday, 5=Thursday, 6=Friday and 7=Saturday.
Week StartDate: date in yyyy-mm-dd format and data type = Date; week begins on Sunday with the possible exception of week 1.
Week EndDate: date in yyyy-mm-dd format and data type = Date; week ends on Saturday with the possible exception of week 52 (or 53).
Month StartDate: date in yyyy-mm-dd format and data type = Date.
Month EndDate: date in yyyy-mm-dd format and data type = Date.
Quarter StartDate: date in yyyy-mm-dd format and data type = Date.
Quarter EndDate: date in yyyy-mm-dd format and data type = Date.

Common Use: The attached macro/date table provides a quick and easy way to access date data you would otherwise need to calculate or decipher. Let's say you have a file with retail transactional data. On the file, you have a date and line-item revenues, but you want to determine how much revenue was generated by week. Ordinarily, you would have to translate your date into a corresponding week number before you could summarize your revenue data.

Missing Leading Zeros: Numeric fields (specifically, Byte and Int16 from the data format above) will not have a leading zero. In the event you need to add the leading zeros back in (most commonly for the Month and Day fields), change the data type to String using a Select tool and then use the following expression in a Formula tool:

PadLeft([FieldName], 2, '0')

This ensures the field has two characters. If it has only one, a zero will be added to the left (e.g. 1 becomes 01). A workflow illustrating this process is attached; the macro is included in the package.
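For reference, here is a minimal R analog of the PadLeft expression; sprintf pads integers, and stringr::str_pad (assuming stringr is installed) is the closest equivalent for string fields:

# Pad month/day values to two characters with a leading zero.
sprintf("%02d", c(1, 9, 11))    # "01" "09" "11"

# Closest analog of PadLeft for values that are already strings:
stringr::str_pad(c("1", "9", "11"), 2, pad = "0")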
View full article
You probably already know that you can output results to multiple sheets of an Excel file. If not, you should check out our resource that explains how to do that very thing. But what if you run that workflow every day, and you want to keep the outputs from days past?
View full article
This article is part of the CS Macro Development Series. The goal of this series is to communicate tips, tricks, and the thought process that goes into developing good, dynamic macros.

Suppose you have a dataset that will pass through one macro if a condition is true, and an entirely different macro if the condition is false. In the event the condition is true for all records, no records will be sent to the false-side macro. The false-side macro expects data and throws an error whenever it doesn't find any. Similarly, in a case where the condition is false for all records, no records will be sent to the true-side macro, and it errors. You need a process that will bypass the macros whenever data is unavailable. Let's see how you can do that.

First, create data scenarios where all conditions can be tested. In the first scenario, I assigned a field, Sum_Test, a value of 1 for half of the records and a value of 0 for the other half. A second scenario assigns a value of 1 to all records. A third scenario assigns a value of 0 to all records. Setting up these scenarios allows me to test all possibilities.

The next step is to filter by the condition. Depending on which data scenario from above you use, data may not exist on the true or the false side. No data means your workflow will fail. We need a workaround so that doesn't happen.

What happens after the data goes down the true side or the false side is essentially the same in terms of process. The batch macro below is found downstream from the true side as well as the false side; functionally, they work the same. The batch macro determines whether data is available. If there is, data is detoured to the Formula tool. If not, the detour bypasses the Formula tool entirely and keeps the workflow from throwing an error.

NOTE: the Formula tool in the illustration below is a stand-in for any process that requires data if an error is to be avoided. This could be anything, usually another macro. For the purpose of this illustration, I'm simply showing a single tool.

What follows are instructions for how each tool in the batch macro/detour combination is configured.

First, enter ‘Sum_Test’ as the label for the Control Parameter. Then write an expression in the Condition tool that checks if ‘Sum_Test’ is null. If ‘Sum_Test’ is null (True), direct the Detour tool to go to the right in the ‘T’ side Action box. Similarly, if ‘Sum_Test’ is not null (False), direct the Detour tool to go to the left in the ‘F’ side Action box. Connect both Action boxes to a Detour tool. The Detour tool has no configuration. Every Detour must be stopped by a Detour End tool.

Build a similar batch macro for the false side of the Filter tool. Union the results from the true and false sides. I put a Frequency Table tool after the Union to verify the results.

To test, disconnect the input to the Filter tool and connect a new test input to exercise all the various scenarios.
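The essence of the detour logic can be sketched in a few lines of R; run_process here is a hypothetical stand-in for the inner macro or Formula tool:

# Apply the downstream process only when records actually exist;
# otherwise pass the (empty) data straight through, avoiding an error.
run_if_data <- function(df, run_process) {
  if (nrow(df) == 0) {
    return(df)          # bypass: no records, skip the process entirely
  }
  run_process(df)       # records exist, run the normal process
}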
View full article
This question is highly relevant when you're looking to circumvent Alteryx errors after hardcoding a workflow process to expect certain field names. When you're processing multiple files with different field schemas, you don't want Alteryx to stop processing all the files because one file fails. You ideally want Alteryx to skip the file and carry on!

Attached is a sample workflow that can help you answer the above question.

The Workflow

The workflow starts with a Directory tool browsing to the folder location that has all your input files. As this example uses xlsx files, we use a Formula tool to append the sheet name to the full path. We then feed this data into a batch macro.

The Macro

The batch macro allows us to process one file at a time, with each file going through the same process. The input file within the macro is replaced by the incoming file path. We then use a RecordID tool to preserve data integrity when we transform the data for the filtering process downstream. In the Transpose tool, we use the RecordID field as a key field to pivot on, and transpose the rest of the data fields to create a Name field (your field headers) and a Value field (your data). This part stays dynamic if new fields get added, because we have 'Dynamic or Unknown Fields' checked; new fields will fall within the Name field, which is referenced in the Filter tool. Within the Filter tool, you can now add in the field variables you need for your workflow. On the true side of the filter you will have the fields and values which met your criteria, and on the false side you will have the fields which did not. The ultimate goal, though, is to bring back the whole dataset if that file had fields which met your criteria. To do this, we use a Formula tool to add in the file path, which we can use outside of the macro to bring the whole dataset together. The final tool is the Cross Tab tool, to orient the data back into its original format using the RecordID field as the group-by field. Save the macro (File >> Save As).

Back in the Workflow

Insert the macro into the workflow and connect it to the Formula tool. In the configuration of the macro, choose 'Full Path' from the dropdown; this will update the Input tool and the Formula tools. The two outputs on the macro refer to the true and false sides of the filter. You can now use a Join tool and connect the true and false outputs to the left and right inputs of the Join tool. The fields you will join on are the full path and RecordID. If a file met your condition in the filter, it will have values on both the left and right macro outputs; therefore, in the 'J' node of the Join tool, you should see the data from your successful files. In the 'R' output you should see all the data from the files which did not meet your condition, as they have nothing to join to on the left 'true' side. You can then paste your desired workflow onto the 'J' output of the Join tool and continue your data process.

This will now only allow files with the desired field headers to pass through, and you have kept your workflow from breaking when a file with an incorrect field schema is passed through.

S/O to Shaan Mistry, who brainstormed this workflow with me.
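For comparison, here is a minimal R sketch of the same skip-and-carry-on idea; the folder path and required field names are placeholders, and it assumes the kept files share the same layout:

# Read each file, keep it only if it has the required headers, and let
# tryCatch() swallow files that fail to read at all.
library(readxl)

required <- c("CustomerID", "Revenue")    # placeholder field names
files    <- list.files("C:/Data/Inputs", pattern = "\\.xlsx$",
                       full.names = TRUE)

good <- list()
for (f in files) {
  df <- tryCatch(read_excel(f), error = function(e) NULL)
  if (!is.null(df) && all(required %in% names(df))) {
    df$FilePath <- f                      # keep provenance, as in the macro
    good[[f]] <- df
  }                                       # else: skip the file, carry on
}
combined <- do.call(rbind, good)          # assumes kept files share a layout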
View full article
This article is part of the Client Services Macro Development Series. The goal of this series is to communicate tips, tricks, and the thought process that goes into developing good, dynamic macros.
View full article
Are you tired of your boring old workflow? Just sitting there in your Designer, slacking off?
View full article
Duplicate macros can often occur when you import macros within workflows or install different versions of the same macro. Fortunately, there are a few steps you can take to clean up your tool palette.

Macros downloaded from the Public Gallery: Often, macros supported by Alteryx products will be installed into a particular parent category, which is specified within the meta info of the tool. Some installers will delete the previous version when you install a new one; however, an installer may not have this functionality. You can check these folders to remove duplicate macros:

Admin install of Alteryx: C:\Program Files\Alteryx\bin\RuntimeData\Macros (relative to where you installed Alteryx)
Non-admin install of Alteryx: C:\Users\[Your User Profile]\AppData\Local\Alteryx\bin\RuntimeData\Macros

You will want to delete both the .yxmc and the .bak files from that folder. If the macro is unsupported by Alteryx products, you will have to delete any duplicates from the file folder you saved it into.

Macros installed from a zipped Alteryx workflow (.yxzp): You can specify where you want the workflow to be saved, so if a macro is packaged within the workflow, it will reside within one of the _externals folders in that location, e.g. .\Alteryx Example Modules\Macro - Iterative\IterativeDemo\_externals\1. To avoid duplicate macros, save to a new folder location. By default, this is the behavior of Alteryx Designer & Server.

Macros sent (.yxmc) in an email: If the macro already exists in a folder on your machine, you can choose to overwrite that same macro or delete the previous version before saving it.
View full article
Business Problem: When creating custom Tree questions in Alteryx, the module or macro designer will typically use the “Custom File/Database” mode of the tool. This is a great option because of the ease with which the data can be arranged to populate a great Tree question. Handling the Action side of a Tree question can sometimes be a little challenging, and the goal of this macro is to streamline that process.

Overview: Consider for a moment an example where the user has selected “Blue Whale” from the Tree. Would you like just the record for “Blue Whale”, or would you like the parent records from that selection as well? Depending on your choice in the Tree Processing macro, you can receive either output from the tool. What if we had selected “Whales”; how would you like to handle the child records from that selection? The Tree Processing macro makes this easy, as it will return all of the child records for you automatically.

The macro is designed to reside in the .yxwz from which the user will interact with the Tree question. The same file that is referenced in the Tree question should also be present as an Input tool. This Input tool populates the connection to the “D” (Data) input of the Tree Action macro. The “T” (Tree) connection is populated by a connection from a Text Input tool. As every Question has an accompanying Action, your .yxwz's Action will update the Text Input tool.

The Tree Action macro has two configuration settings: the first defines whether you wish to exclude branches as records in the output; the second allows you to define the key length. Apart from these settings, the Tree Action macro will take care of the rest.

I have provided a sample Analytic App that details the use of the Tree Action macro, and the difference between including branches as records and excluding them.
View full article
Filtering data is probably one of the simplest Alteryx functions, but it can become time consuming to build the expression, especially when filtering on a larger number of values. Wouldn't it be nice to be able to feed a list of values to filter your data? I'll show you how you can do this with a simple batch macro.

Batch Macro: Set up the Filter tool within your batch macro with a simple "In" statement, then use the Control Parameter to update the value within the expression. Connect the Control Parameter to the Filter tool and configure it to update the value within the expression as shown below; this will update that portion of the expression with the column the user defines in the macro.

Building the Expression: In a new workflow, input the list of values you would like to use to filter your data. Connect your data to a Summarize tool, configured to concatenate your values into a single column using "," as the Separator and " as the Start and End. Drag out a Formula tool after the Summarize tool, then add the parentheses as shown below to complete the expression.

Filter: Add your batch macro into the workflow by right-clicking on the canvas and selecting Macro; you will need to browse to the location where you saved it. Then connect the Formula tool to the "¿" input of the batch macro. Input the data file you would like to filter and connect it to the remaining input of the batch macro. Finally, click on the Control Parameter tab and select the column to be used as the Control Parameter. We are not using the Group By function for this particular example, so there is no need to configure it.

See attachment.
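For comparison, here is the expression-building step sketched in R; this is a minimal sketch, and the field name [Fruit] and the sample values are illustrative:

# Concatenate a column of values into the string the batch macro
# injects into the Filter tool's "In" expression.
vals <- c("Apple", "Pear", "Cherry")
expr <- paste0('[Fruit] in ("', paste(vals, collapse = '","'), '")')
print(expr)    # [Fruit] in ("Apple","Pear","Cherry")

# In R itself, the whole pattern collapses to %in%:
data <- data.frame(Fruit = c("Apple", "Banana", "Cherry"))
subset(data, Fruit %in% vals)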
View full article
Business Problem: Thematic maps are often used to display data geographically with colored or shaded themes, but sometimes users wish to see the data differently. For this purpose, dot density mapping has become a frequently requested feature for map rendering in Alteryx. Dot density creation is possible with the spatial functions included in the Formula tool. One of these, ST_RandomPoint, will randomly disperse a point within a given polygon. Using this function, anyone can create a macro to produce the data required to generate a dot density map.

Actionable Results: Easily create dot density thematic maps.

Overview: It can often be convenient to view thematic maps as clustered points. This type of visual output is a logical and accurate representation of data occurring in a non-continuous distribution.

Vertical: Any

Key Tools Used: Formula tool (ST_RandomPoint spatial function), Generate Rows

Required Input: The Dot Density macro requires two things: a geography with an associated value, and a configuration of the number of dots per value. Determining the appropriate number of dots per value may require some trial and error to produce desirable results. Knowing the min, max, and median values associated with the base geographies will help you determine an optimal dots-per-value ratio. This, coupled with the size of the dots on the map, greatly affects the aesthetic of the mapping.
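As a rough analog, the sf package in R offers st_sample(), which scatters random points inside a polygon much like ST_RandomPoint does per row; this is a minimal sketch, and the polygon and the dots-per-value ratio are illustrative:

# Dot-density sketch: scatter (value / dots_per_value) random points
# inside a polygon. The square polygon and the ratio are illustrative.
library(sf)

poly  <- st_sfc(st_polygon(list(rbind(c(0, 0), c(10, 0), c(10, 10),
                                      c(0, 10), c(0, 0)))))
value <- 5000                 # e.g. population of the polygon
dots_per_value <- 1000        # one dot per 1,000 units

dots <- st_sample(poly, size = value / dots_per_value)
plot(poly); plot(dots, add = TRUE, pch = 16)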
View full article
Recently, I was asked if it was possible to include a question in a macro that would not show to the user in the configuration window.

In this case, the user was trying to create a random seed value to pass into the Create Samples tool. The way he had it worked out was to create the random seed value in a Formula tool and then pass that into a Drop Down question to update the Create Samples tool. This method worked just fine, except it showed an extra question in his macro configuration that he didn't want the end user to see, since they couldn't change anything anyway.

As it turns out, you can use an Action tool without a question to achieve this. Simply connect the Action tool (in the Interface category of your palette) to the desired tool - in this case, the Create Samples tool - and configure it as planned.

In the attached sample, the macro takes some dummy input and creates a random seed value. The macro itself, when run as a workflow, won't return any results, but you can open the Interface Designer and select "Open Debug" to view the random seed value change.

Download the attached workflow and try it for yourself!
View full article