Alteryx Knowledge Base

Definitive answers from Designer experts.

This little job shows how to connect Alteryx with CartoDB. It uses the Download Tool to pass CartoDB a SQL instruction that inserts additional data into an existing table.
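A rough Python sketch of what the Download Tool is doing under the hood: building a request against CartoDB's SQL API with an INSERT statement as the query. The account name, table, and API key below are placeholders, and the naive value quoting is for illustration only (a real job should escape or parameterize values):

```python
from urllib.parse import urlencode

def cartodb_insert_url(account, table, row):
    """Build a CartoDB SQL API request URL that INSERTs one row.

    `account`, `table`, and the API key are hypothetical placeholders;
    `row` is a dict of column -> value. Values are quoted naively here,
    so escape properly in real use to avoid SQL injection.
    """
    cols = ", ".join(row)
    vals = ", ".join("'{}'".format(v) for v in row.values())
    sql = "INSERT INTO {} ({}) VALUES ({})".format(table, cols, vals)
    query = urlencode({"q": sql, "api_key": "YOUR_API_KEY"})
    return "https://{}.cartodb.com/api/v2/sql?{}".format(account, query)

print(cartodb_insert_url("demo", "mytable", {"name": "x"}))
```

In the attached workflow, the Download Tool plays the role of the HTTP client that sends this URL.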
View full article
Is there a way to stop the Cross Tab Tool from sorting all of the input information alphabetically? Simply add a RecordID to your records and use the RecordID field as a grouping field in your Cross Tab Tool to keep the original order!
View full article
The Field Info Tool is another one of the gems hidden in the Developer tool category – but don’t be intimidated, this is a tool for all of us! The purpose of the Field Info Tool is to give you information about the fields in your data in a form you can use downstream as part of your workflow. There are no settings to configure, so just drop it on your canvas and you’re good to go!
View full article
The Excel serial number for a modern date is most easily calculated as the number of days since 12/30/1899 on the Gregorian calendar. Attached please find a v10.0 workflow that converts a double (Excel date) to a DateTime in Alteryx. It uses this function in a Formula Tool: DateTimeAdd([Datebase],[Field1],"days"), where [Datebase] is a DateTime value set to "1899-12-30 00:00:00" and [Field1] is a double value, e.g. 42397.
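The same conversion can be sketched outside Alteryx in a few lines of Python, using the same 1899-12-30 base date the formula uses:

```python
from datetime import datetime, timedelta

def excel_serial_to_datetime(serial):
    # Excel's effective day 0 for modern dates is 1899-12-30: Excel
    # wrongly treats 1900 as a leap year, and this base absorbs that
    # off-by-one for any date after 1900-02-28.
    base = datetime(1899, 12, 30)
    return base + timedelta(days=serial)

print(excel_serial_to_datetime(42397))  # 2016-01-28 00:00:00
```

This mirrors the DateTimeAdd([Datebase],[Field1],"days") expression exactly: base date plus the serial number of days.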
View full article
Question Can an Excel template be updated if the incoming information is not in block form? Say I want to fill in a form from an existing list into something like this template. Answer This is a question we get from time to time. There are a few options on how to do this.
View full article
Your data analyses may call for selecting records from within a group or category. To isolate the data you want, try using the “grouping” functions embedded in some of the tools in Alteryx Designer, namely the Summarize, Sample and Sort tools.

In this scenario, let’s say you have data from an online home goods store. Your data is grouped by category (Bedding, Bath, Kitchen, Dining and Furniture). Each individual product's record contains a SKU, the unit price and the quantity sold over a given time period. For your analysis, you have been tasked with 1) selecting the first product from each category and 2) finding the most/least expensive items and those with the largest/smallest quantity sold. Follow along using the attached workflow as a reference.

Option 1: Select by First/Last Position

Summarize: To select the first (or last) record from a category of data with the Summarize tool, group the data into categories with the “Group By” process on the field [Type]. Then, select the first (or last) record in the data stream associated with that group using the First or Last process.

Sample: The Sample tool’s configuration includes an option to sample data by group. Sample the last (or first) record from each category while also grouping by [Type].

Option 2: Select by Maximum/Minimum Value

Summarize: To identify the maximum or minimum value of a numeric field, “Group By” the category [Type], then select the Minimum or Maximum value. For example, to find the most expensive item in each category of the data, group the data by [Type] and select the Maximum [Price].

Sort and Sample: Alternatively, sorting the data before sampling records will order the records in ascending or descending order. By utilizing the “Group By” functionality of the Sample tool, the greatest (or smallest) number of items sold per category can be identified. Note that the Sample tool is not limited to selecting only one maximum or minimum value per category [Type]. By changing the value of N, in this case to 5, the five most popular items sold per [Type] are extracted from the data.

*The attached workflow has been modified for compatibility with Alteryx version 10.0 and above
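For readers who think in code, both options boil down to two tiny group-by passes. This Python sketch uses made-up records (category, SKU, price, quantity sold) rather than the workflow's actual data:

```python
# Hypothetical product records: (category, sku, price, qty_sold).
rows = [
    ("Bedding", "B-01", 40.0, 120), ("Bedding", "B-02", 90.0, 30),
    ("Kitchen", "K-01", 15.0, 400), ("Kitchen", "K-02", 60.0, 55),
]

# Option 1: first record per category (Summarize "First" / Sample "First N=1").
first = {}
for row in rows:
    first.setdefault(row[0], row)   # keeps only the first row seen per category

# Option 2: most expensive item per category (Summarize "Group By" + Max [Price]).
priciest = {}
for row in rows:
    if row[0] not in priciest or row[2] > priciest[row[0]][2]:
        priciest[row[0]] = row

print(first["Bedding"][1])     # B-01
print(priciest["Bedding"][1])  # B-02
```

Sorting by quantity sold and keeping the first N rows per category would likewise reproduce the Sort + Sample "top 5" pattern.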
View full article
Question Does Alteryx support web crawling? Answer Yes. In Alteryx you can look at a web page, find embedded links (e.g. using regular expressions), and add them to a queue of "links to visit", then continue visiting and adding indefinitely, while also extracting various other tidbits of interest from each page visited.   In a Text Input Tool, enter URLs to crawl. Alteryx can take the URLs from a data stream (a database holding all of the URLs we want to crawl) and iteratively repeat the process of connecting and retrieving the code behind each URL:       Use the Download Tool and point it to a web address:      Alteryx returns the whole content available for that URL:     The attached v10.0 workflow connects to Wikipedia and "crawls" the content of that URL. The content can be saved, parsed, etc., and additional functionality may be added to create a very powerful crawling engine.
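The visit/extract/queue loop described above can be sketched in a few lines of Python. Here the page fetcher is injected as a callable (in Alteryx that role is played by the Download Tool inside an iterative macro); the URLs and pages are invented for the example:

```python
import re
from collections import deque

# Naive link extractor, analogous to using a RegEx tool on the page source.
LINK_RE = re.compile(r'href="(https?://[^"]+)"')

def crawl(start_url, fetch, max_pages=10):
    """Breadth-first crawl: visit a URL, regex out its links, queue them.

    `fetch` is any callable returning the page HTML; it is injected so
    the sketch stays self-contained and testable.
    """
    seen, queue, order = set(), deque([start_url]), []
    while queue and len(order) < max_pages:
        url = queue.popleft()
        if url in seen:
            continue
        seen.add(url)
        order.append(url)
        for link in LINK_RE.findall(fetch(url)):
            if link not in seen:
                queue.append(link)
    return order

# Two tiny in-memory "pages" linking to each other.
pages = {
    "http://a.example": '<a href="http://b.example">b</a>',
    "http://b.example": '<a href="http://a.example">back</a>',
}
print(crawl("http://a.example", lambda u: pages.get(u, "")))
```

The `seen` set is what keeps the crawl from looping forever, just as a Unique tool would in the Alteryx version.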
View full article
Rather than develop complex processes to calculate cumulative totals, the Running Total tool can help you compute these calculations quickly and easily.  What’s more is that this tool is not only useful for calculating accumulation in a dataset, but it has also proved effective in calculating running reductions, or withdrawals, in datasets as well.    To provide some context around the use of the Running Total tool, I’ve included a workflow with this article that I’ll refer to throughout this piece.  In this particular example, items from a warehouse need to be distributed to different stores.  The stock of each item is prioritized at each store.  For added color, let’s pretend that the National Hockey League (NHL) is planning on where to stock Stanley Cup paraphernalia.  The teams in the face-off?  The Chicago Blackhawks and the Carolina Hurricanes (a fan can dream!).  The NHL has ten items of swag to stock in 26 different stores across the country.  Different items, however, are stocked in different stores.  Item 1, Stanley Cup T-Shirts, will be stocked with top priority given to the stores in Chicago and Fayetteville, NC.  The stores in Seattle and Houston are of the lowest priority; if there’s not enough stock, then these stores will not receive any T-Shirts.  Follow along with the workflow in Alteryx as I run (no pun intended!) through the two uses of the Running Total tool for this example.   Running Forwards   The Running Total tool is probably best known and most used to calculate cumulative sums in a field.  Moving row by row, the tool adds the value of a cell to the sum of those above it.  This tool can be configured to create running totals for groups or categories of data for one or multiple fields, which is useful for consolidating the number of calculations to include in the workflow.  I use these capabilities in the “Running Forward” calculation in the blue box in the workflow.  
For this particular case of “Running Forward”, or accumulation, I calculate the running totals of each item that a warehouse supplies for two fields: the Total Required for the current shipment and the Total Amount Sold Last Quarter (see blue box in workflow).  This type of calculation could be useful for the supplier to compare the differences in stock needs in the current shipment and the previous quarter.  This type of information can be analyzed in a variety of ways, but I chose to insert a Multi-Row Formula and Summarize tool to investigate the percent differences per store per item in the number of items to be stocked and how many previously sold.   Running Backwards   While the Running Total tool is an obvious choice for calculating accumulation, I’ve also used it as a component for calculating totals that “run backwards” to answer questions of when demand surpasses the amount of something in supply.  Now, you’re probably scratching your head and thinking “Wait, this is the exact opposite of what the Running Total tool does!”  Well, you’re right!  But, using the Running Total tool in conjunction with a Multi-Row Formula tool provides the perfect reverse engineering scheme to accomplish “running reductions”.    The green box in the attached workflow demonstrates how the running total of the number of each item can be used as a withdrawal from the total amount available of each item.  The cumulative totals generated by the Running Total tool are used as a sort of test against the total number of an available item.  As an example of this process, look at the data coming out of the Running Total tool for Stanley Cup T-Shirts (Item 1).  Does the cumulative total of the number of required T-Shirts exceed the number of T-Shirts in stock?  To answer that question for T-Shirts and the rest of the items, I apply that logic with a conditional statement in a Multi-Row Formula.  Stores whose required stock needs are not met are revealed after using a Filter tool.  
Looks like everyone gets the number of T-Shirts they need, but some stores will not receive their required number of Items 6, 7, 8 and 9.  If those are the Stanley Cup parkas, my guess is that our friends in Texas and Florida won’t miss them too much, anyway.   So, where can you find this time-saver of a tool?  The Running Total tool is located under the Transformation tool group and is best described using @danielkresina’s observation: it’s the tool whose icon has a guy running on it.  Configure the tool on your canvas, run the workflow, and celebrate the ease of calculating running totals.  Happy Data Blending!  
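The "running backwards" trick reduces to something very compact in code: accumulate demand down the prioritized list, then test the cumulative total against the available stock. The stock level and store list below are invented stand-ins for the workflow's data:

```python
import itertools

stock = 100                        # hypothetical T-shirts on hand
demand = [("Chicago", 40), ("Fayetteville", 35),
          ("Seattle", 20), ("Houston", 25)]   # stores in priority order

# Running Total: cumulative demand down the prioritized list.
running = list(itertools.accumulate(q for _, q in demand))

# Running "reduction": a store's full order is served only while the
# cumulative demand has not yet exceeded the stock (the Multi-Row
# Formula's conditional test in the green box).
served = [store for (store, _), total in zip(demand, running)
          if total <= stock]

print(running)  # [40, 75, 95, 120]
print(served)   # ['Chicago', 'Fayetteville', 'Seattle']
```

Houston falls out because the running total passes 100 before its order is reached, which is exactly what the Filter tool reveals in the workflow.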
View full article
Question If I have a list of dates, how can I find the date of the next Monday for each date? Answer Using a simple workflow, you can calculate the next Monday from any date with a single Formula tool, configured as follows: 1) Determine the day of the week for each date: Day: DateTimeFormat([Sample Dates],"%a") 2) Calculate the number of days to get to the next Monday based on each day: AddDays: Switch([Day],Null(),'Mon',7,'Tue',6,'Wed',5,'Thu',4,'Fri',3,'Sat',2,'Sun',1) 3) Add the number of days from step 2 to each date: Monday: DateTimeAdd([Sample Dates],[AddDays],"days") 4) Verify that the new date is a Monday: VerifyNewDay: DateTimeFormat([Monday],"%a") You can actually do all of this within one formula, save for verifying the day, if you want to get fancy: Monday: DateTimeAdd([Sample Dates], Switch(DateTimeFormat([Sample Dates],"%a"),Null(),'Mon',7,'Tue',6,'Wed',5,'Thu',4,'Fri',3,'Sat',2,'Sun',1),"days")   Things to consider: This workflow assumes that your dates are already in the Alteryx native date format of "YYYY-MM-DD". If they aren't, please visit another Knowledge Base article, Date Conversions, for tips on how to get your dates there!   This was done in 10.1. Sample workflow attached.   Thanks for tuning in!
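The same day-offset arithmetic, sketched in Python for comparison (Monday is `weekday() == 0`, so the offset `7 - weekday()` reproduces the Switch() table, including mapping a Monday to the *following* Monday):

```python
from datetime import date, timedelta

def next_monday(d):
    # Mon -> +7, Tue -> +6, ..., Sun -> +1, exactly like the
    # Switch([Day], ...) mapping in the Formula tool above.
    return d + timedelta(days=7 - d.weekday())

print(next_monday(date(2016, 1, 28)))  # 2016-02-01 (a Thursday -> next Monday)
```

As in the Alteryx version, a date that is already a Monday advances a full week; drop the offset to 0 for Mondays if you want "this or next Monday" instead.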
View full article
Here in Customer Support, new users often ask us how they can add total sums for every column onto the bottom of their datasets. There are several ways to do this. An easy suggestion is using the Summarize and Union tools to sum the columns and add those sums back onto the bottom of the dataset. Sounds simple, right? An even easier suggestion is to show clients the lovely “Add Totals” macro available in the CReW Macro pack, which works for any sized dataset.   However, there is one limitation to this macro: it cannot add a word like “Total” or ”Grand Total” to the string field right before the sums of your rows. This is obviously useful for those making reports and tables in Excel.       Attached to this article is a workflow I created (version 10.1) that shows the functionality of this macro as well as how to add the word “Totals” before the summations. You can also see how I created row totals with and without using the CReW macro. Below I’ll explain how I go about adding this row if you are using the CReW macro. I suggest you download the attached workflow to explore how to do this without using the macro.       First, place a RecordID tool before you use the “Add Totals” macro. Add a Sample tool in one stream out of the macro to get the last record.       Use a Multi-Field Formula tool to replace all Null values in string fields with the word “Total”. In a separate stream, add a Filter tool and filter out the rows where RecordID is Null, since the Add Totals macro won’t sum that field if you unchecked it in the macro configuration.   Lastly, remove the RecordIDs using a Select tool and Union the data back together using the default option, “Auto Config by Name”. This works for datasets of all sizes, so it's a useful trick when creating macros that output report tables.
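Outside Alteryx, the whole pattern (sum each numeric column, label the totals row, append it) fits in a few lines. The rows below are invented sample data, with a label string in the first column and numeric measures after it:

```python
# Hypothetical report rows: first column is a string label, the rest
# are numeric measures to be summed.
rows = [
    ["East", 10, 5],
    ["West", 20, 7],
]

# Sum every numeric column, and put "Total" where the label would be --
# the step the CReW "Add Totals" macro leaves blank.
totals = ["Total"] + [sum(r[i] for r in rows) for i in range(1, len(rows[0]))]
rows.append(totals)

print(rows[-1])  # ['Total', 30, 12]
```

The Multi-Field Formula step in the workflow is doing the same job as writing "Total" into the label slot here.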
View full article
  The Task I remember vividly, many years ago, being tasked by my former boss at a previous company to create trade area maps for each one of our office supply stores. At the time, we had roughly 1,200 stores. I only had MapInfo to work with and no programming skills to fall back on. Needless to say, I was a little distraught.    It didn't take me long to realize that this feat, as far as I was concerned, could not and would not be completed manually. So, a crash course in MapBasic programming was in order. After a lot of studying, a fair amount of pain and hours upon hours of struggle, I was able to create a MapBasic program that would produce these maps in an automated fashion.   Fast forward 4 years and something amazing happened – I was introduced to Alteryx!   After significantly less studying, pain and struggling, I was able to create a workflow which could do the same thing, except this time produce three map views for each store. Not only was I able to create the workflow without programming, saving me countless hours, Alteryx produced each map in significantly less time. So, I would run the workflow before leaving for the day and come back in the morning with nearly 4,000 PDFs waiting for me! We then decided to produce two additional thematics for each store, so now we are talking nearly 12,000 maps. You can only imagine the time savings!   How is this done? The attached workflow (shown below - created in 10.1) is the basis for doing just that. The Grouping function within the Report Map tool (also shown below) is all that is required to create this automation. In this workflow, I want to create a map for each store with each store’s 15-minute drive time and customers. I also want all of the competitors to show up no matter which store is shown. 
In order to achieve this, I have simply configured the workflow so that the store number is in the Grouping Field for the first three layers previously mentioned, and left the Grouping Field blank for the competitors layer.         Want a separate file for each map? In the Render tool (illustration below): Choose a specific output file type in the Output Mode. Choose a name and location for the files. Check the 'Group Data into Separate Reports' box. Select the field to group on (typically 'Group'). Choose how you want to modify the file name for each map.     As a result, you will see a file created for each map:       Requirements: Each layer must have a common field (Store Number in this case). The field type of the common field to group by must be exactly the same (Int16 in this case) - no mixing of Int16 with Int64 or V_String with String, for instance. Things to Consider: Only one object from each of the layers that you are grouping by will show up on each map (unless you have multiple records with the same ID). So, for example, other stores will not show up on the same map, even if they are within the map view. You can add these in as a separate layer and not group it if you want those within the map view to show up on each map. Zoom to layer – select the appropriate layer to zoom to in each map (the drive time boundary in this case). A field called ‘Group’ will flow through the workflow after the Report Map tool, which you can use for map titles, for example.   Please feel free to comment or ask questions, and thanks for reading!
View full article
  One of the great things about Alteryx is the ability to have multiple geography types (points, lines, regions) all contained within the same record.     However, when exporting map layers to the MapInfo TAB file format, a decision has to be made about which spatial field to keep. What if you want all of them?!     Not to worry! The Transpose tool will save the day!     To configure the Transpose tool, select every field except the spatial fields under Key Fields; select the spatial fields under Data Fields.       Just note that there are now 2 fields for the radius: RadiusSize corresponds to the first trade area (row 2) and RadiusSize2 corresponds to the second trade area (row 3).   Now, more importantly, all the spatial objects exist in one field (the Value field) and can easily be exported to MapInfo.      Note: this methodology will also work for ESRI shapefiles; however, all the spatial objects must be of the same type. All points, all polygons...   (An example workflow is attached. This workflow was created in Alteryx 10.0)
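If it helps to see the Transpose operation abstractly: it pivots the chosen Data Fields into Name/Value pairs while repeating the Key Fields on every output row. The record and field names here are invented, with strings standing in for spatial objects:

```python
# One record with several "spatial" columns (strings as stand-ins).
record = {"StoreID": 7, "Point": "point-obj",
          "Radius1": "poly-obj-1", "Radius2": "poly-obj-2"}

key_fields = ["StoreID"]                       # repeated on every row
data_fields = ["Point", "Radius1", "Radius2"]  # pivoted into Name/Value

# The Transpose tool's output: one row per data field.
transposed = [
    {**{k: record[k] for k in key_fields}, "Name": f, "Value": record[f]}
    for f in data_fields
]

print(len(transposed), transposed[0]["Name"])  # 3 Point
```

After this pivot, all the spatial objects live in the single Value column, which is what makes the single-spatial-field export possible.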
View full article
When importing a delimited file, have you ever run across an error saying ‘Too many fields in record…’ for something like record number 1478254, and thought to yourself, “how the heck am I going to find that record and correct it?”   Well, let us help you with that.   This message is essentially telling us that some particular row of data, when parsed, has more fields than the first row of data (usually caused by an extra delimiter in that row). The first step in identifying the row is to change the delimiter in the Input Data configuration window. Change the delimiter from the current delimiter to \0 (that is a backslash zero, which stands for no delimiter). This will read in your file, but instead of parsing, it’ll just read in the data as is.   Current delimiter:   Versus:   No delimiter:   Now just add a RecordID tool and Filter tool so we can filter on RecordID 2 (or 1478254) to find the row that is holding us up.     Now that you’ve identified the row that is causing issues, you can go back to your original data, correct it, and try importing it again.    If you were looking closely at the example data, you may have noticed that record 4 also had a different number of fields than row 1. After correcting row 2 and importing again, we would get an error for row 4. Just follow the same procedure to correct additional errors. This could probably be done through an automated/iterative process, but that will have to wait for another time.   (attached is an example workflow created in Alteryx 10.0)  
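The same diagnosis can be sketched in Python: read the file without assuming anything about structure, count delimiters per row, and report the record numbers whose field count differs from the header's. The sample data is invented:

```python
import csv
import io

# Hypothetical file: data rows 2 and 4 carry an extra delimiter.
data = "a,b,c\n1,2,3\n4,5,6,7\n8,9,0\n1,2,3,4\n"

reader = csv.reader(io.StringIO(data))
header = next(reader)

# Record numbers (counting data rows from 1, like the RecordID tool)
# whose field count differs from the header row's.
bad = [i for i, row in enumerate(reader, start=1)
       if len(row) != len(header)]

print(bad)  # [2, 4]
```

Note this one pass finds *all* offending rows at once, which is the automated version of the "fix row 2, re-import, discover row 4" loop described above.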
View full article
Sometimes, especially when interacting with tools/files outside Alteryx, you need to explicitly control the order of events. The classic example is a process whereby you 1) INSERT into a SQL table where an auto-incrementing ID is applied, 2) read the records back in order to get the auto-incremented ID, and 3) insert the auto-incremented ID into a different table.   In Alteryx workflow streams that terminate (as in an Output or Render), you can't reliably use Block Until Done to control the stream. One way to reliably control the order of events is with batch macros.   Batch macros guarantee that 1) every iteration is complete before the next iteration begins, and 2) all iterations are complete before the macro reports itself complete. So if you place a process inside a batch macro and create an output stream from that macro, all processing will be done, and the macro can still exist in the middle of a workflow stream. You don't even need a Block Until Done as part of the process.   Please see the attached example (authored in 10.0).
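The three-step process itself, outside of any sequencing concerns, looks like this against an in-memory SQLite database (table and column names are invented for the example):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, item TEXT)")
con.execute("CREATE TABLE audit  (order_id INTEGER)")

# Step 1: INSERT into a table where an auto-incrementing ID is applied.
cur = con.execute("INSERT INTO orders (item) VALUES ('widget')")

# Step 2: read the generated ID back.
new_id = cur.lastrowid

# Step 3: insert that ID into a different table.  In Alteryx, wrapping
# step 1 in a batch macro is what guarantees it finishes before
# steps 2-3 run.
con.execute("INSERT INTO audit (order_id) VALUES (?)", (new_id,))

print(con.execute("SELECT order_id FROM audit").fetchone())  # (1,)
```

In a single-threaded script the ordering is free; the batch macro is how you get the same guarantee inside a dataflow engine that otherwise runs streams concurrently.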
View full article
With the release of 10.0, people are naturally interested in compatibility with workflows they have developed in 9.5. Additionally, many users will be in a “mixed version” environment as their company transitions to 10.0 and are interested in what options are available to them.   Applications developed in 9.5 can certainly be opened and run in 10.0. We perform a lot of testing and technical work to ensure that forward compatibility is fully functional. Anything that was built in 9.5 should continue to work in 10.0.   Undoubtedly, the recommendation is to perform the 10.0 upgrade across the organization when all users are ready for the change. This eliminates any issues with “backward compatibility” and makes it easy and seamless to share applications - workflows, macros, etc.   If you do find yourself operating in a mixed environment, there are some things to be aware of:   A 9.5 app that is opened in 10.0, then saved in 10.0, will no longer be accessible via 9.5. If you publish apps to a server, you will want to make sure that the server version is compatible with the apps you will be publishing. If you have 9.5 apps, then you can leverage both a 9.5 server as well as a 10.0 server; however, if you’re developing in 10.0, then the server will also need to be 10.0. If you need to support both environments for a period of time, it is possible to install two versions of Alteryx on a single machine: Admin and Non-Admin versions can be installed side by side (e.g. if a user has the Admin version of 9.5, they can install the Non-Admin version of 10.0 and switch between them). Recognize that if you publish apps to the server, you will need to be running the Admin version of the same version as what is running on the server.
View full article
With Alteryx, the power to blend, clean and perform advanced analytics on disparate data is as easy as dragging and dropping tools with the click of a mouse. The new release of Alteryx Designer 10.0 expanded our predictive analytics tools to include the MB Affinity, Network Analysis, In-DB Linear and In-DB Logistic regression tools.   Alteryx also has the flexibility to add R packages not integrated with our robust collection of predictive tools. A question arose from one of our clients asking where they can find exponential (non-linear) regression. The recommended R package for this type of analysis is “nlstools”.   For more information on this R package, please see the link below for reference: https://cran.r-project.org/web/packages/nlstools/index.html   Also, please find the link for the Alteryx Gallery app below for reference on how to install additional R packages. If you need help with the process, please let us know, and we can get our R experts in Customer Support to assist with the setup. Just email us at support@alteryx.com.   https://gallery.alteryx.com/#!app/Install-R-Packages/57bb2a58a248970b4472c2e6   Tony Moses Client Services Representative
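To give a feel for exponential regression, here is a minimal Python sketch using the classic log-linear trick: take logs of y, fit a straight line by ordinary least squares, and recover the exponential parameters. Note this linearization is an approximation; nlstools' nls() fits the exponential model directly, which behaves better when noise is additive in y. The data here is synthetic:

```python
import math

# Synthetic data following y = a * exp(b * x) with a = 2, b = 0.5.
xs = [0, 1, 2, 3, 4]
ys = [2 * math.exp(0.5 * x) for x in xs]

# Log-linear fit: ln(y) = ln(a) + b*x, solved by ordinary least squares.
n = len(xs)
mx = sum(xs) / n
my = sum(math.log(y) for y in ys) / n
b = (sum((x - mx) * (math.log(y) - my) for x, y in zip(xs, ys))
     / sum((x - mx) ** 2 for x in xs))
a = math.exp(my - b * mx)

print(round(a, 6), round(b, 6))  # 2.0 0.5
```

On noise-free data the fit recovers the parameters exactly; on real data, prefer the direct nonlinear fit that nlstools provides.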
View full article
Beginning with Alteryx 10.0, the default position of the Results window (formerly the Output window) is across the bottom of the canvas, as shown below:     Placing the Results window across the bottom of the canvas helps make workflow development easier. It gives you a wide view of your data, and it allows you to reference incoming results while configuring a tool.   Users who migrated to 10.0 from a previous version may find that the Results window is positioned in the bottom left corner, directly below the Properties window.   We recommend that you move the Results window to the new default position, and there are three different ways to do this:   1) Restore default settings using the menu ( Options > User Settings > Restore Defaults ). Note: This will restore all user setting defaults.   2) Assign a position from a drop-down menu. Click the down arrow in the Results window title bar (circled in red) and choose Dock to > Bottom.   3) Move the window manually. To do this: Double-click the title bar of the Results window to un-dock it. Click and hold the title bar to drag it around. An interface overlay will appear, showing different docking positions. Drag the window to the interface overlay for the bottom docking position. When your mouse cursor is over the interface overlay, a blue highlight will appear. Release the mouse button. The window will snap to the position indicated by the blue highlight:     By experimenting with these controls, you may find other window layouts that work even better for you. For example, users with more than one monitor will often move the Results window to their second screen.    
View full article
Comparing spatial objects for equality is very subtle, because you have to define what is meant by equal.  In the case of polygons, you can have 2 different polygons that describe the same shape but have a different start and end point.  Most people would say these are the same, but if you compare them exactly, they are in fact different.  For instance:   [ -104, 39], [ -105, 39 ], [ -104, 38 ], [ -104, 39 ] [ -105, 39 ], [ -104, 38 ], [ -104, 39], [ -105, 39 ]   These 2 sets of points describe the same shape, just from different starting points.  The same problem exists with lines - the same line specified backwards and forwards looks the same on a map, but the two would not be literally equal.   The easiest way to test for equality is to test: does object A contain object B, and also, does object B contain object A?  If they both contain each other, they must cover the exact same area on a map.   In a Formula or Filter tool, you can say:  ST_Contains([A],[B]) AND ST_Contains([B],[A])   In a SpatialMatch tool, you have to use a custom DE-9IM string to describe the match you want.  In this case, use the string "T*F**FFF*".  I won't try to document that string - see https://en.wikipedia.org/wiki/DE-9IM for a reference of how that works.   Attached is a module (for Alteryx 10) that demonstrates both techniques:
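The two rings above can be shown equal in Python by normalizing each ring to a canonical form (drop the repeated closing vertex, then rotate so the smallest vertex comes first). This handles the different-start-point case only; it does not handle reversed winding order, which is why the mutual ST_Contains test is the general-purpose answer:

```python
def normalize(ring):
    """Canonical form of a closed ring: drop the repeated closing
    vertex, then rotate so the smallest vertex comes first."""
    pts = ring[:-1] if ring[0] == ring[-1] else ring[:]
    i = pts.index(min(pts))
    return pts[i:] + pts[:i]

# The two rings from the example above.
a = [(-104, 39), (-105, 39), (-104, 38), (-104, 39)]
b = [(-105, 39), (-104, 38), (-104, 39), (-105, 39)]

print(a == b)                        # False: raw coordinate lists differ
print(normalize(a) == normalize(b))  # True: same shape
```

A naive coordinate comparison says the polygons differ, while the canonical forms agree, which is exactly the subtlety the article describes.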
View full article
Question  How do you round a number up?   Answer Be sure to first understand that different people/organizations have different definitions of what they mean by rounding.  Once you are clear on what you need, you can use Alteryx to round your numbers.   Example 1: Divide by the place you want to round to, use CEIL to round up, then multiply back by the place:   324 to 400 CEIL([Number] / 100) * 100   Example 2: Ceiling and floor just go to the nearest integer.  The ROUND(x, multiple) function will allow you to specify the level of precision on which to round, very similar to Excel.  But you can't specify the direction, so in your example if you did ROUND(3425.123,1000) it would return 3000.   Example 3:  Please see the attached v10.0 workflow, which is more dynamic. The text input identifies the mapping for the conversion, joined with the length of the source input - then an expression is applied in the Formula Tool.         The final output.      More on rounding.
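Example 1's divide/CEIL/multiply pattern translates directly to Python:

```python
import math

def round_up_to(n, place):
    # Divide by the place, take the ceiling, multiply back --
    # the CEIL([Number] / 100) * 100 pattern from Example 1.
    return math.ceil(n / place) * place

print(round_up_to(324, 100))        # 400
print(round_up_to(3425.123, 1000))  # 4000
```

Compare the second call with ROUND(3425.123, 1000), which rounds to the *nearest* multiple and so returns 3000; forcing the ceiling is what guarantees rounding up.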
View full article
A great new feature in v10 is the ability to turn on performance profiling to see which tools are taking up most of the time in your workflow.   You can turn it on in the Runtime section of the Workflow Configuration:       This setting adds a profile of the tool times (in descending order) to your output log:     Thanks to @ScottL for mentioning this at the Analytic Independence Roadshow in London. It's really useful for helping optimise workflows, and as he pointed out, the screenshot shows off the speed of the regex engine too!    
View full article