You know what they say: the devil is in the details. Precious little separates a good report from a great one; it often comes down to just a few details. Don't let a report header be the detail you left out – use the Report Header Tool to painlessly add header elements to your reporting creations!
With all the bells and whistles to play around with in the Reporting Tool Category, it's hard not to leave some out of your reporting workflows every now and again. Just don't forget about the Report Text Tool – the tool that'll help you painlessly add text to your reporting objects, presentations, or documents to spruce up their readability or formatting.
Believe it or not, data can be beautiful. Take your black and white data points and add some color to them in visuals with the suite of tools found in the Reporting Category (https://help.alteryx.com/current/index.htm#Getting_Started/AllTools.htm#Report_Presentation_Tools)! If you're looking to create reports, presentations, images, or simply output data with a bang, you can use the Render Tool (https://help.alteryx.com/current/PortfolioComposerRender.htm) paired with other Reporting Tools to create HTML files (*.html), Composer files (*.pcxml), PDF documents (*.pdf), RTF documents (*.rtf), Word documents (*.docx), Excel documents (*.xlsx), MHTML files (*.mht), PowerPoint presentations (*.pptx), PNG images (*.png), and even Zip files (*.zip) – packed with formatting and visual aesthetics that'll make any data geek's mouth water.
When you're frequently writing and rewriting data to Excel spreadsheets that feed Excel graphs and charts, it can quickly become a hassle to remake your reporting objects so they visualize the most recent data. There's a best practice to keep the hassle out of the process, though! If you keep your plots isolated on their own sheet, referencing cell values in a separate sheet used to capture your data, you can simply overwrite the source data sheet and your plots will update automatically when you next launch Excel. In the example below (attached in the v10.6 workflow Dynamically Update Reporting from Excel Spreadsheets.yxzp) we've included the workaround to make your Excel outputs seamless.
The Auto Field Tool: a tool so easy you don't have to do anything – just put it on your canvas and voilà, automatically optimized data types. If you're running into data type related issues and errors in your workflows, or just looking to add some speed or reduce the disk space your data is hoarding – look no further than the Preparation Tool Category's Auto Field Tool, which reads through all the records of an input and sets each field type to the smallest possible size relative to the data contained within the column.
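For a sense of what that optimization looks like, here's a minimal Python sketch (an illustration, not Alteryx internals) that picks the smallest field type able to hold a column of integers, using type names that mirror Alteryx's Byte/Int16/Int32/Int64:

```python
def smallest_field_type(values):
    """Pick the smallest type that can hold every value in the column,
    roughly what the Auto Field Tool does for numeric fields.
    Byte is unsigned (0-255); the Int types are signed."""
    lo, hi = min(values), max(values)
    if 0 <= lo and hi <= 255:
        return "Byte"
    for name, bits in (("Int16", 16), ("Int32", 32), ("Int64", 64)):
        if -(2 ** (bits - 1)) <= lo and hi < 2 ** (bits - 1):
            return name
    return "Double"  # fall back to a wide numeric type

print(smallest_field_type([0, 42, 255]))        # Byte
print(smallest_field_type([-40000, 40000]))     # Int32
```

The real tool also sizes string fields to their longest value; the same range-scan idea applies.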
This article is part of the Tool Mastery Series, a compilation of Knowledge Base contributions to introduce diverse working examples for Designer Tools. Here we’ll delve into uses of the Image Tool on our way to mastering the Alteryx Designer:
A picture is worth a thousand words, right? Save your breath and snap a picture to supplement your analyses and reports with the Image Tool, the camera icon tool residing next to all your other reporting needs in the Reporting Tool Category. Whether you’re looking to build a presentation, report, or email from scratch, or simply add graphics to accentuate your raw data – this tool will make it a breeze to access image files from disk, store image files in physical workflows, or dynamically access image files (even in Blob format!) to pair with any Alteryx output. Use the Image Tool to:
Add visual assets to reports and presentations (attached in the v10.6 Image.yxmd):
Perform dynamic image substitutions (attached in the v10.6 Image.yxmd):
Supplement reporting tables with graphics to make raw data more readable
By now, you should have expert-level proficiency with the Image Tool! If you can think of a use case we left out, feel free to use the comments section below! Consider yourself a Tool Master already? Let us know at firstname.lastname@example.org if you’d like your creative tool uses to be featured in the Tool Mastery Series.
Stay tuned with our latest posts every #ToolTuesday by following @alteryx on Twitter! If you want to master all the Designer tools, consider subscribing for email notifications.
When it comes to spatial analyses, few tools come up more than the Trade Area Tool . Whether you’re looking to pad polygons around your spatial objects in distance or drive time, you won’t need to make a trade-off - just the Trade Area Tool.
The Find Replace Tool is one of those tools that goes relatively unused and uncelebrated until you stumble into a data blending technique that would be extremely difficult without it – at which point, it becomes your favorite tool in the Designer. You can find it in the Join Category and it’ll make easy string substitutions in your data that would otherwise require herculean effort to work around. Today, we celebrate Find Replace as a hero.
A common application in spatial analytics is to visualize and analyze data in multi-ring trade areas. This type of analysis is helpful for analyzing data at incrementally increasing distances from a spatial object. Examples might include quantifying the number of customers within a 10-, 20-, and 30-minute drive time, analyzing the demographics of a population within certain distances of a location, or visualizing the strength of a cell tower signal with increasing distance from the tower. Depending on the analysis to be done, the multiple Trade Areas may need to be configured as non-overlapping (separate rings) rather than nested as concentric circles. Using the Trade Area Tool, both types of spatial objects can be created!
Creating Multiple Trade Areas (Overlapping)
To create multiple Trade Areas as nested concentric circles, select the option to specify the Trade Area radius as a 'Specific Value' and enter the radii for each Trade Area. A polygon output will be created for each specified Trade Area radius. The sample configuration (Figure 1) creates three polygons (5-mile, 10-mile, and 15-mile radii). Polygons are visualized in the order that they are listed; to create a "bullseye", list the radii for the Trade Areas in descending order. Note that the larger Trade Areas include the areas of the smaller ones (i.e., the 10-mile Trade Area includes the same area covered by the 5-mile Trade Area). To avoid "double counting" an area, consider creating non-overlapping trade areas (described below).
Figure 1: Create multiple Trade Areas around a point spatial object with the Trade Area tool. List the multiple radii (and specify the unit of miles, kilometers or DriveTime minutes), separating each with a comma.
Creating Non-Overlapping Trade Areas:
To create multiple non-overlapping Trade Areas ("doughnuts"), select the option to specify the distance ranges as 'Specific Values' and enter the distance ranges for each ring. Distance intervals should be specified as ranges (0-5 and 5-10), with each range separated by a comma (Figure 2). As a result, each ring is created as an individual polygon that begins at the minimum of its range and ends at the maximum of its range.
Figure 2: Each "doughnut" is specified as a distance range to create non-overlapping spatial objects. The highlighted "doughnut" below represents the second distance range (5-10 miles) from a point spatial object.
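To see how the non-overlapping ranges behave, here's a small Python sketch (an illustration, not the Trade Area Tool's implementation) that assigns a distance to its ring given the same 0-5 / 5-10 style boundaries:

```python
from bisect import bisect_right

def ring_label(distance, edges=(0, 5, 10, 15)):
    """Assign a distance to its non-overlapping ring ("doughnut").
    Edges (0, 5, 10, 15) give rings 0-5, 5-10, and 10-15; a boundary
    value such as 5 falls into the outer ring (5-10)."""
    i = bisect_right(edges, distance)
    if i == 0 or i == len(edges):
        return None  # outside every ring
    return f"{edges[i - 1]}-{edges[i]}"

print(ring_label(3))    # 0-5
print(ring_label(7))    # 5-10
print(ring_label(20))   # None
```

Because the rings don't overlap, each distance lands in exactly one ring, which is what prevents the "double counting" described above.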
* For additional information on tool configuration, see the attached workflow (created in v10.6).
In a workflow, not too far, far away...
Structured data has vanished. In its absence, the sinister Dirty Data Order has risen from the ashes of the Server and will not rest until Data Analytics have been destroyed.
With the support of the Alteryx Engineers, Solutions Engineer Tony Moses leads a brave RESISTANCE. He is desperate to find structured data and gain its help in restoring blending, joining and analytics to the galaxy.
Tony has sent his most daring Community Leader, Matt DeSimone, on a secret mission to Jakku, where an old ally has discovered a clue to the structured data whereabouts....
Welcome to the Star Wars universe!
Ever wanted to know the most important details of your favorite characters from Star Wars? Me too!
Our generous friends, Paul Hallett and team, have given us the Star Wars API - the world's first quantified and programmatically-accessible store of Star Wars data.
After hours of watching films and trawling through content online, Paul presents us all the People, Films, Species, Starships, Vehicles and Planets from Star Wars.
The data is formatted in JSON and exposed to us through a REST implementation that allows us to programmatically collect and measure it.
Now, how was I able to retrieve this treasure of information via Alteryx? Easy! I've built a REST API connection using the Download Tool to pull information based on a user-entered query in an Alteryx Application (attached as v2018.1 Star Wars.yxwz).
Normally, once you've retrieved JSON-formatted data, structuring and parsing it would be a nightmare! With Alteryx, this is just one tool away. The JSON Parse Tool allows you to identify the JSON field (in this case, our download data field) and easily extract Name and Value columns. From there, it's some simple formatting and a few Reporting Tools to present a nice, clean Composer file (*.pcxml).
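Conceptually, that Name/Value extraction is a flattening of nested JSON into key paths. Here's a rough Python sketch of the idea, using a hypothetical SWAPI-style record embedded as text rather than a live API call:

```python
import json

def json_to_name_value(obj, prefix=""):
    """Flatten parsed JSON into (Name, Value) rows, roughly how the
    JSON Parse Tool presents nested data (dots and indices in the Name)."""
    rows = []
    if isinstance(obj, dict):
        for key, val in obj.items():
            rows += json_to_name_value(val, f"{prefix}.{key}" if prefix else key)
    elif isinstance(obj, list):
        for i, val in enumerate(obj):
            rows += json_to_name_value(val, f"{prefix}.{i}")
    else:
        rows.append((prefix, obj))
    return rows

# Sample values only, not real API output:
payload = json.loads('{"name": "Luke Skywalker", "films": ["A New Hope", "Jedi"]}')
for name, value in json_to_name_value(payload):
    print(name, "=", value)
```

Each scalar ends up as one Name/Value row, which is exactly the long, skinny shape that downstream formatting tools can work with.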
Man, if only the Rebels could process information as fast as Alteryx, they wouldn't have had to send poor R2 to find Obi-Wan.
I'll be bringing you, the Alteryx Community, updates of the app with each new movie release!
I hope you enjoy the API and may the Force be with you!
Question How can I create a Cartesian join of my data?
Answer A Cartesian join, also known as a Cartesian product, is a join of every row of one table to every row of another table. For example, if table A has 100 rows and is joined with table B, which has 1,000 rows, a Cartesian join will result in 100,000 rows. This type of join can be useful when you need to produce a dataset that contains every combination of two or more tables. You can continue to add to the join by adding additional Append Tools. Simply connect the output of the first Append Tool to the Target Input of a second Append Tool and connect the third table to the Source Input of the second Append Tool. Continue this configuration for as many tables as necessary.
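Outside of Alteryx, the same idea can be sketched in a few lines of Python with `itertools.product`, here with hypothetical lookup tables in the spirit of the car dealer example:

```python
from itertools import product

# Hypothetical lookup tables (made-up values for illustration):
models   = ["Sedan", "Coupe"]
colors   = ["Red", "Blue", "Black"]
packages = ["Base", "Sport"]

# Every row joined to every row: 2 * 3 * 2 = 12 combinations,
# just like chaining Append Tools table by table.
combos = list(product(models, colors, packages))
print(len(combos))   # 12
print(combos[0])     # ('Sedan', 'Red', 'Base')
```

Note how the row count multiplies with each table appended, which is why the record counts can grow so quickly.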
In the example below, a car dealer needed a list of every combination of model, exterior color, interior color and options package. Here is what the four table Cartesian join looks like:
The tables contain the following data:
A sample of the results from the workflow pictured above looks like this:
By default, the Append Tool is set to error on more than 16 appends in order to protect the user from inadvertently producing an extensive amount of records. If your data has more appends than this, you can change this setting as shown below to allow all appends:
This workflow, created in 10.6, is attached.
Sometimes when reading a delimited text file (like a CSV), an error like this may appear: 'Error reading "FILEPATH\MYFILE.txt": Too many fields in record #'
The cause of this error is that too many delimiters were located in that specified record.
For example if I had a text file with the contents below:
You see that at record 6 (assuming x and y are field headers) I have an extra comma. If I’m specifying the delimiter to be a comma, Alteryx has recognized from the first record that there should only be two columns in this dataset. Since it sees three on record 6, it will error.
If you had the opposite case, where records started with three fields each and then one suddenly had two, like this:
You would just get a warning like this:
'Warning: Input Data (1): Record #6: Not enough fields in record'
And the third field of that record will just be null.
So how do we get around this? Well, we could always go into the text file itself and edit it, but that could be cumbersome if the file has a lot of records. Instead, let's just use Alteryx to solve our problems.
First, let’s change our Input Data Tool to bring in the file as a non-delimited text file:
Now we can attach the Text To Columns Tool to separate the fields by our specified delimiter.
Now that our fields have been brought in, it's up to you to decide how to handle that extra field: we could simply delete it or keep it. If you're still wondering about its contents, we can simply use a Filter Tool where the Record ID is set equal to the record number that popped up in the error.
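The same workaround translates directly to code. Here's a Python sketch, assuming a comma delimiter: read each record as a single string (the non-delimited input), then split on the delimiter with a cap so extra delimiters stay in the last column instead of erroring:

```python
def read_ragged_csv(lines, delimiter=","):
    """Mimic reading as non-delimited text then using Text To Columns:
    the header sets the column count, extra delimiters are kept in the
    last column, and short records are padded with nulls."""
    header = lines[0].split(delimiter)
    ncols = len(header)
    rows = []
    for line in lines[1:]:
        parts = line.split(delimiter, ncols - 1)  # cap at ncols pieces
        parts += [None] * (ncols - len(parts))    # pad short records
        rows.append(parts)
    return header, rows

header, rows = read_ragged_csv(["x,y", "1,2", "3,4,5"])
print(rows)   # [['1', '2'], ['3', '4,5']]
```

Record 2 here has an extra comma, like record 6 in the example above, but instead of an error the surplus simply lands in the last field where you can inspect or drop it.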
Attached to this article is a workflow I built in 10.6 that should illustrate this example.
The error above is usually due to a company firewall restriction. If you are receiving it when installing, please download and run the individual installers from this page: http://downloads.alteryx.com/downloads.html. You can choose admin or non-admin depending on the permissions on your machine.
If you are still having trouble downloading, don't hesitate to contact email@example.com.
You have gone through the hard work of creating your macro and now want to make it easy to insert into your workflow time and time again. You may have, up until this point, been right-clicking on the Alteryx canvas and going to Insert >> Macro.
However, you can now create your own macro category by going to Options>>User Settings>>Edit User Settings
Within this settings window you should now be able to see a Macros tab:
At this point you can hit the '+' icon, give a name to your category, and select the file folder where you save your macros.
Then simply press Apply and OK, and the new tool category will appear at the end of your tool palette.
With the release of 10.6 came awesome new features and an upgrade in the underlying R version (from 3.1.3, code-named "Smooth Sidewalk," to 3.2.3, code-named "Wooden Christmas-Tree"). Using an incompatible R version will cause errors in your R macros.
Simply make sure that your Predictive Tools download is the version compatible with your Alteryx Designer version:
Users on 10.5 should continue to use the R3.1.3 version.
When using Alteryx and Microsoft Revolution R Enterprise, a separate predictive tools install is required (in green). For details, see the Alteryx and Revolution Analytics Integration Guide.
And remember to use the non-Admin Predictive version with non-Admin versions of Alteryx Designer.
To install Predictive Tools for Alteryx 10.0, go to Previous Releases. For Alteryx 9.5, within Designer, go to Help > Install Predictive Tools.
Question Can Alteryx run a Powershell script or perform any Powershell specific commands?
Answer Yes! We can use the Run Command Tool to do exactly that.
Note: In order to run PowerShell scripts, you must make sure you have scripting enabled. You should consult with your IT department to see if you are allowed to enable this functionality.
Below is an example I made in 10.6 demonstrating the necessary Run Command Tool configuration. The command will just be "powershell" to enable PowerShell mode in cmd.exe. Then your command arguments should be the path where the script is located so it can run:
In this particular case I want to read into the Designer the results of my script, so I specify the file that is being written as the Read Results. My “helloworld.ps1” script only contains the below:
"Hello World" | Out-File c:\temp\test1.txt
As you can see, this kicks off in the Designer and opens the script output file to continue downstream, successfully implementing PowerShell scripting:
In a recent article (Create an Indexed Map), I mentioned the indexed maps found in the Rand McNally Road Atlas. Well, also found in the Road Atlas are mileage charts, or, distance matrices. These matrices can be easily created in Alteryx. The example below will provide the distance between every store in a dataset to every store in that same dataset. Here's how we did it.
Distance Matrix/Mileage Chart Example
Create a Cartesian Join of your data
Using the Append Tool, create a Cartesian Join of all of the records in the dataset.
This will give you a combination of every record to every record in your dataset.
Don't forget to "Allow All Appends" (learn more about creating Cartesian Joins here).
Measure the distances between all record combinations
Using the Distance Tool, measure the distances between the point combinations.
Flip the data into columns using the Cross Tab Tool
Use a Select Tool to change the Store Number column to a string
Doing this will prevent the Table Tool from adding commas to this field.
Use a Table Tool to create a formatted table
Add a Column Rule to the Store Number field to format the column as bold.
Create a Row Rule to force one decimal place on the distance data.
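For reference, the core of the matrix (the Cartesian join plus a distance calculation per pair) can be sketched in Python. This uses a haversine great-circle distance and made-up store coordinates; the Distance Tool's own calculation may differ:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_miles(a, b):
    """Great-circle distance in miles between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 3958.8 * asin(sqrt(h))  # 3958.8 = mean Earth radius, miles

# Hypothetical store coordinates (store number -> lat, lon):
stores = {101: (39.74, -104.99), 102: (33.45, -112.07), 103: (36.17, -115.14)}

# Every store paired with every store, one distance per pair:
matrix = {a: {b: round(haversine_miles(pa, pb), 1)
              for b, pb in stores.items()}
          for a, pa in stores.items()}
print(matrix[101][102])   # distance between stores 101 and 102
```

The nested dictionary plays the role of the Cross Tab output: rows and columns are store numbers, cells are distances, zeros run down the diagonal, and the matrix is symmetric.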
Here's the workflow (created in 10.6), which you can also find attached:
That's it. Feel free to leave any comments or ask any questions.
This question is highly relevant when looking to avoid Alteryx errors after you have hardcoded a workflow process to expect certain field names. When you are processing multiple files with different field schemas, you don't want Alteryx to stop processing all the files because one file fails. Ideally, you want Alteryx to skip that file and carry on!
Attached is a sample workflow built in Alteryx 10.6 which can help you answer the above question:
The workflow will start with a Directory Tool browsing to the folder location that has all your input files
As this example uses xlsx files, we use a Formula Tool to append the sheet name to the full path
We then feed this data into a Batch Macro
The batch macro allows us to process one file at a time but each file goes through the same process
The input file within the macro is replaced by the incoming file path
We then use a RecordID Tool to allow us to keep the data integrity when we transform the data for the filtering process downstream
In the Transpose Tool, we use the RecordID field as the key field to pivot on, then transpose the rest of the data fields to create a Name column (your field headers) and a Value column (your data). This part stays dynamic when new fields are added: because 'Dynamic or Unknown Fields' is checked, new fields fall into the Name column, which we reference in the Filter Tool.
Within the Filter Tool you can now add in your field variables you need for your workflow
On the true side of the filter you will now have the fields and values which met your criteria, and on the false side you will have the fields which did not.
The ultimate goal though is to bring back the whole dataset if that file had fields which met your criteria
To do this, we use a Formula Tool to add in the file path, which we can use outside of the macro to bring the whole dataset back together.
The final tool is the Cross Tab Tool, which pivots the data back into its original format using the RecordID field as the group-by field.
Save the macro (File>>Save As)
Insert the macro into the workflow and connect it to the Formula Tool. In the configuration of the macro, choose 'Full Path' from the dropdown. This will update the Input Tool and the Formula Tools.
The two outputs on the macro refer to the true and false sides of the filter. You can now use a Join Tool, connecting the true and false outputs to its left and right inputs.
The field you will join on will be the full path and RecordID
Now, if a file met your condition in the filter, it should have values on both the left and right macro outputs. Therefore, in the 'J' output of the Join Tool you should see the data from your successful files.
In the 'R' output you should see all the data from the files which did not meet your condition, as they have nothing to join to on the left ('true') side of the filter.
You can then paste your desired workflow to the 'J' output of the join tool and continue your data process
This will now only allow files with the desired field headers to pass through, preventing your workflow from breaking when files with the wrong field schema come through.
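The macro's header check boils down to a simple schema test per file. Here's a Python sketch of the same skip-don't-break idea, using hypothetical required field names and in-memory CSV text in place of files on disk:

```python
import csv, io

REQUIRED = {"StoreID", "Sales"}   # hypothetical field headers to require

def split_by_schema(files):
    """Route each file by schema instead of erroring: files whose headers
    contain every required field go to 'good', the rest go to 'bad'."""
    good, bad = [], []
    for name, text in files.items():
        header = next(csv.reader(io.StringIO(text)))
        (good if REQUIRED.issubset(header) else bad).append(name)
    return good, bad

files = {
    "jan.csv": "StoreID,Sales\n1,100\n",
    "feb.csv": "Shop,Revenue\n1,100\n",   # wrong schema: skipped, not fatal
}
good, bad = split_by_schema(files)
print(good, bad)   # ['jan.csv'] ['feb.csv']
```

The 'good' list plays the role of the Join Tool's 'J' output and 'bad' plays the 'R' output: processing continues on the files that match, and the mismatches are set aside rather than halting everything.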
Shout-out to Shaan Mistry, who brainstormed this workflow with me.