This post is part of the "Guide to Creating Your Own R-Based Macro" series.
There are two major repositories of R packages: CRAN (the Comprehensive R Archive Network) and Bioconductor. The Bioconductor repository has over 1000 packages, which focus specifically on bioinformatics-related applications, while CRAN does not focus on a specific application area and has over 6000 contributed packages. In general, the functionality you will want to bring to Alteryx via R will come from a package on the CRAN repository.
With over 6000 packages, searching for a CRAN package with specific functionality by browsing the contents of the CRAN repository is not very practical. The two ways I recommend for finding a relevant package are looking at the appropriate "Task View" (a description of available packages that address a particular application area) or doing a web search on the feature you hope to obtain, with "R" added to the search string.
For this macro, I used the web search approach and entered the search string "entropy information gain R" into my preferred search engine. The first hit on this search was a link to the CRAN package FSelector. Examining the documentation for this package revealed that it delivers the desired functionality through a function called information.gain, which is one of three entropy-based measures the package provides (the other two are the gain ratio and symmetrical uncertainty). All three of these functions take as arguments a formula of the form
target ~ predictor1 + predictor2 +...+ predictorN
and an R data frame (R's equivalent of a data table) containing the data. The output of each of these functions is a data frame with a single column containing the value of the selected measure, and one row for each of the predictor fields. The predictor field names are contained in the row.names metadata element of the data frame. We will make use of this information in creating an Alteryx macro to wrap this functionality.
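Based on that documentation, the call pattern looks roughly like this (a sketch assuming FSelector and its Java-based dependencies are installed; the built-in iris data stands in for your own):

```r
library(FSelector)

# All three measures share the same formula / data-frame interface
weights <- information.gain(Species ~ ., data = iris)
print(weights)                    # single column of scores, one row per predictor

# The predictor names live in the row.names metadata, not in a column
predictors <- row.names(weights)
```

The same pattern works for gain.ratio and symmetrical.uncertainty; only the function name changes.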
The FSelector package provides exactly what we need, so it is time to install it. There are a number of ways to install an R package so that it can be used with Alteryx. The one complication that can arise is on machines where multiple copies of R are installed. For users not using Microsoft R, the Alteryx predictive installer places the R executables within the Alteryx installation (usually C:\Program Files\Alteryx). To make sure you are installing packages into the version of R that Alteryx is using, open a command prompt and enter the command
making sure to use the quotes. This will bring up the R console program. In the console window, type the command
install.packages("FSelector")
This will bring up a GUI asking you to select a CRAN mirror to download the package from, along with its dependencies (there are several). Select a mirror that is geographically close to you for best performance. In addition, the FSelector package makes use of several other packages that call Java, so you also need to have a JVM installed on your computer to create and use this macro (I'd recommend the Windows x64 Offline version available here).
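If you prefer to skip the mirror-selection GUI, you can name a mirror directly in the call (the URL below is just one example; any CRAN mirror works):

```r
# dependencies = TRUE also pulls in the packages FSelector itself needs
install.packages("FSelector",
                 repos = "https://cran.r-project.org",
                 dependencies = TRUE)
```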
Once R is done downloading and installing the packages, make sure that FSelector and all its dependencies were correctly installed. To do this, in the R console enter the command
library(FSelector)
This will cause R to load the FSelector package. If you get an error message that some packages were not available (one possibility is the RWekajars package), install them using the install.packages command in the R Console. Once the needed packages have been installed, you can exit the R Console program.
Most of the Alteryx advanced analytics capabilities - including most of the tools in the Predictive, AB Testing, Time Series, Predictive Grouping, and Prescriptive categories - are built as R-based macros under the hood. If there's a piece of functionality that you're looking for that's lacking in Alteryx but is available in R and you have modest R coding abilities, you can extend Alteryx by creating your own R-based Alteryx tool.
The macro creation process involves four steps (quick links to the guides in the series):
Find and install an appropriate R package to provide the needed functionality.
Develop an Alteryx workflow that makes use of the relevant R functions via the use of an R tool. This workflow becomes the basis of the macro.
Create a macro that provides the basic functionality you want, and test it in a new workflow.
Polish the macro by documenting it, giving it the ability to generate a report, and making other refinements.
The various Alteryx files created in this tutorial are attached to this post.
Once you've created the new tool, don't forget to share it with the wider community by publishing it to the Alteryx Analytics Gallery.
Recently I have been working with an existing customer that is considering expanding the use of Alteryx within their organization to include other groups. Some of those groups focus on developing predictive analytics models, and their members currently use a number of different software products. As a result, there are certain features in those products that they rely on which are not available "out of the box" in Alteryx. While these features are heavily used by some members of this group, they aren't as widely used in general. A trade-off we face in developing Alteryx is providing generally needed functionality without blowing up the number of available tools to the point where their sheer number becomes overwhelming to new Alteryx users.
In a number of instances we have developed new tools at the request of customers to address their needs, providing them with the tools immediately, and then folding them into a subsequent release of the product or publishing them to the Predictive District on the Alteryx Analytics Gallery. A particular case in point is the MB Affinity tool, which was part of the 10.0 release of Alteryx. The MB Affinity tool provides cosine similarity/distance measures for items. This is a common method used in creating recommendation systems of the "people who bought this item also bought" variety.
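As a rough illustration of the measure behind that tool (not the tool's actual implementation), cosine similarity between two item vectors is the dot product scaled by the vector magnitudes:

```r
# Cosine similarity: 1 for vectors pointing the same way, 0 for orthogonal ones
cosine_sim <- function(a, b) {
  sum(a * b) / (sqrt(sum(a^2)) * sqrt(sum(b^2)))
}

cosine_sim(c(1, 0, 2), c(2, 1, 2))
```

In a recommendation setting, each vector might hold an item's purchase counts across customers, so items bought together score close to 1.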
Getting back to the issue faced by our customer's predictive analytics team: one feature of another product they currently use, which isn't pre-packaged in Alteryx, is a tool that examines the importance of potential numeric predictors for a categorical target field using an entropy-based measure known as information gain or Kullback–Leibler divergence. In this series, I illustrate how to create an Alteryx macro that provides this measure.
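To make the measure concrete, here is a hand-rolled sketch of information gain for a single categorical predictor (illustrative only — the macro itself will rely on FSelector, which also handles discretizing numeric predictors):

```r
# Shannon entropy of a categorical vector, in bits
entropy <- function(x) {
  p <- table(x) / length(x)
  -sum(p * log2(p))
}

# Information gain: entropy of the target minus the expected entropy
# of the target within each level of the predictor
info_gain <- function(target, predictor) {
  w    <- table(predictor) / length(predictor)   # level proportions
  cond <- tapply(target, predictor, entropy)     # per-level target entropy
  entropy(target) - sum(w * cond)
}
```

A predictor that perfectly splits the target yields a gain equal to the target's entropy; an unrelated predictor yields a gain of zero.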
Alteryx has a full set of integrated predictive tools, but even with developers working at full speed, it is hard to keep up with the R community. Sometimes users want to install and utilize their favorite R packages. This post demonstrates how to install and use additional R packages.
With the release of 11.0, we see numerous changes to many tools in the Designer. The Linear Regression Tool gets a UI makeover, and some cool new features have been added that we will explore in this article. If you are new to performing regression analysis in Alteryx, I highly recommend checking out the Tool Mastery article, which covers everything there is to know about the old tool. Everything presented in that article remains valid, as no features were removed. In this article, we will delve into the changes and new features.
This tool provides a number of different univariate time series plots that are useful in both better understanding the time series data and determining how to proceed in developing a forecasting model.
The Association Analysis Tool allows you to choose any numerical fields and assesses the level of correlation between those fields. You can use the Pearson product-moment correlation, Spearman rank-order correlation, or Hoeffding's D statistic to perform your analysis. You also have the option of doing an in-depth analysis of your target variable in relation to the other numerical fields. After you’ve run through the tool, you will have two outputs:
Question: I am building a forecast for my company using the Time Series forecasting model. The sample workflow that Alteryx currently provides uses one product to forecast. I have multiple products I need to forecast - is there a way I can add a product column so I could forecast for all the products at one time?
Answer: The tools you're looking for are the TS Factory Tools, available in the Predictive District in the Gallery:
These tools estimate time series forecasting models for multiple groups at once using the autoregressive moving average (ARIMA) method or the exponential smoothing (ETS) method; they also provide forecasts from groups of either ARIMA or ETS models for a user-specified number of future periods.
Just like the original tools, the ETS method in the TS Model Factory does not allow fields related to the target variable (covariate fields) to be used in the model creation. However, the Autoregressive Moving Average (ARIMA) method does allow the use of covariates.
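These methods correspond to functions in R's forecast package; a minimal standalone sketch of the two model types (assuming forecast is installed, with the built-in AirPassengers series as stand-in data) looks like:

```r
library(forecast)

fit_arima <- auto.arima(AirPassengers)  # ARIMA; covariates could be passed via xreg =
fit_ets   <- ets(AirPassengers)         # ETS; no covariates, as noted above

forecast(fit_arima, h = 12)             # forecasts for 12 future periods
```

The TS Factory tools essentially repeat this fit-and-forecast step once per group in the data.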
There's a sample workflow that demonstrates these tools with a use case involving bookings and website traffic for a hotel chain with four locations in the Denver metro area.
Linear regression is a statistical approach that seeks to model the relationship between a dependent (target) variable and one or more predictor variables. It is one of the oldest forms of regression and its applications throughout history have been endless for modeling all kinds of phenomena. In linear regression, a line of best fit is calculated using the least squares method . This linear equation is then used to calculate projected values for the target variable given a set of new values for the predictor variables.
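In R terms, the equivalent is a call to lm() followed by predict(); a minimal sketch using the built-in mtcars data:

```r
# Fit fuel economy (mpg) as a linear function of weight and horsepower
fit <- lm(mpg ~ wt + hp, data = mtcars)

coef(fit)   # least-squares intercept and slopes

# Projected mpg for a new set of predictor values
predict(fit, newdata = data.frame(wt = 3, hp = 120))
```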
Sampling weights, also known as survey weights, are positive values associated with the observations (rows) in your dataset (sample), used to ensure that metrics derived from a data set are representative of the population (the set of observations).
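A tiny base-R illustration of how such weights change a summary statistic:

```r
x <- c(10, 20, 30)   # observed values
w <- c(5, 1, 1)      # sampling weights: the first row represents 5 population members

mean(x)              # unweighted mean: 20
weighted.mean(x, w)  # pulled toward the heavily weighted observation
```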
Neural Networks are frequently referred to as "black box" predictive models. This is because the actual inner workings of why a Neural Network sorts data the way it does are not explicitly available for interpretation. A wide variety of work has been conducted to make Neural Networks more transparent, ranging from visualization methods to developing a Neural Network model that can "show its work". This article demonstrates how to leverage the NeuralNetTools R package to create a plot of the Neural Network trained by the Alteryx Neural Net tool.
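Outside of Alteryx, the same package can be exercised directly (a sketch assuming the nnet and NeuralNetTools packages are installed; iris stands in for your own data):

```r
library(nnet)
library(NeuralNetTools)

set.seed(1)
mod <- nnet(Species ~ ., data = iris, size = 3, trace = FALSE)

# Draws the network: input nodes, hidden layer, output nodes,
# with edge thickness reflecting the magnitude of each weight
plotnet(mod)
```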
The Alteryx Forest Tool implements a random forest model using functions in the randomForest R package. Random forest models are an ensemble learning method that leverages the individual predictive power of decision trees into a more robust model by creating a large number of decision trees (i.e., a "forest") and combining all of the individual estimates of the trees into a single model estimate. In this Tool Mastery, we will be reviewing the configuration of the Forest Model Tool, as well as its outputs.
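The underlying call the tool wraps looks roughly like this (a sketch assuming the randomForest package is installed):

```r
library(randomForest)

set.seed(1)
rf <- randomForest(Species ~ ., data = iris, ntree = 500, importance = TRUE)

print(rf)        # out-of-bag error estimate and confusion matrix
importance(rf)   # per-variable importance across the ensemble
```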
Typically the first step of Cluster Analysis in Alteryx Designer, the K-Centroids Diagnostics Tool assists you in determining an appropriate number of clusters to specify for a clustering solution in the K-Centroids Cluster Analysis Tool, given your data and specified clustering algorithm. Cluster analysis is an unsupervised learning algorithm, which means that there are no provided labels or targets for the algorithm to base its solution on. In some cases, you may know how many groups your data ought to be split into, but when this is not the case, you can use this tool to identify the number of clusters your data most naturally divides into.
In statistics, standardization (sometimes called data normalization or feature scaling) refers to the process of rescaling the values of the variables in your data set so they share a common scale. Often performed as a pre-processing step, particularly for cluster analysis, standardization may be important to getting the best result in your analysis depending on your data.
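In base R this is what scale() does: subtract each column's mean and divide by its standard deviation, so every variable ends up with mean 0 and standard deviation 1:

```r
z <- scale(iris[, 1:4])   # z-score standardization of the numeric columns

round(colMeans(z), 10)    # all approximately 0
apply(z, 2, sd)           # all exactly 1
```

Without this step, distance-based methods such as cluster analysis would be dominated by whichever variable happens to have the largest raw scale.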
Clustering analysis has a wide variety of use cases, including harnessing spatial data for grouping stores by location, performing customer segmentation, or even insurance fraud detection. Clustering analysis groups individual observations in a way that each group (cluster) contains data that are more similar to one another than the data in other groups. Included with the Predictive Tools installation, the K-Centroids Cluster Analysis Tool allows you to perform cluster analysis on a data set with the option of using three different algorithms: K-Means, K-Medians, and Neural Gas. In this Tool Mastery, we will go through the configuration and outputs of the tool.
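For comparison, base R's kmeans() performs the K-Means variant directly (K-Medians and Neural Gas require additional packages); a sketch on the standardized iris measurements:

```r
set.seed(1)
km <- kmeans(scale(iris[, 1:4]), centers = 3, nstart = 25)

# Compare the discovered clusters against the known species labels
table(cluster = km$cluster, species = iris$Species)
```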
The Neural Network Tool in Alteryx implements functions from the nnet package in R to generate a type of neural network called a multilayer perceptron. By definition, neural network models generated by this tool are feed-forward (meaning data only flows in one direction through the network) and include a single hidden layer. In this Tool Mastery, we will review the configuration of the tool, as well as what is included in the Object and Report outputs.
With the introduction of the Predictive Analytics Starter Kit, you can enhance your analytic skills through an interactive, guided starter kit that teaches core predictive modeling techniques (A/B testing, linear regression, and logistic regression).
R is an open-source programming language and software environment, specifically intended for statistical computing and graphics. The Alteryx Predictive Tools install includes an installation of R, along with a set of R packages used by the Predictive Tools. This article describes how to determine which R packages (and versions) are installed for use with your Alteryx R Tool, as well as a few Alteryx-specific packages on GitHub.
You want to impress your managers, so you decide to try some predictions on your data – forecasting, scoring potential marketing campaigns, finding new customers… That's great! Welcome to the addictive world of predictive analytics. We have the perfect platform for you to start exploring your data.
I know you want to dive right in and start testing models. It's tempting to just pull some data and start trying out tools, but the first and fundamentally most important part of all statistical analysis is the data investigation.
Your models won't mean much unless you understand your data. Here's where the Data Investigation Tools come in! You can get a statistical breakdown of each of your variables, both string and numeric, check for outliers (categorical and continuous), test correlations to slim down your predictors, and visualize the frequency and dispersion within each of your variables.
Part 1 of this article will give you an overview of the Field Summary Tool (never leave home without it!). Part 2 will touch on the Contingency and Frequency Tables and Distribution Analysis; Part 3 will cover the Association Analysis Tool and the Pearson and Spearman Correlations; and Part 4 will be all the cool plotting tools.
Always, every day, literally every time you acquire a new data set, you will start with the Field Summary Tool. I cannot emphasize this enough, and I promise it will save you headaches.
There are three outputs to this tool: a data table containing your fields and their descriptive statistics, a static report, and an interactive visualization dashboard that provides a visual profile of your variables. From this output, you can select subsets to view, sort each of the panels, view and zoom in on specific values, and it even includes a visual indicator of data quality.
You'll get a nifty report with plots and descriptive statistics for each of your variables. Likely the most important part of this report is '% Missing' – ideally, you want 0.0% missing. If you are missing values, don't fret. You can remove these records or impute those values (another reason knowing your data is so important).
Also check 'Unique Values' – if you have a single unique value in one of your variables, that won't add anything useful to your model, so consider deselecting that variable.
The Remarks field is also very useful – it will suggest field-type changes, for example flagging a field with a small number of unique values that should perhaps be a string field. Or, if some values of your field have small counts, you may consider combining some value levels together.
The better YOU know your data, the more efficient and accurate your models will be. Only you know your data, your use case, and how your results are going to be applied. But we're here to help you get as familiar as you can with whatever data you have.
Stay tuned for subsequent articles – these tools will be your new best friends. Happy Alteryx-ing!
The Append Cluster Tool is effectively a Score Tool for the K-Centroids Cluster Analysis Tool. It takes the O anchor output (the model object) of the K-Centroids Cluster Analysis Tool, and a data stream (either the same data used to create the clusters, or a different data set with the same fields), and appends a cluster label to each incoming record. This Tool Mastery reviews its use.