Engine Works

Under the hood of Alteryx: tips, tricks and how-tos.

Server Runner Macros


This article is an update to a previous blog post, Server Runner Macros, which demonstrated workflow orchestration using the Alteryx Server API. Subsequent releases of Alteryx Server changed the API, and the authentication steps and API endpoints presented in that article are now deprecated.


2021.4 API Enhancements


Alteryx Server received a number of enhancements with the 2021.4 release. Many of these updates introduced new capabilities to the Server REST API. If you have been leveraging the API to orchestrate workflows, there are two notable changes to be aware of when planning an upgrade.


  • Authorization: OAuth 2.0 was introduced for authorizing requests to the Alteryx Server API. This update enables authorization to be performed using standard tools in Alteryx Designer. Previously, a script was necessary to generate the OAuth 1.0a signature. OAuth 1.0a was deprecated with the 2022.1 release.

  • Endpoints: new v3 admin endpoints were introduced. Both v1 and v2 endpoints are still supported, but their paths have been updated.
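To illustrate the new authorization flow, here is a minimal sketch of requesting an OAuth 2.0 access token with the client-credentials grant. The token endpoint path (`/webapi/oauth2/token`) and parameter names are assumptions based on common OAuth 2.0 conventions; confirm them against your Server's API documentation. The base URL, key, and secret are placeholders.

```python
import json
import urllib.parse
import urllib.request


def build_token_request(base_url, client_id, client_secret):
    """Build the OAuth 2.0 client-credentials token request.

    client_id/client_secret correspond to the API key and secret from
    the user's Server profile (names assumed for illustration).
    """
    url = f"{base_url.rstrip('/')}/webapi/oauth2/token"
    body = urllib.parse.urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
    }).encode()
    return urllib.request.Request(url, data=body, method="POST")


def get_token(base_url, client_id, client_secret):
    """POST the token request and return the bearer token string."""
    req = build_token_request(base_url, client_id, client_secret)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["access_token"]
```

The returned token is then sent as an `Authorization: Bearer <token>` header on subsequent API calls. In Designer, the same exchange can be done with the Download tool instead of a script.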


Workflows that make requests to the Server API will need to be updated to accommodate these changes prior to upgrading. For more information on the API enhancements, check out the article Introducing the Alteryx Server v3 API Endpoints.


Example Macros


As mentioned above, the example macros shared in the Server Runner Macros blog post will not work beyond version 2021.3. They have been updated to work with version 2021.4 and are attached to this article. These macros are intended as working examples. You may use or modify them to suit your needs, and they should be thoroughly tested in a non-production environment.


  • Alteryx Server Runner: creates a job on Alteryx Server for the specified app id. One request is made per row of incoming data.


  • Alteryx Server Job Status: retrieves the status of the specified job id. The macro can be configured to wait until the job returns a “Completed” status. One request is made per row of incoming data.
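Conceptually, each macro wraps a single API call. The sketch below builds the two underlying requests: one that queues a job for an app id and one that fetches a job's status. The endpoint paths shown (`/webapi/v1/workflows/{appId}/jobs/` and `/webapi/v1/jobs/{jobId}/`) and the `questions` payload shape are assumptions based on the v1 API; verify them against your Server's Swagger documentation before use.

```python
import json
import urllib.request


def build_create_job_request(base_url, app_id, token, questions=None):
    """Build the POST that queues a new job for the given app id."""
    url = f"{base_url.rstrip('/')}/webapi/v1/workflows/{app_id}/jobs/"
    body = json.dumps({"questions": questions or []}).encode()
    return urllib.request.Request(
        url, data=body, method="POST",
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"})


def build_job_status_request(base_url, job_id, token):
    """Build the GET that retrieves a job's current status."""
    url = f"{base_url.rstrip('/')}/webapi/v1/jobs/{job_id}/"
    return urllib.request.Request(
        url, headers={"Authorization": f"Bearer {token}"})
```

Either request would be sent with `urllib.request.urlopen(req)`; the create-job response includes the new job's id, which is what the job status macro polls on.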


Orchestration Examples


Sequential Execution


In the example below, the runner macro is the final tool in the workflow. The Block Until Done tool ensures all records are written to the database before the request to the Server API is made. Because the example runner macro executes one API request per row of incoming data, a Sample tool is placed before it so that only one request is made. The runner macro then creates a job on the Server for the next app in the sequence.


[Image: server runner.png]

Simultaneous Execution


As mentioned, the example runner macro submits one request to the Server API for each row of incoming data. The goal of simultaneous execution is to run the same workflow with different inputs; running them at the same time reduces the overall processing time. In the example below, the column “Department” contains three rows of data, i.e., the three inputs we want to run through the app. The runner will create a new job on the Server for each input.
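The one-job-per-row pattern amounts to mapping each input value onto the app's question payload. A minimal sketch, assuming a hypothetical question named "Department" and three example rows:

```python
# One job per input row: each "Department" value becomes the answer to
# the app's question, so the three jobs run independently on the Server.
departments = ["Sales", "Marketing", "Finance"]  # hypothetical input rows


def row_to_questions(department):
    """Map one input row to the app's question payload.

    The question name and payload shape are assumptions for illustration;
    they must match the analytic app's actual interface questions.
    """
    return [{"name": "Department", "value": department}]


payloads = [row_to_questions(d) for d in departments]
```

Each payload would be submitted as a separate create-job request, leaving the Server queue to schedule the three jobs across available worker slots.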




Nested Execution


When a job is created through the Server API, the API does not wait for that job to complete before returning a response. It simply creates the job on the Server, then returns a response indicating whether the job was successfully created. Using the example macros in tandem enables nested execution. In the example below, the runner creates a new job on the Server. The newly created job id is passed to the job status macro, which checks the status each minute until the job is complete. Read the considerations before proceeding with a nested execution.
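The job status macro's wait-and-check behavior boils down to a polling loop with a check limit. A minimal sketch, with the status check abstracted as a callable so the loop itself carries no assumptions about the API:

```python
import time


def wait_for_completion(check_status, poll_seconds=60, max_checks=10):
    """Poll a job's status until it completes or the check limit is hit.

    check_status is any callable returning the job's current status
    string; in practice it would call the Server API's job-status
    endpoint. The ten-check default mirrors the example macro's timeout.
    """
    for attempt in range(max_checks):
        if check_status() == "Completed":
            return True
        if attempt < max_checks - 1:
            time.sleep(poll_seconds)
    return False  # timed out without seeing a "Completed" status
```

A caller would pass a closure over the job id and bearer token; the boolean result distinguishes a completed job from a timeout, which matters for the single-node deadlock scenario discussed below.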






Simultaneous Workflows


Alteryx Server is configured to run a specified number of workflows simultaneously. Do not use nested execution if you have a single-node server configured to run only one workflow at a time. The diagram below illustrates why this scenario will not succeed in the context of the example macros.


[Image: server runner 2.png]


In the scenario above, Workflow 1 is executing and creates a job for Workflow 2 on the Server. Workflow 2 will be queued because Workflow 1 is already executing. Meanwhile, the job status macro is waiting for Workflow 2 to complete, which is blocked from executing by Workflow 1. The job status macro is configured to time out after ten status checks have been completed. Once the timeout is reached and Workflow 1 completes, Workflow 2 will begin executing. Until then, however, processing is blocked on the Server, which can create bottlenecks for other scheduled or manually executed workflows.


AMP Engine


In the 2022.1 release, AMP Engine is enabled by default. The recommended configuration for Server nodes that execute AMP is to run one workflow at a time. If you are using this configuration on a single-node Server, you should not use nested execution. Additionally, the job status macro uses the Throttle tool to wait the specified number of minutes between status checks. The Throttle tool is not currently supported with AMP. Alternatively, you could leverage the Python tool or the Run Command tool to execute a script that makes the workflow wait.
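As a stand-in for the Throttle tool under AMP, a Python tool script only needs to block for the configured interval. A minimal sketch:

```python
import time


def throttle(minutes):
    """Pause the workflow for the given number of minutes, mirroring
    the Throttle tool's delay between status checks."""
    time.sleep(minutes * 60)
```

In the Python tool, `throttle(1)` between reading and writing the record stream would reproduce the job status macro's one-minute wait; the Run Command tool could achieve the same with a shell-level sleep.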


When to Use a “Runner” Macro


Alteryx provides many capabilities for orchestration—the Server API is just one of them. Batch macros are often the best starting point for orchestration use cases. A batch macro is blocking, meaning all the data must arrive before it is passed to the next tool. It accomplishes the same goal as nested execution, is easier to implement and maintain, and does not run the risk of causing the bottlenecks described above. Additionally, multiple workflows can be converted to batch macros and chained together in a parent workflow to achieve sequential execution.


Generally speaking, the best use case for a “runner” macro is the simultaneous execution example. Suppose you had multiple files that needed to be processed through the same workflow. Creating a job for each file on the Server allows more than one to be processed at a time; the jobs execute independently so that an issue with processing one file does not impact another, and each will have its own log. The Server queue manages the jobs and respects the Server configurations and resources. 


Matt Orr
Solution Engineer

Matt Orr has been in the data and analytics field for over a decade, holding roles in Business Intelligence, Data Science, and Data Strategy. As a Solution Engineer, he enjoys helping customers achieve analytics transformation with the Alteryx platform.