
Alteryx Promote Knowledge Base

Definitive answers from Promote experts.

Promote Prediction Requests FAQ


One of the most important features of Promote is its ability to return near-real-time predictions from deployed models. Here is a list of frequently asked questions relating to Promote prediction requests.

Is there a way to prioritize prediction requests?

Currently, there is not a native method for prioritizing prediction requests. When a request is sent to Promote, the request enters a model-specific queue. If there is not a replication of that model available to make a prediction, the request will wait until a model replicant becomes available or the prediction request times out. To modify the prediction timeout period for a model, please see Increasing the Prediction Timeout for a Model.
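Since a queued request can wait until it times out, callers often pair Promote's server-side timeout with a client-side one. The sketch below shows a minimal prediction request using only the Python standard library; the endpoint URL, username, and API key are placeholders, not real values.

```python
import base64
import json
import urllib.request

# Hypothetical endpoint -- substitute your Promote server's prediction URL.
PROMOTE_URL = "https://promote.example.com/models/HelloModel/predict"

def build_request(payload, username="username", apikey="apikey"):
    """Assemble the authenticated JSON POST without sending it."""
    token = base64.b64encode(f"{username}:{apikey}".encode()).decode()
    return urllib.request.Request(
        PROMOTE_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Basic {token}",
        },
        method="POST",
    )

def predict(payload, timeout_seconds=10):
    """Send the request; urlopen raises if no response arrives in time.

    This client-side timeout is independent of Promote's own prediction
    timeout: a request can still be dropped server-side if it waits in the
    model's queue longer than the model's configured timeout.
    """
    with urllib.request.urlopen(build_request(payload),
                                timeout=timeout_seconds) as resp:
        return json.loads(resp.read())
```

The client-side timeout bounds how long the caller blocks; the model's own timeout (see the article linked above) bounds how long the request may sit in the queue.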

What is the minimum amount of time it takes for a request to be processed?

There is no fixed minimum; the end-to-end time for a request is the sum of:

  • Network time to Promote
  • Internal Promote network / authentication time
  • Model processing time
  • Internal Promote network time
  • Network time back from Promote

Promote has no control over the model processing time: if a user builds a model that sleeps for 5 seconds, every prediction will take at least 5 seconds to execute. Promote itself typically adds ~40 milliseconds of overhead per prediction. Fast models, such as linear or logistic regression, typically see end-to-end times of ~60-120 milliseconds, assuming a fast network connection.
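One way to see where the time goes is to measure the full round trip and subtract the model's own processing time; what remains is the network and Promote overhead described above. A small sketch (the function names are illustrative, not part of any Promote client):

```python
import time

def measure_round_trip(send_request):
    """Time one prediction round trip.

    send_request is any zero-argument callable that performs the actual
    POST and returns the parsed response.
    """
    start = time.perf_counter()
    result = send_request()
    elapsed_ms = (time.perf_counter() - start) * 1000.0
    return result, elapsed_ms

def promote_overhead_ms(end_to_end_ms, model_processing_ms):
    """Everything that is not the model itself: network, queueing, auth."""
    return end_to_end_ms - model_processing_ms
```

For example, a 100 ms round trip against a model that takes 60 ms to score leaves ~40 ms of network and Promote overhead, in line with the figure above.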

Can Promote handle Streaming Data?

Currently, Promote cannot handle streaming data. It only accepts REST API requests for model predictions.
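A common workaround is to drain the stream in application code and replay each record as an individual REST call. In this sketch, `score` stands in for any function that POSTs one payload to a Promote model and returns the prediction; both names are illustrative.

```python
def score_stream(records, score):
    """Yield (payload, prediction) pairs, one REST call per stream record.

    records: any iterable of JSON-serializable payloads (e.g., messages
    consumed from a queue); score: a callable that sends one prediction
    request and returns the parsed response.
    """
    for payload in records:
        yield payload, score(payload)
```

Throughput is then bounded by the per-request latencies discussed above, so this pattern suits moderate-volume streams rather than true high-throughput streaming.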

What happens when a request fails? Is the failed request handled by Promote, or does Promote return an error that the application embedding the request must handle?

A failed request can occur for several reasons. The response returned by Promote will depend on the nature of the failed request.

1. The model fails to process the data correctly.

Request (note that "nae" in --data should be "name")
$ curl -X POST -H "Content-Type: application/json" \
  --user username:apikey \
  --data '{"nae": "colin"}' \
  <model-prediction-URL>

Response (note that "error" is the key in the "result" object)
{
  "result": {"error": "Missing keys: 'name'"},
  "promote_id": "f78c0133-767b-4fa7-8f41-ab1f4eed2fad",
  "status": "OK",
  "timestamp": "2018-02-16T15:19:01.753Z",
  "model_name": "HelloModel",
  "model_version": "1"
}

2. The route to the model is offline or doesn't exist

$ curl -X POST -H "Content-Type: application/json" \
  --user username:apikey \
  <model-prediction-URL>


3. The request is unauthorized

$ curl -X POST -H "Content-Type: application/json" \
  --user undefined:undefined \
  --data '{}' \
  <model-prediction-URL>


Can a model endpooint be shared directly with customers? How are DDoS attacks and similar abuse handled?

Promote is designed to be integrated directly with backend systems (e.g., server backends, CRM systems, etc.). In this configuration, the backend system creates prediction requests and sends them to Promote; Promote returns the predictions to the backend system, which in turn returns them to the user. This setup shields Promote from DDoS attacks, since all network traffic must pass through the backend service before reaching the Promote network.
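The gatekeeping role of the backend can be sketched in a few lines. Everything here is hypothetical scaffolding: `is_authorized` stands in for whatever authentication the backend already performs, and `forward_to_promote` for the call that actually POSTs to the model endpoint.

```python
def handle_user_request(user_payload, is_authorized, forward_to_promote):
    """Backend layer between end users and Promote (sketch).

    End users hit this function's HTTP route, never the Promote endpoint
    itself, so unauthenticated or abusive traffic is rejected here and
    never reaches the Promote network.
    """
    if not is_authorized(user_payload):
        return {"error": "unauthorized"}      # dropped at the backend
    return forward_to_promote(user_payload)   # only vetted traffic goes out
```

Because the Promote API key lives only in the backend, revoking or rotating credentials never requires touching customer-facing code.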
