Models deployed to Promote can be queried in a few different ways, one of which is a standard REST API POST request. Querying a model consists of sending the predictor variables to the model, which processes the data and makes a prediction. The model then returns a score based on the predictor variables provided.
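As a hedged sketch, a prediction request might be assembled like this in Python. The endpoint path, authentication scheme, and payload fields below are assumptions for illustration; the exact format will depend on your Promote instance and model.

```python
import json

# Hypothetical example: querying a model named "HelloWorld" deployed by
# user "jane". The endpoint path and payload shape are illustrative;
# check your Promote instance for the exact format.
PROMOTE_URL = "https://promote.example.com"
USERNAME = "jane"
API_KEY = "your-api-key"
MODEL_NAME = "HelloWorld"

# Predictor variables are sent as JSON in the request body
payload = {"name": "Colin"}
endpoint = f"{PROMOTE_URL}/{USERNAME}/models/{MODEL_NAME}/predict"
body = json.dumps(payload)

print(endpoint)
print(body)

# To actually send the request (requires the `requests` package):
# import requests
# response = requests.post(endpoint, data=body, auth=(USERNAME, API_KEY),
#                          headers={"Content-Type": "application/json"})
# print(response.json())  # the model's prediction/score
```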
This article provides an overview of the administrative options in Promote (excluding user management, which can be found here).
Within the Admin Dashboard, an Admin user can view a list of all the models deployed to any environment by any user by clicking on the Models tab.
Within the Admin dashboard, an Admin user can monitor system health metrics for each node in the Promote cluster by clicking on the System Overview tab.
Within the Admin dashboard, an admin user can adjust several settings that affect the performance and behavior of the system by clicking on the Advanced tab.
An Admin user can change the base image used to deploy both R and Python models. An admin user may do this if they create a new image that has custom R or Python libraries available on it, or if they'd like to use a different version of R or Python.
Disk Bundle Limit
An admin user can change the disk bundle limit to protect the system against running out of disk space. This setting caps the number of versions of each model that are stored on disk.
Promote can store logs for every prediction request made for up to 14 days, with a maximum of 50GB. You can toggle this logging on and off for Development/Staging and Production in this section.
We hope this gives you a good foundation for administering your Promote instance. Good luck, we're all counting on you.
There are two tools in Alteryx Designer that connect to Promote; the Deploy tool and the Score tool. The Deploy tool allows you to send trained models from Alteryx Designer to Promote. The Score tool allows you to connect to a model that has already been deployed to Promote to create predictions from a provided data set.
The first step is to have a model object from a trained model. You can use any of the standard Predictive Tools to train a model (including the R tool), as long as it is not from the RevoScaleR package, which is not currently supported by Promote.
In this example, let's say we are interested in training a random forest model to predict forest type (classification) based on remotely sensed spectral data. The study area covers Japan, and the predictor variables include values for visible to near-infrared wavelengths, derived from ASTER satellite imagery.
After performing some data investigation and pre-processing (this dataset is already very clean) we can create, refine, and ultimately select our model.
Once we have a model we are happy with, we can send it to Promote using the Deploy tool. Start by adding a Deploy tool to the canvas and connecting it to the O anchor of your selected model.
If you haven't already, connect your Alteryx Designer instance to Promote.
To begin the process of adding a Promote connection, click the Add Connection button in the Configuration window of the Deploy tool.
After clicking the Add Connection button, a modal window will pop up on your screen. Type your Promote instance's URL in the first screen and click Next.
Now add your Username and API key.
For your API key, you may need to log in to your Promote instance and navigate to the Account page.
Once you have your username and API key correctly added to the modal window, click Connect. If all your information checks out, you will see this success message.
After clicking Finish, the new connection will appear in your Alteryx Promote Connection drop-down menu. You will also see a new option to Remove Connection.
To deploy a model, give it a name in the Model Name setting and run your workflow. If this is a new or updated version of a model that already exists on Promote, give it the same name as the currently deployed version, and check the Overwrite existing model option.
After running the workflow, if the model deploys successfully, you will see a message from the Deploy tool that says "your model is building, check the UI for build logs" in your results window.
To check the build logs, navigate back to the Promote UI in your web browser, click on your model, and then click on the Logs tab. You will see the messages from the model-building process. If all is well, the log will end with a "model built successfully" message.
Your model now lives on Promote!
One of the most important features of Promote is its ability to return near-real-time predictions from deployed models. Here is a list of frequently asked questions relating to Promote prediction requests.
Promote is data science model hosting and management software that allows its users to seamlessly deploy their data science models as highly available microservices that return near-real-time predictions by leveraging REST APIs. In this article, we provide an overview of Promote’s technical requirements and architecture.
If you have a model that takes longer than 10 seconds to return results, by default Promote will time out your model API query. If you would like Promote to wait longer than 10 seconds before timing out, you can adjust this setting with an environment variable called PREDICTION_TIMEOUT.
Often, a model deployed to Promote requires certain dependencies to run: functions, files, system packages, and so on. If your model requires them, you'll need to create a promote.sh file, which contains the commands that install these dependencies. This is one of the factors that sets your model up for success on Promote, because sometimes a model needs a little help.
In the promote-python repository (https://github.com/alteryx/promote-python), the article-summarizer example contains one of these promote.sh files. If you open the file, you'll see this command:
python -c "import nltk; nltk.download('punkt')"
This is required because the newspaper package used by the model (main.py) requires an NLP dataset. Now, when we deploy the model, the promote.sh file will run during the build, which ensures the dependencies live inside the model environment (the Docker model image). We can now properly test the model in Promote!
If we're looking at an R example (there is one here - https://github.com/alteryx/promote-r-client/tree/promotesh-example/examples/rodbc-model), you will have the same folder structure, except the promote.sh file will look something like this:
curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add -
curl https://packages.microsoft.com/config/ubuntu/16.04/prod.list > /etc/apt/sources.list.d/mssql-release.list
ACCEPT_EULA=Y apt-get -y install msodbcsql17
apt-get -y install unixodbc-dev
apt-get -y install r-cran-rodbc
apt-get -y install libiodbc2-dev
In this case, our model requires an ODBC driver, so our model container will also need it in order to run on Promote. Just as in the Python example above, when we deploy this model, the promote.sh file will run and the proper driver will be installed, enabling us to work with and test this model on Promote!
Once you get these all set, you'll be good to venture on and make your model the best it can be!
When making a call to a Promote model, the input data used to make a prediction is sent in a JSON format. When working with an R model, prior to reaching the model.predict() function, the JSON string that was sent to your model is converted to an R format (either an R dataframe or an R list). By default, this conversion is performed with the fromJSON() function in the jsonlite R package.
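To make the conversion concrete, here is a hedged sketch (field names are invented) of the two common payload shapes and what jsonlite's fromJSON() typically produces for each under its default settings:

```python
import json

# Illustrative payloads only; field names are invented.
# Shape 1: a JSON array of records. With its defaults, jsonlite's
# fromJSON() typically parses this into an R data frame with one
# row per record.
records_payload = json.dumps([
    {"sepal_length": 5.1, "sepal_width": 3.5},
    {"sepal_length": 6.7, "sepal_width": 3.0},
])

# Shape 2: a single JSON object. fromJSON() typically parses this
# into a named R list.
list_payload = json.dumps({"sepal_length": 5.1, "sepal_width": 3.5})

print(records_payload)
print(list_payload)
```

Knowing which shape your client sends helps you anticipate whether model.predict() will receive a data frame or a list.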
Some Promote customers have run into an issue where the status of the predictive model will flicker between online and offline continuously on the Promote UI page. This article discusses the cause of the issue, as well as how to resolve it.
Welcome to part 3 of the Supporting Promote series. In this series, we will tackle some common issues and questions, and provide best practices for troubleshooting. This article will step through the process of restoring the Promote web app.
Welcome to part 4 of the Supporting Promote series. In this series, we will tackle some common issues and questions, and provide best practices for troubleshooting. This article will demonstrate backing up and restoring your Promote PostgreSQL database.
Welcome to part 2 of the Supporting Promote series. In this series, we will tackle some common issues and questions, and provide best practices for troubleshooting. In this article, we will be investigating one common "Promote Service Down" scenario - when the promote_logspout and promote_logstash services are down. You can follow these same steps to start troubleshooting other downed services.
Promote uses NGINX as a load balancer. In Promote, NGINX is configured to require TLS (Transport Layer Security) or SSL (Secure Sockets Layer) certificates. This article goes through the step-by-step process of using your own TLS/SSL certificate during installation or updating your TLS/SSL certificates after installation.
PRODUCT: Alteryx Promote
LAST UPDATE: 05/23/2018
Want to get started with Promote, but don't know Python?
This article describes:
What to install for Python and how to install the Promote library (there are many ways to do this; this is only one)
Where to find some useful documentation (GitHub, Community)
How to test and deploy a Python model into Promote and test it on the web console
How to deploy a model from Alteryx and score it from Alteryx as well as in the web console
Download Anaconda 3.6 (https://www.anaconda.com/download/ ) – this includes Python and other programs you’ll need to get started. I found this the easiest way to proceed.
Once installed, run the Anaconda Prompt (I use this instead of command line).
Update PIP: python -m pip install -U pip
Install Promote package: python -m pip install promote
Obtain the sample models from Github.
Code-Friendly models - R and Python examples
For this document, we will use the Python models
The link will take you to the examples page of the promote-python library. Click on the parent to go up a level.
This page has a ton of useful information that I recommend you read. For getting started, download the library to your computer.
I recommend creating a repository folder on your computer and storing the files there.
Log into the Promote web app
Go to your Account page and obtain your user name and API Key – store these in a file or somewhere easy to access as you’ll need the information.
Now – let’s publish an example from Python into Promote.
Open main.py from the \repositories\promote-python-master\examples\hello-world folder using your favorite text editor. I like Sublime Text ( https://www.sublimetext.com/ ) but you can use whatever you’d like. Windows 10 has a built-in editor or some people prefer Notepad++.
Edit the USERNAME, API_KEY, and PROMOTE_URL information with your information. I would recommend copying this section of the code so it’s easy to re-use.
Before deploying, test the model. EXTREMELY IMPORTANT! I’ll show two ways to test the model: Command Prompt and Jupyter
First, comment out the line with the deploy method.
Navigate to the folder that contains the hello-world main.py file you just edited from the Anaconda Command Prompt.
Type python main.py to test the code.
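For reference, here is a minimal sketch of what the hello-world main.py can look like, with the deploy lines commented out as described above. The function name, test data, and deploy arguments are illustrative assumptions; see the promote-python repository for the canonical example.

```python
# main.py -- illustrative sketch of a hello-world Promote model.
# Names, test data, and deploy arguments are assumptions; see the
# promote-python examples repository for the canonical version.

def helloWorld(data):
    # 'data' is the parsed JSON payload sent with the prediction request
    return {'response': 'Hello ' + str(data.get('name', 'world')) + '!'}

# Deploy lines commented out for local testing, per the steps above.
# To deploy, uncomment and fill in your credentials:
# import promote
# p = promote.Promote('USERNAME', 'API_KEY', 'https://your-promote-url/')
# p.deploy('HelloWorld', helloWorld, testdata={'name': 'Colin'})

if __name__ == '__main__':
    # Running `python main.py` exercises the model function locally
    print(helloWorld({'name': 'Colin'}))
```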
Jupyter – this is great for testing and learning Python in my opinion. NOTE: you can’t deploy Jupyter notebooks to Promote. https://github.com/alteryx/promote-python/tree/master/examples/svc-classifier
Launch the Jupyter Notebook
Navigate to the hello-world folder
Click on main.py
Copy the code
Go back to the directory page and create a new Jupyter notebook
Paste the code into the notebook
Comment out the deploy method and run the module.
To deploy the model:
Now, edit the main.py file and uncomment the deploy method.
Looking at the instructions from Github (https://github.com/alteryx/promote-python/tree/master/examples/hello-world ), we need to install the requirements.txt file for this model. Type: pip install -r requirements.txt
Now deploy the model by typing python main.py
You should now see your model in Promote
Click on the model and test it from the web console. Model tests require a line-delimited JSON file (.ldjson for short), with one JSON object per line. (Note that this is a different format from JSON-LD.)
Now, try publishing the IrisClassifier model yourself. To test it, use the following code:
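As one hedged sketch, assuming the model accepts the four standard iris measurements as input fields, a line-delimited JSON test input could be built like this (the field names are assumptions and should match whatever the IrisClassifier example actually expects):

```python
import json

# Hypothetical iris test records; field names are assumptions.
rows = [
    {"sepal_length": 5.1, "sepal_width": 3.5, "petal_length": 1.4, "petal_width": 0.2},
    {"sepal_length": 6.7, "sepal_width": 3.0, "petal_length": 5.2, "petal_width": 2.3},
]

# Line-delimited JSON (.ldjson): one JSON object per line
ldjson = "\n".join(json.dumps(r) for r in rows)
print(ldjson)

# Optionally write it to a file for the web console test:
# with open("iris_test.ldjson", "w") as f:
#     f.write(ldjson)
```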
In Alteryx, you can easily deploy a model to Promote
When the workflow is run, the model will be promoted.
You can test the model using the Promote web console.
You can now score sample data using either the Alteryx model, or the model running on Promote.
Alteryx Promote allows model deployments in 3 forms:
Designer via a Predictive Tool (e.g. Logistic Regression Tool --> Deploy Tool)
Python (either from the Python tool, or from a Python program or IDE)
R (either from the R code tool, or from an R program such as RStudio)
For items 2 and 3, we host public repositories of example models (below) that show how to deploy predictive models. All of these examples include READMEs that explain 1) what the model does, and 2) how to deploy it.
Example Python Models | Example R Models
For Python models, you must:
Have Python3 installed
Install the promote python CLI:
pip install promote
For R models, you must:
Have R installed
Install the promote R CLI: