Alteryx Promote Knowledge Base

Definitive answers from Promote experts.

An Overview of Promote's Architecture

Sr. Data Science Content Engineer

Promote is data science model hosting and management software that allows its users to seamlessly deploy their data science models as highly available microservices, returning near-real-time predictions via REST APIs. In this article, we provide an overview of Promote’s technical requirements and architecture.
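To make the REST API idea concrete, here is a minimal sketch of how an application might assemble a prediction request for a deployed model. The URL pattern, instance hostname, model name, and payload shape are illustrative assumptions, not the exact Promote API.

```python
# Sketch of building a prediction request for a hypothetical deployed model.
# The endpoint pattern and payload format below are assumptions for
# illustration only; consult your Promote instance for the real API.
import json

def build_prediction_request(base_url, username, model_name, features):
    """Assemble the endpoint URL and JSON body for a prediction request."""
    url = f"{base_url}/{username}/models/{model_name}/predict"
    body = json.dumps({"data": features})
    return url, body

url, body = build_prediction_request(
    "https://promote.example.com",  # hypothetical Promote instance
    "data_scientist",
    "churn_model",
    {"tenure_months": 12, "monthly_charges": 79.5},
)
# The request itself would then be sent with any HTTP client, e.g.:
#   requests.post(url, data=body, headers={"Content-Type": "application/json"})
```

Because the model is exposed over plain HTTP, any application that can issue a POST request can consume its predictions.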

 

Technical Requirements

 

A Promote Cluster requires three dedicated Linux CentOS 7 Server machines, each with at least four cores. We recommend a minimum of 100 GB of disk space on each machine, as well as 16 GB of RAM.

 

These servers do not need to be bare-metal; however, for the Promote instance to be highly available and fault-tolerant, each node of the Promote instance must be installed on an independent server. Should one of the machines in the Promote cluster crash, the remaining machines will continue to service prediction requests, ensuring that applications with Promote model integrations continue to work properly.

 

Docker Containers and Docker Swarm

 

Promote is built on Docker, with its components organized into a Docker swarm. Docker containers are a unit of software that packages an application along with its dependencies (runtime, system tools, system libraries, settings, etc.) so that it can be efficiently shared and deployed. If you are not familiar with Docker and Docker containers and would like to know a little more, please read the Community article What’s the Deal with Docker.

 

In a Promote instance, each model deployed to Promote lives in its own separate Docker container, which allows models to reference different versions of the same package and live on the same machine without version conflicts.

 

A generic machine in a Promote cluster can be represented like this:

 

PromoteMachine.png

 

 

Docker Swarm is a native cluster management feature of Docker which allows a cluster of machines all running Docker to connect with one another. Docker Swarm provides a scalable and reliable way to run many Docker Containers.

 

Docker Services

 

All three nodes in a Promote instance will house Docker services as well as containerized instances of the models that have been deployed to Promote. 

 

Docker services are Docker images for a microservice within the context of a larger application (e.g., Promote). They are used mostly when configuring the leader node of a Docker swarm so that the Docker containers run in a distributed environment and can be easily managed.

 

Each node in the Promote cluster has an instance of the NGINX, LogSpout, and Registrator services running. The NGINX service functions as an internal load balancer for the Promote cluster, using round-robin load balancing. LogSpout is a log router for Docker containers; it captures logs from each container and routes them to a defined destination. Registrator registers and deregisters services for any Docker container by inspecting containers as they come online.
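Round-robin balancing is simple to picture: each incoming request is handed to the next container in a fixed rotation. The sketch below illustrates the idea; the container addresses are made-up placeholders, and the real routing is done by NGINX, not application code.

```python
# Illustration of round-robin dispatch, the strategy the NGINX service uses:
# requests are assigned to model containers in a fixed repeating order.
from itertools import cycle

# Hypothetical addresses of three replicas of the same model container.
containers = ["10.0.0.1:8000", "10.0.0.2:8000", "10.0.0.3:8000"]
next_container = cycle(containers)

def route_request():
    """Return the address of the container that should serve the next request."""
    return next(next_container)

served = [route_request() for _ in range(6)]
# Requests 4-6 wrap back around to the same containers as requests 1-3.
```

Because every replica serves the same model, the rotation spreads load evenly without the client needing to know which container answered.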

 

There are also single instances of the Consul, Logstash, and Elasticsearch services that will exist somewhere in the Promote cluster. Consul is a service that enables the rapid deployment, configuration, and maintenance of service-oriented architectures (like Promote!). Logstash is used to manage events and logs. Elasticsearch is a RESTful search and analytics engine, which allows for the exploration of Promote data. Docker swarm determines which of the nodes in the Promote cluster will host each of these services (which can be any combination of none, one, or more instances per node). If a node hosting one or more of these services goes down, Docker swarm will move them to an active node.

 

Promote Models 

 

When a model is deployed to Promote, by default the containerized model is replicated on two servers (nodes) in the cluster. This means that if two requests are sent to the model for predictions, the requests are processed separately and simultaneously, one by the model replica on each node (in parallel). If a request is sent and there is no available replica of the model, that request will queue until the model becomes available. Load balancing within Promote is conducted by the NGINX service.

 

For example, if model replication = 4 and there are 3 servers, Promote will first allocate 1 replica to each of the 3 servers. The 4th replica will be placed on a randomly chosen server that has available space.
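The placement rule above can be sketched in a few lines: one replica per server first, then any remainder goes to a random server with capacity. This is an illustration of the described behavior, not Docker swarm's actual scheduler.

```python
# Sketch of spread-style replica placement: fill each server once, then
# place any remaining replicas on randomly chosen servers.
import random

def place_replicas(servers, replicas):
    """Return a mapping of server -> number of model replicas placed on it."""
    placement = {server: 0 for server in servers}
    for i in range(replicas):
        if i < len(servers):
            placement[servers[i]] = 1               # first pass: one per server
        else:
            placement[random.choice(servers)] += 1  # remainder: random server
    return placement

placement = place_replicas(["node-1", "node-2", "node-3"], 4)
# With 4 replicas on 3 servers: every server holds at least one replica,
# and exactly one server holds two.
```

This spread keeps a model available even if any single node fails, since at least one replica always lives elsewhere.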

 

The Leader Node

 

In the Docker Swarm installed on the three required machines, all three machines are configured as managers, but one of the machines is designated as a Leader.

 

In addition to the Docker services and model replicas that are present on each node in the Promote cluster, the Leader node also has containers for the Promote Web App and the Promote Database. The Promote web application backend is written in Node.js, and the front-end uses the React framework. The Promote database is a PostgreSQL database.

 

 promoteMachinesArch.png

 

 

Load Balancing

 

To have true high availability or failover capabilities, a third-party load balancer must be in place in front of Promote. If there is not a third-party load balancer, there is still redundancy if one of the machines other than the leader goes down. The leader and at least one other machine need to remain operational for Promote to function properly. If the leader node does go down, the models will remain highly available on the other two machines but will require the third-party load balancer or a DNS redirect to route requests to the proper machine.
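The failover logic an external load balancer (or DNS-based redirect) performs boils down to probing the nodes and sending traffic to the first healthy one. Here is a minimal sketch of that idea; the node hostnames and health check are illustrative assumptions.

```python
# Sketch of the failover decision an external load balancer makes for
# Promote: route traffic to the first node that passes a health check.
def pick_healthy_node(nodes, is_healthy):
    """Return the first node that passes the health check, or None."""
    for node in nodes:
        if is_healthy(node):
            return node
    return None

# Hypothetical hostnames for the three Promote machines.
nodes = [
    "leader.promote.local",
    "worker-1.promote.local",
    "worker-2.promote.local",
]

# Simulate the leader being down: only the two other nodes respond.
alive = {"worker-1.promote.local", "worker-2.promote.local"}
target = pick_healthy_node(nodes, lambda n: n in alive)
# Traffic fails over to worker-1 while the leader is unavailable.
```

In production, the health check would be an HTTP probe against each node rather than a set-membership test, but the routing decision is the same.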

 

 

LoadBalancing.png 

 

Promote in Action

 

Models are deployed from an individual contributor’s machine to the Promote cluster. The containerized model is then replicated on at least two machines, where it is housed and made available to return predictions for any prediction requests. The model's API can then be easily embedded into an application or web server to return near-real-time predictions.

 

 

 

 

 PromoteinAction.png

 

 

And that is what Promote looks like in action! Hopefully, this has been a helpful and informative overview of Promote's architecture. If you ever have any specific questions, please don't hesitate to reach out to your sales team.