Promote is built around software called Docker, which is used for operating system-level virtualization, aka “containerization.” Each model published to your Promote instance is housed in and runs from its own Docker container (and is replicated across multiple machines for redundancy). That container is based on a custom image built for each model.
That’s the headline, but if you’ve never heard of Docker, or aren’t particularly familiar with it, you are probably wondering “what’s the deal with Docker?”
Docker was first released in 2013 and quickly became popular because it allows users to deploy applications into a production environment in a lightweight, efficient way.
To get a handle on Docker, first, let’s take a moment to think about computers. Specifically, think about the computer you use at work. It has different hardware components, an operating system, and different software installations that help you do your job. Now think about your neighbor’s computer. Is it the same as yours? Identical? Maybe, but probably not. Now expand that thinking to your entire department, and then your entire company, including server machines, in-house clusters, cloud services, etc. As you expand outward, the machines become more diverse.
Now let’s imagine that you wrote a really awesome Python script, and you need to start sharing it. In order to know that the script will run on other machines the same way it does on your machine, there are certain items other than the script itself that you will need to make sure the destination machines have. You’ll need to make sure that the people you are sharing your script with have the right version of Python (are they on Python 2 or Python 3?), a compatible libc distribution, the same Python packages that the script requires, possibly even down to matching package versions, and so on.
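To make the problem concrete, here is a small, hypothetical pre-flight check a script might run on a destination machine. The version numbers and package pins are illustrative assumptions, not requirements of any real script:

```python
import sys
from importlib.metadata import version, PackageNotFoundError

def check_environment(required_python=(3, 8), required_packages=None):
    """Report mismatches between this interpreter and a script's requirements.

    required_python: minimum (major, minor) Python version.
    required_packages: mapping of package name -> exact version pin.
    Returns a list of human-readable problems (empty if the environment matches).
    """
    problems = []
    if sys.version_info[:2] < required_python:
        problems.append(
            "Python %d.%d+ required, found %d.%d"
            % (required_python + tuple(sys.version_info[:2]))
        )
    for pkg, wanted in (required_packages or {}).items():
        try:
            found = version(pkg)
        except PackageNotFoundError:
            problems.append("%s is not installed" % pkg)
            continue
        if found != wanted:
            problems.append("%s==%s required, found %s" % (pkg, wanted, found))
    return problems
```

Checks like this only detect mismatches; they don't fix them. Every destination machine still has to be brought into agreement by hand, which is exactly the work Docker eliminates.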
Deploying an application can get very complicated very quickly.
In the days before Docker, virtual machines (VMs) were sometimes used to work around the issues of compatibility and dependencies. The thing about VMs is that they are bulky, and not very application specific. With a VM, you are essentially creating a mini computer.
Docker containers occupy the deployment space between VMs and a single script or application. Containers allow a developer to ship an application with all the components it might need. Containers are also isolated from each other and bundle their own tools, libraries, and configuration files, so if you have applications that require different versions of the same software or packages, they can coexist in harmony on the same machine in their own containers. In addition to these benefits, a container is lightweight enough that it can easily be shipped to other machines while knowing that the application will run exactly as expected, regardless of the operating system or other configurations of the destination machine. The lightweight nature of Docker containers makes them scalable as a method for deploying applications.
Docker containers are based on Docker images, which are files made up of multiple layers that describe the components of a container. Images include system libraries, tools, and other files and dependencies that are required to execute the application or model's code. Images are split into layers so that developers can reuse individual layers to create different images for different projects. A container is an instance of an image, and multiple containers can be instantiated from a single image.
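The layered structure shows up directly in a Dockerfile, the recipe from which an image is built. The sketch below is purely illustrative (the base image, Python version, and file names are assumptions, not what Promote actually builds); each instruction adds a layer to the image:

```dockerfile
# Base layer: a slim OS with a Python runtime
FROM python:3.8-slim

# Dependency layers: copying the manifest first lets Docker cache the
# installed packages and reuse that layer when only the model code changes
COPY requirements.txt .
RUN pip install -r requirements.txt

# Application layer: the model's own code
COPY model.py .

# Metadata: how a container started from this image runs the model
CMD ["python", "model.py"]
```

From a recipe like this, `docker build` produces the image, and each `docker run` against that image instantiates a fresh, isolated container.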
Promote leverages Docker containers in a Docker swarm to address the pain points of deploying a data science model. By packaging models in containers, models can be put into production without worrying about the mess of managing dependencies or rewriting code from its native language. In Promote, each model runs inside its very own Docker container, and that container is based on a custom image for that model.
And that's the deal with Docker in Promote. If you are interested in learning more about the rapidly growing world of Docker, please see these resources: