Deployment

The deployment process

The deployment of an image runs the following process in sequence, and can be monitored in the logs:

  1. Worker: First, we need a Divio Worker to build and deploy the image.
  2. Provisioning: Once the Divio Worker is available, the services defined for the environment are provisioned.
  3. Build: Next, the image is built from the Dockerfile using docker build.
  4. Release Image: A release image is created based on the build image with all environment variables added.
  5. Release Commands: The release commands defined in the Settings section are executed within a container running from the release image.
  6. Scaling: A number of new containers using the release image are created according to the subscription plan.
  7. Sanity Tests: The sanity tests are executed on each container to make sure it is healthy. To determine the port to probe, we check the Port defined in the Settings, or fall back to the port exposed in the Docker image.
  8. Finalizing: If all tests pass, the new containers start receiving traffic and the old containers are disposed of.

Deployment steps

1. Worker – Waiting for a free worker

The Divio Cloud Platform uses Divio Workers to build and deploy your applications. Typically, this message only appears for a few seconds. Longer queue times indicate a surge in deployments, but a worker will become available eventually.

2. Provisioning – Provisioning backing services

Once a Divio Worker is available, the Divio Cloud Platform checks that required services (such as the database) are available and attaches them accordingly. The application's Git repository is checked out into a working directory.

3. Build – Building docker image

Docker builds the application image from the Dockerfile. Once successfully built, the image is tagged so that it can be re-used.

The build stage does not have access to environment variables. If environment variables are required as part of the build process, use the ENV instruction to supply them via the Dockerfile.
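For example, a build-time variable can be set directly in the Dockerfile (the base image and variable here are illustrative, not specific to any particular application):

```dockerfile
FROM node:20

# NODE_ENV is not available from the platform during the build stage,
# so it is supplied here via ENV for commands that run at build time.
ENV NODE_ENV=production

WORKDIR /app
COPY . .

# This command runs during the build and can now see NODE_ENV.
RUN npm install
```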

4. Release Image – Building docker release image

A release image is created from the build image, incorporating all necessary environment variables. This process involves taking the finalized build image, which contains the compiled and configured application, and adding the environment-specific configurations.

5. Release Commands – Running migrations and optimising release

A container is launched from the release image, and any defined release commands are executed.

(In the case of a legacy Aldryn Django application, the MIGRATION_COMMANDS setting also applies release commands. This setting can be populated automatically by Aldryn Addons.)

6. Scaling – Scaling (first host)

New containers are launched in parallel, according to the number specified in the application's subscription.

The application controller tests each container for an HTTP response, for up to 300 attempts.

  • Each connection test times out after 0.4 seconds.
  • Once a connection has been established, the container is tested for a positive response (1xx, 2xx, 3xx, 4xx or 500); this test times out after 20 seconds.

If any container fails to respond in time or responds with a 5xx server error other than 500, the deployment fails.
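The acceptance rule above can be sketched as follows (a minimal illustration of the described behaviour, not Divio's actual implementation):

```python
# Constants taken from the description above: up to 300 attempts,
# 0.4 s to establish a connection, 20 s for a response.
MAX_ATTEMPTS = 300
CONNECT_TIMEOUT = 0.4
RESPONSE_TIMEOUT = 20.0


def is_acceptable_status(status_code: int) -> bool:
    """Return True if an HTTP status counts as a positive response.

    1xx, 2xx, 3xx and 4xx are all accepted, and 500 is tolerated;
    any other 5xx server error fails the deployment.
    """
    if 100 <= status_code < 500:
        return True
    return status_code == 500
```

Note that 4xx client errors are deliberately accepted: an application that returns, say, a 404 is still up and serving requests.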

7. Sanity Tests – Running sanity tests on the client application

During the deployment process, the Divio Cloud Platform performs sanity tests on the application to ensure it is up and running. These tests are executed on each container to verify its health. To determine the port to probe, we check the Port defined in the Settings. If no port is specified, we fall back to the port exposed in the Docker image.

Technically, we perform an HTTP HEAD request and check the returned status code. This process includes several retries before timing out. If the server responds with a server error, or if a timeout occurs, the deployment is marked as failed.
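The probe described above could be sketched like this (a hypothetical illustration; the platform's actual retry counts and timeouts may differ):

```python
import http.client
import time


def probe_container(host: str, port: int, attempts: int = 3,
                    timeout: float = 2.0, delay: float = 0.5) -> bool:
    """Send a HEAD request to the container and check the status code.

    Retries a few times; a server error or exhausted retries marks
    the deployment as failed (returns False).
    """
    for _ in range(attempts):
        try:
            conn = http.client.HTTPConnection(host, port, timeout=timeout)
            conn.request("HEAD", "/")
            status = conn.getresponse().status
            conn.close()
            return status < 500  # a 5xx server error fails the sanity test
        except OSError:
            time.sleep(delay)  # connection refused or timed out; retry
    return False  # all attempts exhausted
```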

8. Finalizing – Almost there

As the final step of the deployment process, we perform a thorough cleanup to remove any intermediate containers and unused images. This ensures that the deployment environment remains tidy and efficient, with no leftover artifacts that could consume resources or cause potential conflicts.

Zero-downtime cloud deployments

If all of the steps above are successful, the deployment is marked as successful: requests are routed to the new containers and the old containers are shut down. Running containers are never shut down until the new containers are able to respond to requests without errors, which allows us to provide zero-downtime deployments. In the event of a deployment failure, the old containers simply continue running without interruption.

Differences between cloud deployment and local builds

  • Orchestration: on the cloud, orchestration is managed by the Control Panel, while locally it is managed by docker compose according to the docker-compose.yml file.
  • Services: on the cloud, backing services such as the database and media storage - and if appropriate, optional services such as a message queue - are provided from our cloud infrastructure. Locally, these must be handled differently (for example, your computer doesn't contain a Postgres cluster or S3 bucket): the database will be provided in a separate Docker container, the media storage will be handled via local file storage, and so on. docker-compose.yml will configure this local functionality.
  • Docker layer caching: not used on the cloud, but used by default locally. It's recommended to write your Dockerfile with caching in mind for local performance.
  • Release commands: not run locally; they need to be executed manually.
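For example, a release command could be run manually against the local environment like this (the service name `web` and the migration command are illustrative; use whatever your docker-compose.yml and application define):

```shell
# Run a release command by hand in a throwaway local container
docker compose run --rm web python manage.py migrate
```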

Notes on Docker image building

Docker image/layer caching and re-use

Images and image layers are:

  • not cached in cloud deployments
  • cached by default in local builds

Git repository

During the build step, your application repository is used without its .git directory, so Git history is unavailable.

Cloud deployments

We don't use Docker-level layer caching on the cloud because certain cases could produce unexpected results:

  • Unpinned installation commands might install cached versions of software, even where the user expects a newer version.
  • Commands such as apt-get upgrade in a Dockerfile could similarly fail to pick up new changes.
  • Our clustered setup means that builds take place on different hosts. As Docker layer caching is local to each host, this could mean that subsequent builds use different versions, depending on what is in each host's cache.

When an image is built, even if nothing in the repository has changed, the image may be different from the previously-built image. Typically, this can affect application dependencies. If an application's build instructions specify a component, the installer (which could be apt, pip or npm) will typically try to install the latest version of the component, unless a particular version is selected.

This means that if a new version has been released, the next deployment will use that - without warning, and with possibly unexpected results. It is therefore strongly recommended to pin package versions in your application's installation lists wherever possible to prevent this. (See also Pin all dependencies.)
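For a Python application, pinning might look like this in a requirements.txt file (the packages and version numbers are purely illustrative):

```text
# requirements.txt: pin exact versions so every cloud build is reproducible
Django==4.2.11
gunicorn==21.2.0
psycopg2==2.9.9
```

The same principle applies to apt packages (specify versions in the install command) and npm dependencies (commit a lockfile).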

Image re-use on the cloud

In some circumstances, the build process will not build a new image:

  • If there are no new commits in the repository, and an image has been built already for the Test server, that image will be re-used for the Live server.
  • When deploying a mirror application, the image already created for the original will be re-used.

Local builds

Locally, Docker will cache layers by default.

Local image caching can affect components that are subject to regular updates, such as Python packages installed with pip. In this case, a new version of a component may have been released, but the local build will continue to use an older version.

To turn off this behaviour, use the --no-cache option with docker compose build.
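For example, to force a full local rebuild that re-runs every Dockerfile instruction:

```shell
# Rebuild local images from scratch, ignoring Docker's layer cache
docker compose build --no-cache
```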