How to configure an existing web application for deployment on Divio: generic guide

This document will take you step-by-step through the tasks required to set up a portable, vendor-neutral application for deployment to Divio using Docker. The application architecture we adopt is in line with Twelve-factor design principles.

Use the guide to help you adapt an existing application for Docker or check that your existing Docker application will run on Divio. The exact steps you need to take will depend on details of your application.

The steps outlined here will work for an application based on any suitable framework and language.


  • Your application needs to be managed in a Git repository, hosted on Divio or your preferred Git host.
  • You need to be familiar with the basics of the Divio platform and Docker, and have Docker and the Divio CLI installed on your machine. If not, please follow one of our tutorials.
  • You need to have at least a minimal working application ready to be deployed, whether it is Docker-ready already or not.

The Dockerfile - define how to build the application

The application needs a Dockerfile at the root of the repository that defines how to build it. The Dockerfile starts by importing a base image.

For a Python application, for example, you can use:

FROM python:latest

Here, python:latest is the name of the Docker base image. We cannot advise on what base image you should use; you'll need one that is in line with your application's needs. However, once you have a working set-up, it's good practice to move to a more specific base image - for example, a pinned version tag such as python:3.11-slim rather than latest.

We recommend setting up a working directory early on in the Dockerfile, before you need to write any files, for example:

# set the working directory
WORKDIR /app
# copy the repository files to it
COPY . /app

Install system-level dependencies

The Dockerfile needs to install any system dependencies required by the application. For example, if your chosen base image is Debian-based, you might run:

RUN apt-get update && apt-get install -y <list of packages>
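For example, a sketch that also cleans up the apt cache to keep the image small (the package names here are placeholders; substitute whatever your application actually needs):

```dockerfile
# install build tools and Postgres client headers (example packages only),
# then remove the apt package lists to keep the image small
RUN apt-get update && \
    apt-get install -y --no-install-recommends build-essential libpq-dev && \
    rm -rf /var/lib/apt/lists/*
```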



Install application-level dependencies

The next step is to install application-level dependencies.

For example, in a Python application, you could use:

# install dependencies listed in the repository's requirements file
RUN pip install -r requirements.txt

Any requirements should be pinned as firmly as possible.

As well as pinning known requirements, it's a good idea to pin all their secondary dependencies too. The language environment you're using probably has a way to do this.

For example, in Python you can run pip freeze to get a definitive list of dependencies, or do something similar in Node with npm shrinkwrap.
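For example, after installing and testing your dependencies locally, you can capture the exact versions in use:

```shell
# write the exact versions of all installed packages to the requirements file
python3 -m pip freeze > requirements.txt
```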

File-building operations

If the application needs to perform any build operations to generate files, they should be run in the Dockerfile so that they are built into the image. This could include compiling or collecting JavaScript or CSS, for example, and can make use of frameworks that do this work.
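For example, a Django project might collect its static files at build time (an illustration only; substitute your own framework's build commands):

```dockerfile
# generate/collect static files so they are baked into the image
RUN python manage.py collectstatic --noinput
```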


EXPOSE informs Docker that the container listens on the specified ports at runtime; typically, you'd need:

EXPOSE 80

Launch a server running on port 80 by including a CMD at the end of the Dockerfile.

For example, for a Python Flask application you might use something like:

CMD gunicorn --bind=0.0.0.0:80 --forwarded-allow-ips="*" "flaskr:create_app()"

Use CMD, not ENTRYPOINT, to start the server

Using CMD provides a default way to start the server that can be overridden when required. This is useful when working locally, where often we would use the docker-compose.yml to issue a startup command better suited to development purposes. It also allows our infrastructure to override the default, for example in order to launch containers without starting the server, when some other process needs to be executed.

An ENTRYPOINT that starts the server would not allow this.

If you are using a Docker entrypoint script, it's good practice to conclude it with exec "$@", so that any commands passed to the container will be executed as expected.
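Putting the pieces above together, a minimal sketch of a Dockerfile for a Python/Flask application might look like this (the package list, requirements file and the flaskr:create_app() entry point are illustrative assumptions, not requirements):

```dockerfile
FROM python:3.11-slim

# set the working directory and copy the repository files to it
WORKDIR /app
COPY . /app

# system-level dependencies (example packages only)
RUN apt-get update && \
    apt-get install -y --no-install-recommends libpq-dev && \
    rm -rf /var/lib/apt/lists/*

# application-level dependencies
RUN pip install -r requirements.txt

# the container listens on port 80 at runtime
EXPOSE 80

# default command: start the production server
CMD gunicorn --bind=0.0.0.0:80 --forwarded-allow-ips="*" "flaskr:create_app()"
```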

Access to environment and services


During the build process, Docker has no access to the application's environment or services.

This means you cannot run database operations such as database migrations during the build process. Instead, these should be handled later as release commands.

Configuring your application

Your application will require configuration. You may be used to hard-coding such values in the application itself - though you can do this on Divio, we recommend not doing it. Instead, all application configuration (for access to services such as the database, security settings, etc) should be managed via environment variables.

Divio provides services such as databases and media storage. To access them, your application will need credentials. For each service in each environment, we provide an environment variable containing the values required to access it. This variable takes the form:

scheme://<user name>:<password>@<address>:<port>/<name>

Your application should:

  • read the variables
  • parse them to obtain the various credentials they contain
  • configure its access to services using those credentials
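The steps above can be sketched in Python using only the standard library (the DATABASE_URL variable and the fallback value here are illustrative):

```python
import os
from urllib.parse import urlsplit

def parse_service_url(url):
    """Split a scheme://user:password@host:port/name DSN into its parts."""
    parts = urlsplit(url)
    return {
        "scheme": parts.scheme,
        "user": parts.username,
        "password": parts.password,
        "host": parts.hostname,
        "port": parts.port,
        "name": parts.path.lstrip("/"),
    }

# read the variable from the environment, then parse it into credentials
credentials = parse_service_url(
    os.environ.get("DATABASE_URL", "postgres://user:secret@localhost:5432/db")
)
```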

We also provide environment variables for things like security configuration.

Helper modules

Your chosen framework may already have helper module libraries available that can parse environment variables to extract the settings and apply them to the application (most mature and widely-used frameworks do). If not, you will need to parse the variables yourself.

Security settings

Typically, an application's security settings will depend upon multiple variables, some of which are provided as standard by Divio's cloud environments.

Other variables specific to the application will need to be applied manually for each environment.


Database

Database credentials, if required, are provided in a DATABASE_URL environment variable. When a database (and therefore the environment variable) is not available (for example, during the Docker build phase) the application should fall back safely to a null database option or an in-memory database (e.g. using sqlite://:memory:).
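A sketch of this fallback in Python (the in-memory SQLite URL shown is one conventional choice):

```python
import os

# use the configured database when DATABASE_URL is present; otherwise
# fall back safely to an in-memory SQLite database (e.g. during builds)
database_url = os.environ.get("DATABASE_URL", "sqlite://:memory:")
```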

Static files

There are numerous options for static file serving. You can opt to serve them directly from the application, or to configure the web server/gateway server to handle them.

If your application framework can handle file serving with a reasonable degree of efficiency, in most cases it is perfectly adequate to serve them from the application, at least to begin with.


Media

If your application needs to handle generated or user-uploaded media, it should use a media object store.

Media credentials are provided in DEFAULT_STORAGE_DSN. See how to parse the storage DSN to obtain the credentials for the media object stores we provide. Your application needs to be able to use these credentials to configure its storage.

When working in the local environment, it's convenient to use local file storage instead (which can also be configured using a variable provided by the local environment).

Modern application frameworks such as Django make it straightforward to configure an application to use storage on multiple different backends, and are supported by mature libraries that will let you use a Divio-provided S3 or MS Azure storage service as well as local file storage.
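For example, a sketch of switching on the DSN's scheme (the DEFAULT_STORAGE_DSN variable is as above; the backend labels and the s3/az scheme names are illustrative assumptions):

```python
import os
from urllib.parse import urlsplit

def storage_backend(dsn):
    """Choose a storage backend based on the DSN's scheme."""
    scheme = urlsplit(dsn).scheme
    if scheme == "file":
        return "local"         # local file storage for development
    if scheme in ("s3", "az"):
        return "object-store"  # S3 or Azure blob storage on the cloud
    raise ValueError(f"unknown storage scheme: {scheme}")

backend = storage_backend(os.environ.get("DEFAULT_STORAGE_DSN", "file:///data/media/"))
```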

Local file storage is not a suitable option

Your code may expect, by default, to be able to read and write files in local file storage (i.e. files in the same file system as the application itself).

This will not work well on Divio or any similar platform. Our stateless containerised application model does not provide persistent file storage. Instead, your code should expect to use a dedicated file storage service; we provide AWS S3 and MS Azure blob storage options.

Other values

You may need to make use of other variables for your application - take every opportunity to use the provided variables, so that your codebase contains as little configuration as possible.

Local container orchestration with docker-compose.yml

What's described above is fundamentally everything you need in order to deploy your application to Divio. You could deploy your application with that alone.

However, you would be missing out on a lot of value. Being able to build and then run the same application, in a very similar environment, locally on your own computer before deploying it to the cloud makes development and testing much more productive. This is what we'll consider here.

docker-compose.yml is only used locally

Cloud deployments do not use Docker Compose; nothing that you do here will affect the way your application runs in a cloud environment. See the docker-compose.yml reference for details.

Create a docker-compose.yml file, for local development purposes. This will replicate the web image used in cloud deployments, allowing you to run the application in an environment as close to that of the cloud servers as possible. Amongst other things, it will allow the application to use a Postgres or MySQL database (choose the appropriate lines below) running in a local container, and provides convenient access to files inside the containerised application.

Take note of the comments in the file below; some lines require you to make a choice.

version: "2.4"

services:
  web:
    # the application's web service (container) will use an image based on our Dockerfile
    build: "."
    # map the internal port 80 to port 8000 on the host
    ports:
      - "8000:80"
    # map the host directory to /app (which allows us to see and edit files inside the container);
    # /app assumes you're using that path in the Dockerfile;
    # /data is a suggestion for local media storage - see below
    volumes:
      - ".:/app:rw"
      - "./data:/data:rw"
    # an optional default command to run whenever the container is launched - this will override
    # the Dockerfile's CMD, allowing your application to run with a server suitable for
    # development - this example is for Django
    command: python manage.py runserver 0.0.0.0:80
    # a link to database_default, the application's local database service
    links:
      - "database_default"
    env_file: .env-local

  # select ONE of the two database configurations below and delete the other

  # Postgres:
  database_default:
    image: postgres:13.5-alpine
    environment:
      POSTGRES_DB: "db"
      POSTGRES_HOST_AUTH_METHOD: "trust"
      SERVICE_MANAGER: "fsm-postgres"
    volumes:
      - ".:/app:rw"

  # MySQL:
  database_default:
    image: mysql:5.7
    environment:
      MYSQL_DATABASE: "db"
      MYSQL_ALLOW_EMPTY_PASSWORD: "yes"
      SERVICE_MANAGER: "fsm-mysql"
    volumes:
      - ".:/app:rw"
      - "./data/db:/var/lib/mysql"
    healthcheck:
      test: "/usr/bin/mysql --user=root -h 127.0.0.1 --execute \"SHOW DATABASES;\""
      interval: 2s
      timeout: 20s
      retries: 10

Local configuration using .env-local

As you will see above, the web service refers to an env_file containing the environment variables that will be used in the local development environment.

Divio cloud applications include a number of environment variables as standard. In addition, user-supplied variables may be applied per-environment.

If the application refers to its environment for variables to configure database, storage or other services, it will need to find those variables even when running locally. On the cloud, the variables will provide configuration details for our database clusters, or media storage services. Clearly, you don't have a database cluster or S3 instance running on your own computer, but Docker Compose can provide a suitable database running locally, and you can use local file storage while developing.

Create a .env-local file. In this you need to provide some environment variables that are suitable for the local environment. The example below assumes that your application will be looking for environment variables to configure its access to a Postgres or MySQL database, and for local file storage:

# Select one of the following for the database
DATABASE_URL=postgres://postgres@database_default:5432/db
DATABASE_URL=mysql://root@database_default:3306/db

# Storage will use local file storage in the data directory
DEFAULT_STORAGE_DSN=file:///data/media/?url=%2Fmedia%2F

In cloud environments, we provide a number of useful variables. If your application needs to make use of them, you should provide suitable values for local use in .env-local too.


With this, you have the basics for a Dockerised application that can equally effectively be deployed in a production environment or run locally, using environment variables for configuration in either case.

Building and running

Build with Docker

Now you can build the application containers locally:

docker-compose build

Check the local site

You may need to perform additional steps such as migrating a database. To run a command manually inside the Dockerised environment, precede it with docker-compose run web. (For example, to run Django migrations: docker-compose run web python manage.py migrate.)

To start up the site locally to test it:

docker-compose up

Access the site at http://127.0.0.1:8000. You can set a different port in the ports option of docker-compose.yml.


Git

Your code needs to be in a Git repository in order to be deployed on Divio.

You will probably want to exclude some files from the Git repository, so check your .gitignore and ensure that nothing will be committed that you don't want included.

If using the suggestions above, you'll probably want:

# used by the Divio CLI
.divio

# for local file storage
/data


Deployment

Your application is ready for deployment on our cloud platform. The basic steps are:

  • create an application on the Divio Control Panel, with any required services
  • push your code/connect your Git repository
  • deploy one or more cloud environments

These steps are covered in more detail in Deploy your application to Divio.