This document will take you step-by-step through the tasks required to set up a portable, vendor-neutral application for deployment to Divio using Docker. The application architecture we adopt is in line with Twelve-factor design principles.
Use the guide to help you adapt an existing application for Docker or check that your existing Docker application will run on Divio. The exact steps you need to take will depend on details of your application.
The steps here should work with any Flask application, and include configuration for:
a Postgres or MySQL database
cloud media storage using S3
static file handling using WhiteNoise
a gateway server
Your application needs to be managed in a Git repository, hosted on Divio or your preferred Git host.
You need to have at least a minimal working application ready to be deployed, whether it is Docker-ready already or not.
Dockerfile - define how to build the application#
The application needs a Dockerfile at the root of the repository that defines how to build the application.
The Dockerfile starts by importing a base image. For a Flask application, you can use:
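For example, using the base image discussed below:

```
FROM python:3.8
```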
python:3.8 is the name of the Docker base image. We cannot advise on what base image you should use; you'll need to use one that is in line with your application's needs. However, once you have a working set-up, it's good practice to move to a more specific base image. See Docker base images for more information.
Install system-level dependencies#
The Dockerfile needs to install any system dependencies required by the application. For example, if your chosen base image is Debian-based, you might run:

```
RUN apt-get update && apt-get install -y <list of packages>
```
We recommend setting up a working directory early on in the Dockerfile, before you need to write any files. For example:

```
# set the working directory
WORKDIR /app
# copy the repository files to it
COPY . /app
```
Install application-level dependencies#
The next step is to install application-level dependencies.
```
# install dependencies listed in the repository's requirements file
RUN pip install -r requirements.txt
```
The requirements.txt file should pin Python dependencies as firmly as possible (use the output from pip freeze to get a full list). You will probably need to include some of the following:

```
flask
# Select one of the following for the database as required
psycopg2
mysqlclient
# Select one of the following for the gateway server
uwsgi
gunicorn
```
Check that the version of Flask is correct, and include any other Python components required by your application.
If the application needs to perform any build operations to generate files, they should be run in the Dockerfile, which can make use of frameworks that do this work.
EXPOSE informs Docker that the container listens on the specified ports at runtime; typically, you’d need:
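Since the gateway server commands below listen on port 80:

```
EXPOSE 80
```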
Launch a server running on port 80 by including a CMD instruction at the end of the Dockerfile:

```
# Select one of the following application gateway server commands
CMD uwsgi --http=0.0.0.0:80 --module="flaskr:create_app()"
CMD gunicorn --bind=0.0.0.0:80 --forwarded-allow-ips="*" "flaskr:create_app()"
```
Use CMD, not ENTRYPOINT, to start the server

CMD provides a default way to start the server that can also be overridden. This is useful when working locally, where we would often use the docker-compose.yml to issue a startup command that is more suited to development purposes. It also allows our infrastructure to override the default, for example in order to launch containers without starting the server, when some other process needs to be executed. An ENTRYPOINT that starts the server would not allow this.
If you are using a Docker entrypoint script, it’s good practice to conclude it with
exec "$@", so that any
commands passed to the container will be executed as expected.
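A minimal sketch of such a script (the setup step is a placeholder for whatever your container needs to do first):

```
#!/bin/sh
set -e

# ... any setup steps the container needs before starting ...

# hand over to the command passed to the container, so that it runs
# as PID 1 and receives signals correctly
exec "$@"
```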
Access to environment and services#
During the build process, Docker has no access to the application’s environment or services.
This means you cannot run database operations such as database migrations during the build process. Instead, these should be handled later as release commands.
Configuring your application#
Your application will require configuration. You may be used to hard-coding such values in the application itself - though you can do this on Divio, we recommend not doing it. Instead, all application configuration (for access to services such as the database, security settings, etc) should be managed via environment variables.
Divio provides services such as database and media. To access them, your application will need the credentials. For each service in each environment, we provide an environment variable containing the values required to access it (for example, DATABASE_URL for the database, or DEFAULT_STORAGE_DSN for media storage).
Your application should:
read the variables
parse them to obtain the various credentials they contain
configure its access to services using those credentials
We also provide environment variables for things like security configuration.
There are various Python helper libraries available that can parse environment variables and extract the settings, so that you can apply them to the application.
Typically, an application’s security settings will depend upon multiple variables. Some that are typically needed are provided by Divio’s cloud environments. For example, your application is likely to need information about:
a random secret key, provided by SECRET_KEY
Other variables specific to the application will need to be applied manually for each environment.
Database credentials, if required, are provided in a DATABASE_URL environment variable. When a database (and therefore the environment variable) is not available (for example during the Docker build phase), the application should fall back safely to a null database option or to an in-memory database.
See the Django guide for a concrete example.
Your own application should do something similar if it needs to use the database.
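A minimal sketch of such a fallback, using only the standard library (the function names and the SQLAlchemy-style URI are illustrative, not part of Divio's API):

```python
import os
from urllib.parse import urlparse


def database_uri():
    """Return a database URI from the environment, with a safe fallback.

    When DATABASE_URL is not set (for example during the Docker build
    phase), fall back to an in-memory SQLite database.
    """
    return os.environ.get("DATABASE_URL") or "sqlite:///:memory:"


def database_credentials(url):
    """Split a DSN such as postgres://user:password@host:5432/name
    into its component credentials."""
    parts = urlparse(url)
    return {
        "scheme": parts.scheme,
        "user": parts.username,
        "password": parts.password,
        "host": parts.hostname,
        "port": parts.port,
        "name": parts.path.lstrip("/"),
    }
```

With Flask-SQLAlchemy, for example, the result of database_uri() could be assigned to app.config["SQLALCHEMY_DATABASE_URI"].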
There are numerous options for static file serving. You can opt to serve them directly from the application, or to configure the web server/gateway server to handle them.
If your application needs to handle generated or user-uploaded media, it should use a media object store.
Media credentials are provided in the DEFAULT_STORAGE_DSN environment variable. See how to parse the storage DSN to obtain the credentials for the media object stores we provide. Your application needs to be able to use these credentials to configure its storage.
When working in the local environment, it's convenient to use local file storage instead (which can also be configured using a variable provided by the local environment).
Modern application frameworks such as Django make it straightforward to configure an application to use storage on multiple different backends, and are supported by mature libraries that will let you use a Divio-provided S3 or MS Azure storage service as well as local file storage.
Local file storage is not a suitable option
Your code may expect, by default, to be able to write and read files in local file storage (i.e. files in the same file-space as the application itself).
This will not work well on Divio or any similar platform. Our stateless containerised application model does not provide persistent file storage. Instead, your code should expect to use a dedicated file storage; we provide AWS S3 and MS Azure blob storage options.
You may need to make use of other variables for your application - take every opportunity to make use of the provided variables so that your codebase can contain as little configuration as possible.
Local container orchestration with Docker Compose#
What’s described above is fundamentally everything you need in order to deploy your application to Divio. You could deploy your application with that alone.
However, you would be missing out on a lot of value. Being able to build and then run the same application, in a very similar environment, locally on your own computer before deploying it to the cloud makes development and testing much more productive. This is what we’ll consider here.
docker-compose.yml is only used locally
Cloud deployments do not use Docker Compose. Nothing that you do here will affect the way your application runs in a cloud environment. See docker-compose.yml.
Add a docker-compose.yml file, for local development purposes. This will replicate the web image used in cloud deployments, allowing you to run the application in an environment as close to that of the cloud servers as possible. Amongst other things, it will allow the application to use a Postgres or MySQL database (choose the appropriate lines below) running in a local container, and provides convenient access to files inside the container.

Take note of the highlighted lines below; some require you to make a choice.
```
services:
  # the application's web service (container) will use an image based on our Dockerfile
  web:
    build: "."
    # map the internal port 80 to port 8000 on the host
    ports:
      - "8000:80"
    # map the host directory to /app (which allows us to see and edit files inside the container)
    volumes:
      - ".:/app:rw"
    env_file: .env-local
    # the default command to run whenever the container is launched
    command: flask run --host=0.0.0.0 --port=80
    # a link to database_default, the application's local database service
    links:
      - "database_default"

  # Select one of the following db configurations for the database
  database_default:
    # Postgres:
    image: postgres
    # MySQL:
    # image: mysql
    # healthcheck:
    #   test: "/usr/bin/mysql --user=root -h 127.0.0.1 --execute \"SHOW DATABASES;\""
```
Local configuration using .env-local#
As you will see above, the
web service refers to an
env_file containing the environment variables that will be
used in the local development environment.
If the application refers to its environment for variables to configure database, storage or other services, it will need to find those variables even when running locally. On the cloud, the variables will provide configuration details for our database clusters, or media storage services. Clearly, you don’t have a database cluster or S3 instance running on your own computer, but Docker Compose can provide a suitable database running locally, and you can use local file storage while developing.
Add a .env-local file. In this you need to provide some environment variables that are suitable for the local environment. The example below assumes that your application will be looking for environment variables to configure its access to a Postgres or MySQL database, and for local file storage (the values shown are illustrative):

```
FLASK_APP=flaskr
# Select one of the following for the database
DATABASE_URL=postgres://postgres@database_default:5432/db
# DATABASE_URL=mysql://root@database_default:3306/db
# local file storage for media
DEFAULT_STORAGE_DSN=file:///data/media/?url=%2Fmedia%2F
```

The FLASK_APP variable is used by the flask run command. It assumes that your application can be found at flaskr; amend this appropriately if required.
With this, you have the basics for a Dockerised application that can equally effectively be deployed in a production environment or run locally, using environment variables for configuration in either case.
Building and running#
Build with Docker#
Now you can build the application containers locally:
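For example, with the docker-compose command used throughout this guide:

```
docker-compose build
```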
Check the local site#
You may need to perform additional steps such as migrating a database. To run a command manually inside the Dockerised environment, precede it with docker-compose run web. (For example, to run Django migrations: docker-compose run web python manage.py migrate.)
To start up the site locally to test it:
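For example:

```
docker-compose up
```

This starts the services defined in docker-compose.yml.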
Access the site at http://127.0.0.1:8000/. You can set a different port in the ports option of docker-compose.yml.
Your code needs to be in a Git repository in order to be deployed on Divio.
You will probably want to exclude some files from the Git repository, so check your
.gitignore and ensure that
nothing will be committed that you don’t want included.
If using the suggestions above, you'll probably want:

```
# Python byte-code
*.pyc
# OS-specific patterns - add your own here
```
Your application is ready for deployment on our cloud platform. The basic steps are:
create an application on the Divio Control Panel, with any required services
push your code/connect your Git repository
deploy one or more cloud environments
These steps are covered in more detail in Deploy your application to Divio.