docker-compose.yml is used exclusively for local project set-up

In the Divio project architecture, the docker-compose.yml file is not used for cloud deployments, but only for configuration of the local environment. On the cloud, deployment is handled by dedicated systems on our servers.

This means that entries in or changes to docker-compose.yml will not affect cloud deployments in any way.
In order to do something useful with containers, they have to be arranged - orchestrated - as part of a project, usually referred to as an ‘application’.
There are multiple ways of orchestrating a Docker application, but Docker Compose is probably the most human-friendly. It’s what we use for our local development environments.
To configure the orchestration, Docker Compose uses a docker-compose.yml file. It specifies what images are required, what ports they need to expose, whether they have access to the host filesystem, what commands should be run when they start up, and so on.
Services defined in the docker-compose.yml file represent the containers that will be created in the application.
When you create a new Divio project using one of our quickstart repositories or one of the defined project types, it will include a docker-compose.yml file ready for local use, with the services already defined.
If you start with a Build your own project type, you will need to assemble the docker-compose.yml file yourself. This is a fairly straightforward process once you know what you are doing. Our How to deploy a web application guide includes steps for creating a complete docker-compose.yml file from scratch.
For a working local project, various things need to be defined in the file. In a Divio project, there will be a web service, built in a container using the Dockerfile. There will typically also be a db service, from a postgres or other database image.
Most Divio projects will use a docker-compose.yml that contains entries along these lines:
```yaml
web:
  build: .
  links:
    - "database_default"
  ports:
    - "8000:80"
  volumes:
    - ".:/app:rw"
    - "./data:/data:rw"
  command: python manage.py runserver 0.0.0.0:80
  env_file: .env-local

database_default:
  image: postgres:9.6
  environment:
    POSTGRES_DB: "db"
    POSTGRES_HOST_AUTH_METHOD: "trust"
    SERVICE_MANAGER: "fsm-postgres"
  volumes:
    - ".:/app:rw"
```
Some projects will have additional services (such as Celery) defined.
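As an illustration only (the service name, application name and command below are hypothetical placeholders, not Divio defaults), an additional Celery worker could be added as an extra service along these lines:

```yaml
celery_worker:
  build: .                      # reuse the same image as the web service
  links:
    - "database_default"
  command: celery -A myapp worker --loglevel=info   # "myapp" is a placeholder
  env_file: .env-local
```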
Let’s look at the components of the file more closely.
The application container service, web

The first definition in the file is for the web service. In order, the directives mean:
- build: build it from the Dockerfile in the current directory
- links: a link to the database container (database_default)
- ports: map the external port 8000 to the internal port 80
- volumes: .:/app:rw maps the parent directory on the host to /app in the container, with read and write access; ./data:/data:rw maps the data directory on the host to /data in the container, with read and write access
- command: by default, when the command docker-compose run is issued, execute python manage.py runserver 0.0.0.0:80 (this will override the CMD instruction in the Dockerfile)
- env_file: use the .env-local file to supply environment variables to the container
When you execute a docker-compose command, the volumes directive in the docker-compose.yml file mounts source directories or volumes from your computer at target paths inside the container. If a matching target path exists already as part of the container image, it will be overwritten by the mounted path.
```yaml
volumes:
  - ".:/app:rw"
  - "./data:/data:rw"
```

will mount the entire project code (at the relative path .) as the /app directory inside the container, even if there was already an /app directory there, in read-write mode (i.e. the container can write as well as read files on the host).
This allows you to make changes to the project from your computer during local development that will be picked up by the project inside Docker. These changes will be available to the project only as long as the host directory is mounted inside the container. To make them permanent, they need to be committed to the repository so that they will be picked up when the image and container are rebuilt.
Implications for local testing
Nearly everything in /app in the container is also present in the project repository and thus on the host machine. This means that it is safe to replace the container's /app files with those from the host.
However, any files in /app that are placed there during the build process, i.e. the execution of the Dockerfile, will not be available in the local environment. For a standard Django project, these will include:

- the compiled pip requirements (requirements.txt)
- collected static files
In most cases, this will not matter, but sometimes these files are required in local development. For example, the requirements.txt may contain useful information about dependency relationships, or the Dockerfile may have performed custom processing of static files.
In that case, the - ".:/app:rw" line can be commented out in docker-compose.yml. The container will then use the files baked into the image, rather than the local host's files. This allows the local set-up to replicate the cloud environment even more closely.
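For example, with the mount commented out, the volumes section of the web service would look like this:

```yaml
volumes:
  # - ".:/app:rw"     # commented out: use the /app files baked into the image
  - "./data:/data:rw"
```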
Environment variables are loaded from a file, specified by the env_file directive (.env-local in the example above).
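The file contains simple KEY=value entries. The variable names and values below are purely illustrative, not the actual contents of a Divio project's .env-local:

```
# .env-local (illustrative values only; actual variables depend on your project)
DJANGO_DEBUG=True
DATABASE_URL=postgres://postgres@database_default:5432/db
```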
The database container service, database_default

The second definition is for the database_default service. On the cloud, the project's database runs on one of our database clusters; locally, it runs on a Postgres instance in its own container.
The directives mean:
- image: build the container from the postgres:9.6 image
- volumes: map the parent directory on the host to /app in the container, with read and write access
- environment: sets various environment variables for the running container. The SERVICE_MANAGER variable provides information about the database service so that the Divio CLI can handle it correctly (fsm-postgres and fsm-mysql are currently supported).
See Expose the database's port to the host for an example of adding configuration to docker-compose.yml.
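As a sketch of what such configuration might look like (the host port 5432 here is an arbitrary choice, not a Divio requirement), a ports mapping can be added to the database service:

```yaml
database_default:
  image: postgres:9.6
  ports:
    - "5432:5432"   # host:container; lets local tools connect to the database
```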
Required database service configuration
The Divio CLI expects that the database service will be called database_default (or, in some older projects, db). If the name is changed, operations such as divio project pull db will fail.
The volumes directive needs to map the container's /app directory as described above, for the same reason.
Our Django tutorial is strongly recommended as a way to learn how a docker-compose.yml file can be built from scratch to suit your needs.
The How to configure Celery section describes adding additional services in Docker Compose for a more complex local set-up.