How to configure Celery
This article assumes that you are already familiar with the basics of using Celery with Django, and that Celery is installed in the application you want to deploy on Divio.
If not, please see Celery's documentation.
Add a Celery service to your application
In the application's subscription, add the number of Celery workers you require. You can start with just one and add more later if required.
Note that aside from the Test and Live environments, Celery must be requested separately for each additional environment.
If your environments have not yet been deployed, please deploy each of them. This is required before Celery can be provisioned on the application.
Celery will then be provisioned for your application's environments by our infrastructure team. This includes configuration of new environment variables it will need.
Once provisioned and deployed, your cloud application will run with new Docker instances for the Celery workers. The containers running Celery components use the same image as the web container, but are started up with a different command.
We provide various cloud containers for Celery:
- Celery worker containers (multiple containers, according to the subscription)
- a Celery beat container, to handle scheduling
- a Celery camera, to provide snapshots for monitoring
Note that if your Divio application is on a plan that pauses due to inactivity, this will also pause the Celery containers.
Application configuration
Your application needs configuration to:
- read the environment variables we supply and use them as values for configuring Celery
- start up each Celery container correctly
- for local development, run Celery's workers in their own containers to replicate the cloud configuration
These tasks are covered in order below.
We have a quickstart application with a Celery setup ready to use or to be used as a reference for your changes. You can find it here: https://github.com/divio/django-celery-divio-quickstart.
Using the broker environment variable
For Celery, we provide a DEFAULT_AMQP_BROKER_URL environment variable (the same value is also available as BROKER_URL, provided for legacy Aldryn Celery applications). This provides configuration details for the AMQP message queue that handles Celery tasks. It's in the form:
transport://userid:password@hostname:port/virtual_host
This configuration will need to be passed to Celery for its broker setting (CELERY_BROKER_URL, for Django).
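In a Django project, this typically means reading the environment variable in the settings module. A minimal sketch (the fallback chain, including the guest connection to a local rabbitmq host, is an assumption for illustration; adapt it to your own project):

```python
import os

def broker_url(env=os.environ):
    # Prefer DEFAULT_AMQP_BROKER_URL; fall back to the legacy BROKER_URL
    # name, then to a guest connection to a local "rabbitmq" host.
    return env.get(
        "DEFAULT_AMQP_BROKER_URL",
        env.get("BROKER_URL", "amqp://guest:guest@rabbitmq:5672/"),
    )

# In settings.py:
CELERY_BROKER_URL = broker_url()
```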
For applications using Aldryn Celery
Aldryn Celery will take care of configuration. See Aldryn Celery (legacy) below.
Starting the cloud containers
As noted above, these containers are all instances of the same application image, but are started up by different commands.
For the worker and scheduling containers, your application needs an executable at /usr/local/bin/aldryn-celery, containing:
#!/bin/sh
if [ "$1" = "beat" ]; then
    celery -A path.to.celery.app beat --loglevel=INFO
else
    celery -A path.to.celery.app worker --concurrency=4 --loglevel=INFO --without-gossip --without-mingle --without-heartbeat -Ofair
fi
Note the paths that you will need to specify yourself.
Similarly, on deployment the infrastructure invokes (by default) a Django management command, python manage.py celerycam, to start up the monitoring container.
- If you don't want to use a monitoring container, please inform us, so that we can configure your application to start up without issuing the command (deployments will fail if the command fails).
- If you do want to use a monitoring container, you will need to add a celerycam management command to your application. The command needs to respond to the invocation: python manage.py celerycam --frequency=10 --pidfile=
For an example of a celerycam management command implementation, see how Aldryn Celery does this via the djcelery.snapshot.Camera class from the Django Celery library.
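Whatever implementation you choose, the command must accept the --frequency and --pidfile options shown above. The expected interface can be sketched with stdlib argparse (in a real Django management command you would declare these options in BaseCommand.add_arguments instead; the defaults shown are assumptions):

```python
import argparse

# Sketch only: the invocation a custom celerycam command must accept.
parser = argparse.ArgumentParser(prog="celerycam")
parser.add_argument("--frequency", type=float, default=1.0,
                    help="seconds between snapshots of worker state")
parser.add_argument("--pidfile", default=None,
                    help="pidfile path; may be passed empty")

# The invocation used on deployment:
args = parser.parse_args(["--frequency=10", "--pidfile="])
```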
These entrypoints will be improved in future for developer convenience.
For applications using Aldryn Celery
If using Aldryn Celery, an executable /usr/local/bin/aldryn-celery is provided. Similarly, a celerycam management command is implemented. No further action is required on your part. See Aldryn Celery (legacy) below.
Configure Celery for the local environment
For development purposes you will need to set up Celery in your local environment too, in such a way that it reflects the provision made on our cloud. A complete set-up would include:
function | handled by | on the cloud | local container name |
---|---|---|---|
AMQP message broker service responsible for the creation of task queues | RabbitMQ | CloudAMQP | rabbitmq |
task execution | Celery workers | Celery containers | celeryworker |
scheduling | Celery beat | Celery beat container | celerybeat |
monitoring | Celery snapshots | Celery camera container | celerycam |
Locally, the four new containers will be set up as new services using the docker-compose.yml file.
Note that in the cloud environments, the Celery-related containers are launched automatically. They, and the AMQP message queue, are not directly accessible. All monitoring and interaction must be handled via the main application running in the web container(s). The docker-compose.yml file is not used on the cloud.
Your application will already have other services listed in its docker-compose.yml. Each of the new services will need to be added in a similar way.
RabbitMQ
Set up the RabbitMQ messaging service by adding the following lines:
services:
  web: [...]
  database_default: [...]
  rabbitmq:
    image: rabbitmq:3.9-management
    hostname: rabbitmq
    ports:
      - '15672:15672'
    expose:
      - '15672'
This uses the official Docker RabbitMQ image (the rabbitmq:3.9-management image in turn installs rabbitmq:3.9). It also gives the container a hostname (rabbitmq), and maps and exposes the management interface port (15672).
Celery worker
Next, add a Celery worker service in the same way. This service needs to run a Django environment almost identical to that used by the web service, as it will use the same codebase, need access to the same database and so on. Its definition will therefore be very similar, with key changes noted here:
  celeryworker:
    build: '.'
    links:
      - 'database_default'
      - 'rabbitmq:rabbitmq'
    volumes:
      - '.:/app:rw'
      - './data:/data:rw'
    command: <startup command>
    env_file: .env-local
Rather than copying the example above, use the actual web service in your docker-compose.yml file as its basis, in case it contains other values that need to be present. There's no need for the ports option.
You will need to provide a <startup command> based on the one used to start up the cloud workers.
For applications using Aldryn Celery, use command: aldryn-celery worker.
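For other applications, a plausible <startup command> mirrors the worker branch of the cloud start-up script shown earlier (path.to.celery.app is a placeholder you must replace with your own module path):

```yaml
command: celery -A path.to.celery.app worker --concurrency=4 --loglevel=INFO
```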
Celery beat
Celery beat needs to be set up in much the same way:
  celerybeat:
    build: '.'
    links:
      - 'database_default'
      - 'rabbitmq:rabbitmq'
    volumes:
      - '.:/app:rw'
      - './data:/data:rw'
    command: <startup command>
    env_file: .env-local
You will need to provide a <startup command> based on the one used to start up the cloud scheduler.
For applications using Aldryn Celery, use command: aldryn-celery beat.
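Likewise for the scheduler, a plausible sketch based on the beat branch of the cloud start-up script (again, path.to.celery.app is a placeholder):

```yaml
command: celery -A path.to.celery.app beat --loglevel=INFO
```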
Celery cam
And Celery cam:
  celerycam:
    build: '.'
    links:
      - 'database_default'
      - 'rabbitmq:rabbitmq'
    volumes:
      - '.:/app:rw'
      - './data:/data:rw'
    command: <startup command>
    env_file: .env-local
You will need to provide a <startup command> based on the one used to start up the cloud monitoring container, e.g. python manage.py celerycam --frequency=10 --pidfile=
For applications using Aldryn Celery, use command: aldryn-celery cam.
The web service
Finally, to the links option in web, you also need to add the link to rabbitmq:
  web:
    [...]
    links:
      [...]
      - "rabbitmq:rabbitmq"
Set up local environment variables
In .env-local add:
DEFAULT_AMQP_BROKER_URL="amqp://guest:guest@rabbitmq:5672/"
For legacy Aldryn Celery applications, name the environment variable BROKER_URL instead of DEFAULT_AMQP_BROKER_URL.
Port 5672 of the RabbitMQ server should not be confused with port 15672 of its management interface.
Run the local application
Build the newly-configured application:
docker compose build
Now docker compose up will start the services that Celery requires.
Note that although the Django runserver in your web container will restart automatically to load new code whenever you make changes, that will not apply to the other services. These will need to be restarted manually, for example by stopping and restarting the local application or by running docker compose restart. (Usually, only the celeryworker container needs to be restarted, so you can do docker compose restart celeryworker.)
If you make any local changes to an application's configuration that need to be accessible to the Celery workers, run docker compose build to rebuild the affected containers.
Environment variables
When Celery is enabled for your application, a new environment variable DEFAULT_AMQP_BROKER_URL will be configured. (It's also provided as BROKER_URL for legacy Aldryn Celery applications.)
The environment variable will have different values in different cloud environments.
The number of Celery workers per Docker instance can be configured with the CELERYD_CONCURRENCY environment variable. The default is 2. This can be increased, but in that case, you will need to monitor your own RAM consumption via the Control Panel.
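A start-up script can honour this variable explicitly; a minimal stdlib sketch of reading it with the documented default of 2 (the helper name is hypothetical):

```python
import os

def worker_concurrency(env=os.environ):
    # CELERYD_CONCURRENCY controls the number of Celery worker processes
    # per Docker instance; 2 is the documented default.
    return int(env.get("CELERYD_CONCURRENCY", "2"))
```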
For applications using Aldryn Celery
Other environment variables used by Aldryn Celery can be found in its aldryn_config.py.
Aldryn Celery (legacy)
Aldryn Celery is an Aldryn Addon wrapper application that installs and configures Celery in your application, exposing multiple Celery settings as environment variables for fine-tuning its configuration.
Aldryn Celery installs components including Celery itself and Django Celery. The addon is no longer updated, and installs an older version of Celery. Applications currently using Aldryn Celery will eventually need to be updated to maintain compatibility with other dependencies of the application.