Python – Gracefully restarting Celery in production


I need to restart the Celery daemon, but I need it to tell the current workers to shut down as soon as their current tasks complete, and then start a new set of workers while the old ones are still shutting down.

The daemon's existing graceful restart option waits for all tasks to complete before restarting, which is not useful when you have long-running jobs.

Please do not recommend autoreload as it is not currently documented in 4.0.2.

Solution

Well, what I ended up doing was using supervisord and Ansible to manage this. Here is the supervisord program definition:

[program:celery_worker]
# Number of worker processes you wish to run.
numprocs=5
process_name=%(program_name)s-%(process_num)s
directory=/opt/worker/main
# Ignore this unless you want to use virtualenvs.
environment=PATH="/opt/worker/main/bin:%(ENV_PATH)s"
command=/opt/worker/bin/celery worker -n worker%(process_num)s.%%h --app=python --time-limit=3600 -c 5 -Ofair -l debug --config=celery_config -E
stdout_logfile=/var/log/celery/%(program_name)s-%(process_num)s.log
user=worker_user
autostart=true
autorestart=true
startretries=99999
startsecs=10
stopsignal=TERM
stopwaitsecs=7200
killasgroup=false
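
Assuming the block above ends up somewhere supervisord includes (for example /etc/supervisor/conf.d/celery_worker.conf — the path is an assumption, adjust it to your layout), loading it and checking on the workers looks roughly like this:

# Sketch only; assumes the [program:celery_worker] section above is on supervisord's include path.
supervisorctl reread     # detect the new/changed program section
supervisorctl update     # apply it and start the celery_worker group
supervisorctl status     # should list celery_worker:celery_worker-0 through -4 as RUNNING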

You can use supervisorctl to stop/start the workers in order to load new code, but supervisord will wait for all of the workers to stop before starting them again, which is no good for long-running jobs. It's better to send TERM to the Celery MainProcesses directly, which tells each worker to stop accepting new work and shut down once its current task finishes:

ps aux | grep '[c]elery.*MainProcess' | awk '{print $2}' | xargs kill -TERM

(The [c] in the grep pattern keeps grep from matching its own command line.) supervisord will restart the workers as they exit.
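
For deploys, that one-liner can be wrapped in a small script that Ansible (or whatever pushes your releases) runs on each host after the new code is in place. This is only a sketch of the approach above; the group name comes from the config, everything else is an assumption:

#!/bin/sh
# Sketch of a deploy step: ask each running Celery MainProcess to finish its
# current task and exit; supervisord then respawns the workers on the new code.
set -e

# Warm shutdown: TERM tells Celery to stop accepting work and exit when the
# in-flight task is done.
ps aux | grep '[c]elery.*MainProcess' | awk '{print $2}' | xargs -r kill -TERM

# Watch supervisord bring the replacements back up.
supervisorctl status 'celery_worker:*'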

Of course, it’s nearly impossible to update dependencies without completely stopping all workers, which makes a good case for using something like Docker. 😉
