
How to configure supervisor in docker correctly

I have a working Laravel environment using Docker. My project has multiple services in different containers, such as Redis, MongoDB, MySQL, and Node.js. I want to use Supervisor in my project to interact with Redis for the queues and PHP to run the jobs. I have done some testing and research, but I really can't make it work.

So here is my Dockerfile:


And my docker-compose.yml file:


You might also want to see my supervisord.conf:


With that setup, when the containers are up, it seems that supervisord is not working, because if I run `php artisan horizon` manually in my PHP container, the queuing works perfectly. By the way, Horizon is the tool I use for queuing.

I also tried to run supervisorctl in my PHP container and got this error: `unix:///var/run/supervisord.sock no such file`.

I'm pretty new to Docker; I just started a few months ago. I do know how to configure supervisord on Linux, but I can't make it work in Docker.

so please pardon my stupidity 🙂


Answer

The idea here is to eliminate Supervisor and instead run whatever it used to manage in several different containers. You can easily orchestrate this with docker-compose: for example, all services can run the same image with different CMD overrides, or the same image with a different CMD layer at the end to split them out. The trouble with Supervisor inside Docker is that it can't communicate the status of the processes it manages back to Docker: the container will always look "alive" even if every managed process has crashed. Exposing those processes as individual containers means you actually get to see when they crash.
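As a rough sketch of this layout (service names and the Redis image tag here are assumptions, not taken from your files; adjust them to match your project), the same PHP image can be reused with different `command` overrides:

```yaml
# docker-compose.yml (sketch; names are placeholders)
services:
  app:
    build: .                    # your existing PHP image
    command: php-fpm            # the web-facing PHP process
  horizon:
    build: .                    # same image, different process
    command: php artisan horizon
    restart: unless-stopped     # Docker restarts it if it crashes
    depends_on:
      - redis
  redis:
    image: redis:7-alpine       # official pre-built image
```

Because each process is its own container, a crashed Horizon worker shows up immediately in `docker ps`, and the `restart` policy takes the place of Supervisor's auto-restart.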

What's best is to break each of these services out into a separate container. Since there are official pre-built images for MySQL, Redis, MongoDB, and so on, there's really no reason to build your own. What you want to do is translate that supervisord config into docker-compose format.
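The translation is mostly mechanical: each `[program:x]` stanza becomes a compose service. As an illustration (the supervisord stanza below is a guess at what a typical Horizon entry looks like, not your actual file):

```yaml
# A typical supervisord stanza:
#   [program:horizon]
#   command=php artisan horizon
#   autostart=true
#   autorestart=true
#
# becomes a compose service:
services:
  horizon:
    build: .
    command: php artisan horizon
    restart: unless-stopped   # the compose equivalent of autorestart=true
```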

With separate containers you can run `docker ps` to see whether your services are running correctly; they'll all be listed individually. If you need to upgrade one, you can do that easily by working with just that one container, instead of having to pull down the whole thing.
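In practice that day-to-day workflow looks something like this (the `horizon` service name is an assumed example; substitute your own):

```shell
# List each service container and its status individually
docker-compose ps

# Rebuild and restart just one service without touching the others
docker-compose up -d --no-deps --build horizon

# Tail the logs of that one process
docker-compose logs -f horizon
```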

The way you're approaching it here treats Docker like a fancy VM, which it really isn't. It's closer to a process manager, where the processes happen to come with pre-built disk images and a security layer around them.

Compose your environment out of single-process containers and your life will be way easier from both a maintenance perspective and a monitoring one.

If you can express this configuration as something docker-compose can deal with, then you're one step closer to moving to a more sophisticated management layer like Kubernetes, which might be the logical conclusion of this particular migration.

User contributions licensed under: CC BY-SA