I have a working Laravel environment using Docker. My project has multiple services in different containers, such as Redis, MongoDB, MySQL and Node.js. I want to use Supervisor in my project to interact with Redis for the queues and PHP to run the jobs. I have done some testing and research but I really can't make it work.
So here is my Dockerfile:
```dockerfile
FROM php:7.3-fpm

# Copy composer.lock and composer.json
COPY composer.lock composer.json /var/www/

# Set working directory
WORKDIR /var/www

# Install dependencies
RUN apt-get update && apt-get install -y \
    build-essential \
    mariadb-client \
    libpng-dev \
    libzip-dev \
    libjpeg62-turbo-dev \
    libfreetype6-dev \
    locales \
    zip \
    jpegoptim optipng pngquant gifsicle \
    vim \
    unzip \
    git \
    curl \
    cron \
    supervisor

# Clear cache
RUN apt-get clean && rm -rf /var/lib/apt/lists/*

# Install extensions
RUN docker-php-ext-install pdo_mysql mbstring zip exif pcntl
RUN docker-php-ext-configure gd --with-gd --with-freetype-dir=/usr/include/ --with-jpeg-dir=/usr/include/ --with-png-dir=/usr/include/
RUN docker-php-ext-install gd
RUN docker-php-ext-configure bcmath --enable-bcmath
RUN docker-php-ext-install bcmath

# Install mongodb extension
RUN pecl install mongodb && docker-php-ext-enable mongodb

# Install composer
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer

# Add user for laravel application
RUN groupadd -g 1000 www
RUN useradd -u 1000 -ms /bin/bash -g www www

# Copy supervisor configs
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf

# Copy existing application directory contents
COPY . /var/www

# Copy existing application directory permissions
COPY --chown=www:www . /var/www

COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
CMD ["/usr/bin/supervisord"]

# Change current user to www
USER www

# Expose port 9000 and start php-fpm server
EXPOSE 9000
CMD ["php-fpm"]
```
And my docker-compose.yml file:
```yaml
version: '3'
services:

  #PHP Service
  php:
    build:
      context: .
      dockerfile: Dockerfile
    image: digitalocean.com/php
    container_name: php
    restart: unless-stopped
    tty: true
    environment:
      SERVICE_NAME: php
      SERVICE_TAGS: dev
    working_dir: /var/www
    volumes:
      - ./:/var/www
      - ./php/local.ini:/usr/local/etc/php/conf.d/local.ini
      - ./supervisord.conf:/etc/supervisor/conf.d/supervisord.conf
    networks:
      - app-network

  #NODEJS Service
  nodejs:
    image: node:10
    container_name: nodejs
    restart: unless-stopped
    working_dir: /var/www
    volumes:
      - ./:/var/www
    tty: true
    networks:
      - app-network

  #Nginx Service
  nginx:
    image: nginx:alpine
    container_name: nginx
    restart: unless-stopped
    tty: true
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./:/var/www
      - ./nginx/conf.d/:/etc/nginx/conf.d/
    networks:
      - app-network

  #MySQL Service
  mysqldb:
    image: mysql:5.7.22
    container_name: mysqldb
    restart: unless-stopped
    tty: true
    ports:
      - "3306:3306"
    environment:
      MYSQL_DATABASE: ${DB_DATABASE}
      MYSQL_USER: ${DB_USERNAME}
      MYSQL_PASSWORD: ${DB_PASSWORD}
      MYSQL_ROOT_PASSWORD: ${DB_ROOT_PASSWORD}
      SERVICE_TAGS: dev
      SERVICE_NAME: mysql
    volumes:
      - dbdata:/var/lib/mysql
      - ./mysql/my.cnf:/etc/mysql/my.cnf
    networks:
      - app-network

  #MongoDB Service
  mongodb:
    image: mongo:3
    container_name: mongodb
    restart: unless-stopped
    tty: true
    ports:
      - "27017:27017"
    networks:
      - app-network

  #Redis Service
  redis:
    image: redis
    container_name: redis
    restart: unless-stopped
    tty: true
    ports:
      - "${REDIS_PORT}:6379"
    networks:
      - app-network

#Docker Networks
networks:
  app-network:
    driver: bridge

#Volumes
volumes:
  dbdata:
    driver: local
```
You might also want to see my supervisord.conf:
```ini
[supervisord]
user=www
nodaemon=true
logfile=/dev/null
logfile_maxbytes=0
pidfile=/var/run/supervisord.pid
loglevel = INFO

[unix_http_server]
file=/var/run/supervisor.sock
chmod=0700
username=www
password=www

[supervisorctl]
serverurl=unix:///var/run/supervisord.sock
username=www
password=www

[rpcinterface:supervisor]
supervisor.rpcinterface_factory=supervisor.rpcinterface:make_main_rpcinterface

[program:php-fpm]
command = /usr/local/sbin/php-fpm
autostart=true
autorestart=true
priority=5
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0

[program:ohwo-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /var/www/artisan horizon
autostart=false
autorestart=true
user=www
numprocs=1
redirect_stderr=true
stdout_logfile=/var/www/laravel-worker.log
```
So from that setup, when the containers are up it seems that supervisord is not working, because if I run `php artisan horizon` manually in my php container the queuing works perfectly. BTW, Horizon is the tool I use for queuing.
And then I also tried to run `supervisorctl` in my php container, and I got this error: `unix:///var/run/supervisord.sock no such file`.
I'm pretty new to Docker, I only started a few months ago. I do know how to configure supervisord on Linux, but I can't make it work in Docker.
So please pardon my stupidity 🙂
Answer
The idea here is to eliminate the supervisor and instead run whatever the supervisor used to run as several different containers. You can easily orchestrate this with `docker-compose`: for example, run the same image several times with different `CMD` overrides, or build variants of the image with a different `CMD` layer at the end to split it out. The trouble with the supervisor approach is that supervisord can't communicate the status of the processes it manages to Docker: the container will always look "alive" even if all of the processes it manages are completely trashed. Exposing those processes directly as containers means you get to see when they crash.
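For instance, here's a minimal sketch of what that could look like for this project: the `php` service keeps the image's default `php-fpm` command, while a second service (called `worker` here for illustration, not a name from your setup) reuses the same image with a `command:` override that takes over from the `[program:ohwo-worker]` entry in supervisord.conf:

```yaml
# Sketch only: two services built from the same image, each running
# exactly one process, so Docker can see and restart them individually.
services:
  php:
    build: .
    image: digitalocean.com/php
    # no command override, so the image's default CMD ["php-fpm"] runs

  worker:                                  # hypothetical service name
    build: .
    image: digitalocean.com/php
    command: php /var/www/artisan horizon  # was [program:ohwo-worker]
    restart: unless-stopped                # replaces autorestart=true
    depends_on:
      - redis
```

Note how `restart: unless-stopped` takes over the job supervisord's `autorestart=true` used to do: if Horizon dies, Docker restarts it, and the failure is visible from the outside instead of being hidden inside a container that still looks healthy.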
What's best is to break out each of these services into separate containers. Since there are official pre-built images for MySQL and so on, there's really no reason to build those yourself. What you want to do is translate that supervisord config to `docker-compose` format.
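As a rough mapping between the two tools, assuming the `worker` service from the sketch above (these are standard docker-compose commands, nothing specific to this project):

```sh
# supervisorctl status            ->  list the state of each service
docker-compose ps

# numprocs=N                      ->  scale the worker at run time
docker-compose up -d --scale worker=2

# supervisorctl tail ohwo-worker  ->  follow the worker's output
docker-compose logs -f worker
```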
With separate containers you can do things like `docker ps` to see whether your services are running correctly; they'll all be listed individually. If you need to upgrade one, you can do that easily by working with just that one container instead of having to pull down the whole thing.
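For example, rebuilding and replacing only the PHP container (again a sketch, using the service names from this compose file):

```sh
# Rebuild only the php image and recreate only that container,
# leaving mysql, mongo, redis and the others untouched.
docker-compose build php
docker-compose up -d --no-deps php
```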
The way you're attacking it here is treating Docker like a fancy VM, which it really isn't. What it is instead is a process manager, where the processes just happen to come with pre-built disk images and a security layer around them.
Compose your environment out of single-process containers and your life will be way easier, both from a maintenance perspective and a monitoring one.
If you can express this configuration as something `docker-compose` can deal with, then you're one step closer to moving to a more sophisticated management layer like Kubernetes, which might be the logical conclusion of this particular migration.