PHP-FPM is killing my website: processes are blocked with status D

After days of searching the web, Stack Overflow, Google, everywhere, I cannot understand what happens to PHP-FPM after hours of working normally.

Description of the problem:

I have an Ubuntu 16.04 VPS where I have installed PHP-FPM, Nginx, and a small redis-server to store sessions. I have 4 websites running under PHP-FPM. All of the websites are fine; just one of them has this problem.

PHP-FPM communicates with Nginx over Unix sockets.
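
For context, the pool side of that socket looks roughly like this; the socket path is taken from the Nginx config further down, while the file name and the owner/group/backlog values are assumptions rather than my exact settings:

; /etc/php5/fpm/pool.d/mysite.com.conf (file name assumed)
[mysite.com]
listen = /var/run/php5-fpm-mysite.com.sock
listen.owner = www-data
listen.group = www-data
; pending-connection backlog on the socket (the kernel caps this at net.core.somaxconn, see below)
listen.backlog = 4096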

After hours of working properly, the PHP-FPM processes suddenly stop working and show status D when I run the htop command. Here is a screenshot of the htop output:

[screenshot: htop showing the php-fpm worker processes in state D]

After searching the internet, I learned that status D means the process is in uninterruptible sleep, i.e. waiting on a resource (usually I/O).
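
A quick way to see what a D-state worker is actually waiting on (PID is a placeholder; reading /proc/<PID>/stack needs root):

# list processes in uninterruptible sleep and the kernel function they are waiting in
ps -eo pid,stat,wchan:32,cmd | awk '$2 ~ /^D/'

# kernel stack of one blocked php-fpm worker
sudo cat /proc/<PID>/stack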

I have given the MySQL server more memory, but nothing changes. The MySQL server responds fine when I execute commands from Workbench or any other application.

Perhaps it’s a memory problem?

I added memory to the VPS and it now runs with 6 GB (most of which is unused). PHP-FPM still ends up in status D after hours of running.

Perhaps it’s related to open file descriptors?

I raised the open file descriptor limit to 2097152, which is a very large number. I still get the same problem.
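
To rule descriptors out completely, these are the checks I would run (PID is a placeholder for one php-fpm worker):

# system-wide: allocated handles, free handles, maximum
cat /proc/sys/fs/file-nr

# descriptors actually held by one worker, and that worker's own limit
sudo ls /proc/<PID>/fd | wc -l
grep "open files" /proc/<PID>/limits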

Perhaps it’s a socket problem or a Linux configuration problem?

I have increased most of the relevant Linux kernel parameters like this:

# Increase size of file handles and inode cache
fs.file-max = 2097152

# unix sockets accept by default 127 connections.
net.core.somaxconn = 4096

vm.swappiness = 0
vm.vfs_cache_pressure = 50

#Needed by redis
vm.overcommit_memory = 1

#
# 16MB per socket - which sounds like a lot, but will virtually never
# consume that much.
#
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216

# Increase the number of outstanding syn requests allowed.
# c.f. The use of syncookies.
net.ipv4.tcp_max_syn_backlog = 8192
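
These values sit in /etc/sysctl.conf (path assumed for a stock Ubuntu setup) and can be re-applied and spot-checked without a reboot:

sudo sysctl -p              # reload values from /etc/sysctl.conf
sysctl net.core.somaxconn   # verify one key took effect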

But I continue to have the same problem. This is what I get in the Nginx log:

2016/07/17 22:57:30 [alert] 1885#1885: *59394 open socket #156 left in connection 117
2016/07/17 22:57:30 [alert] 1885#1885: *59341 open socket #107 left in connection 118
2016/07/17 22:57:30 [alert] 1885#1885: *59385 open socket #148 left in connection 119
2016/07/17 22:57:30 [alert] 1885#1885: *59392 open socket #154 left in connection 121

I have tried most of the recommended solutions found on the web, but without success.

I have changed these parameters in php-fpm.conf:

emergency_restart_threshold = 30
emergency_restart_interval = 180
process_control_timeout = 30

Here is the PHP-FPM config of the pool:

pm = ondemand
pm.max_children = 30
pm.process_idle_timeout = 10s;
pm.max_requests = 500

This is my nginx site config:

fastcgi_buffers 256 16k;
fastcgi_max_temp_file_size 0;

    location ~ ^/index\.php(/|$) {
        fastcgi_pass unix:/var/run/php5-fpm-mysite.com.sock;
        fastcgi_split_path_info ^(.+\.php)(/.*)$;
        include fastcgi_params;
        fastcgi_param  SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
        fastcgi_param DOCUMENT_ROOT $realpath_root;
        internal;
    }

Nginx Global config:

worker_processes 2;
worker_rlimit_nofile 100000;

pid /run/nginx.pid;

events {
        worker_connections 1024;
        multi_accept on;
}

Last thing: two weeks ago I was running Ubuntu 14.04, and I upgraded the server to Ubuntu 16.04, which brought a lot of issues. But for this one, I cannot work out exactly where the problem comes from.

I’m using OPcache to cache code; I have increased all its memory parameters, the website works fine and the cache is never full.

I have already restarted the server many times to apply the configuration changes.

Disk: 50% full, so I have plenty of space.

Note that when the PHP-FPM processes are blocked, restarting the service only helps for a few seconds before the same problem comes back. I did the same thing with Nginx and got the same result. The only way to make the website work again is to restart the whole system.

Please, any help is welcome!

Answer

After days of looking for a solution: the problem was not related to Linux inodes, not related to memory, and not related to sockets…

It’s related to application code.

I use the Symfony2 framework, and for some reason I had changed the parameter “auto_generate_proxy_classes” to true and pushed that code to production.

When auto_generate_proxy_classes is set to true, Doctrine checks all proxy classes and regenerates them on every request. So under heavy traffic, many PHP-FPM processes were regenerating these classes at the same time, and each process was blocked until the others finished the code generation.

Solution:

Instead of:

doctrine:
    dbal:
        ....
    orm:
        auto_generate_proxy_classes: true

Put the default Symfony2 config:

doctrine:
    dbal:
        ....
    orm:
        auto_generate_proxy_classes: "%kernel.debug%"