
read() hangs on zombie process

I have a while loop that reads data from a child process using blocking I/O, with the child's stdout redirected into the parent through a pipe. Normally, as soon as the child process exits, the blocking read() returns, because the child's end of the pipe is closed on exit.

Now I have a case where the read() call does not return even though the child process has finished. The child ends up in a zombie state: the operating system is waiting for my code to reap it, but my code is still blocked in read().

The child process itself has no children of its own at the time of the hang, and I see no file descriptors listed under /proc/<child process PID>/fd. The child did, however, fork two daemon processes, whose purpose seems to be to monitor the child (the child is a proprietary application I have no control over, so it is hard to say for sure).

When run from a terminal, the child process I try to read() from exits normally, and the daemon processes it forked terminate as well.

Linux version is 4.19.2.

What could be the reason for read() not returning in this case?

Follow-up: How can I prevent read() from hanging in this situation?


Answer

The child process did however fork two daemon processes … What could be the reason of read() not returning in this case?

The daemon processes forked by the child inherit its open file descriptors, including the write end of the pipe. As long as any process holds a write end open, read() on the other end will never return 0 (EOF), even after the child itself has exited and become a zombie.

A properly written daemon should close all inherited file descriptors as part of daemonizing (and reopen stdin/stdout/stderr on /dev/null or a log file); these daemon processes apparently do not.
