
Using unbuffered pipe as “dummy” file output

I’ve been dealing with a weird issue that I can’t find a way to solve.

My situation is as follows.

I have an application in Python called “app1” that requires a file for outputting the results of its execution.

I have a secondary application, called “app2”: a binary that reads its input from stdin.

I want to pipe what “app1” is generating directly into “app2” for processing, which in an ideal situation would look like this:

app1 | app2

But, as I said, there are some restrictions, like the fact that app1 requires a file as its output. The first solution I found for “fooling” app1 into writing somewhere I can read from is to create a named pipe with mkfifo, so I can feed it into app2’s stdin. Like this:

pipe='/tmp/output_pipe'
mkfifo "$pipe"

python app1 -o "$pipe" &
app2 < "$pipe"

The problem is that, eventually, app1 generates output faster than app2 can consume it, and because a pipe has a fixed-size kernel buffer, the pipe fills up, writes to it block, and everything stops working.
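One workaround here would be to put an unbounded in-memory buffer between the two programs, so app1 never blocks on a full pipe. Below is a minimal sketch in Python; the FIFO path matches the example above, and it assumes app2 is on the PATH (both assumptions, not anything the question guarantees):

# relay.py — hypothetical sketch: drain the FIFO as fast as app1 writes,
# buffer the data in memory, and feed app2 at whatever pace it can manage.
import queue
import subprocess
import sys
import threading

FIFO = "/tmp/output_pipe"   # assumed path, matching the example above

buf = queue.Queue()         # unbounded in-memory buffer

def drain_fifo():
    # Runs in a thread so the FIFO keeps being drained even while
    # the main thread is blocked writing to a slow app2.
    with open(FIFO, "rb") as f:
        while True:
            chunk = f.read(65536)
            if not chunk:       # app1 closed its end of the pipe
                break
            buf.put(chunk)
    buf.put(None)               # sentinel: no more data

proc = subprocess.Popen(["app2"], stdin=subprocess.PIPE)
threading.Thread(target=drain_fifo, daemon=True).start()

while True:
    chunk = buf.get()
    if chunk is None:
        break
    proc.stdin.write(chunk)

proc.stdin.close()
sys.exit(proc.wait())

The pipeline then becomes “python app1 -o /tmp/output_pipe & python relay.py”. The obvious trade-off is that memory use grows without bound if app2 never catches up.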

Then I used this other approach:

python app1 -o /dev/stdout | app2

But the situation is the same: stdout, when connected to a pipe, has the same buffer size restrictions.
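If a bigger (though still bounded) buffer would be enough, Linux lets you enlarge a pipe’s kernel buffer through fcntl. A sketch, assuming Linux and Python 3.10+ (where fcntl.F_SETPIPE_SZ is exposed); the ceiling for unprivileged processes is /proc/sys/fs/pipe-max-size, 1 MiB by default:

import fcntl
import os

# Open one end of the FIFO non-blocking (a read end can be opened even
# before any writer exists) and ask the kernel for a 1 MiB buffer.
# fcntl returns the size actually granted, which may be rounded up.
fd = os.open("/tmp/output_pipe", os.O_RDONLY | os.O_NONBLOCK)
granted = fcntl.fcntl(fd, fcntl.F_SETPIPE_SZ, 1 << 20)
print(f"pipe buffer is now {granted} bytes")
# Note: keep fd open while app1/app2 run; once every descriptor on the
# FIFO is closed, the underlying pipe (and its resized buffer) goes away.

This only raises the limit rather than removing it, so it merely delays the stall if app2 is consistently slower than app1.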

Does anyone have an idea of how I can solve this specific scenario?

TL;DR: I need a “dummy” file that will act as stdout but without the standard size restrictions of a pipe.


Answer

Well. My bad.

It was not a buffer problem, as some people suggested here.

It was a CPU cap problem. Both applications were consuming 100% of the CPU and RAM when running, and that’s why the application crashed.
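For anyone debugging something similar, one quick way to tell a full-pipe stall from CPU/RAM saturation is to sample per-process usage while the pipeline runs. A hypothetical check using the third-party psutil package (the process names below are placeholders for app1’s interpreter and the app2 binary):

import time
import psutil  # third-party: pip install psutil

# Grab the processes of interest ("python" for app1, "app2" for the binary).
procs = [p for p in psutil.process_iter(["name"])
         if p.info["name"] in ("python", "app2")]

for p in procs:
    p.cpu_percent(None)   # first call primes the counters, returns 0.0
time.sleep(1)
for p in procs:
    # Near-100% CPU on both points at saturation, not a blocked pipe.
    print(p.info["name"], p.cpu_percent(None), round(p.memory_percent(), 1))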
