I’m working with a command line utility that requires passing the name of a file to write output to, e.g.
foo -o output.txt
The only thing it writes to stdout is a message indicating that it ran successfully. I’d like to be able to pipe everything that is written to output.txt to another command line utility. My motivation is that output.txt will end up being a 40 GB file that I don’t need to keep, and I’d rather pipe the streams than work on massive files in a stepwise manner.
Is there any way in this scenario to pipe the real output (i.e. output.txt) to another command? Can I somehow magically pass stdout as the file argument?
Answer
Solution 1: Using process substitution
The most convenient way of doing this is by using process substitution. In bash the syntax looks as follows:
foo -o >(other_command)
(Note that this is a bashism. There are similar solutions for other shells, but the bottom line is that it’s not portable.)
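For example, here is a minimal sketch (gzip is just a stand-in for whatever other_command you actually want; the point is that the 40 GB of output never hits the disk uncompressed):
foo -o >(gzip > output.txt.gz)
Under the hood, bash substitutes a path like /dev/fd/63 for the >(...) expression, so foo believes it has been handed an ordinary file name.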
Solution 2: Using named pipes explicitly
You can do the above explicitly / manually as follows (a combined sketch follows the steps):
1. Create a named pipe using the mkfifo command:
mkfifo my_buf
2. Launch your other command with that file as input:
other_command < my_buf
3. Execute foo and let it write its output to my_buf:
foo -o my_buf
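Put together, a minimal sketch (other_command is again a placeholder; the reader has to be started in the background, because opening a FIFO blocks until both ends are connected):
mkfifo my_buf
other_command < my_buf &   # reader; blocks until a writer opens the pipe
foo -o my_buf              # writer; the two ends unblock each other
wait                       # let other_command drain the pipe and exit
rm my_buf                  # a FIFO holds no data on disk, but tidy up anyway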
Solution 3: Using /dev/stdout
You can also use the device file /dev/stdout as follows:
foo -o /dev/stdout | other_command
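As a quick sanity check of this variant (wc -c stands in for other_command and simply counts the bytes that would otherwise have landed in output.txt):
foo -o /dev/stdout | wc -c
One caveat follows from the question itself: foo also prints its success message to stdout, so here that message gets mixed into the piped data. The process-substitution and FIFO approaches above keep the two streams separate.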