
How to attach the same console as output for one process and input for another process?

I am trying to use the suckless ii IRC client. I can listen to a channel by running tail -f on the out file. However, is it also possible for me to provide input from the same console by starting an echo or cat command?

If I background the process, it actually displays the output in this console, but that doesn't seem to be the right way. Logically, I think I need to get the fd of the console (but how do I do that?) and then force the tail output to that fd and probably background it. And then use the present bash to start a cat > in.
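For concreteness, this is roughly what I have in mind, assuming ii created the usual in FIFO and out file in the channel directory (the paths below are just an example):

cd ~/irc/irc.example.org/#channel   # example channel directory created by ii
tail -f out &                       # follow the channel log in the background, on this console
cat > in                            # type messages here; each line goes into ii's in FIFO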

Is it actually fine to do this, or am I creating a lot of process overhead for a simple task? In other words, piping a lot of stuff is nice, but doesn't it create a lot of overhead that would ideally be handled in a single process if you are going to repeat the task a lot?


Answer

However, is it also possible for me to provide input from the same console by starting an echo or cat command?

Simply no! cat writes a file's current content; it has no idea that the content will grow later. echo writes its arguments and the results of expansions on the given command line; it is not made for writing the contents of files.
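To make the difference concrete (out here is the channel log file from the question):

cat out       # prints whatever is in out right now, then exits
tail -f out   # prints the end of out and keeps printing new lines as they are appended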

If I background the process, it actually displays the output in this console, but that doesn't seem to be the right way.

If you do not redirect the output, the output goes to the console. That is the way it is designed 🙂

Logically, I think I need to get the fd of the console (but how do I do that?) and then force the tail output to that fd and probably background it.

As I understand it, that is the opposite direction. If you want to write to the stdin of a process, you can simply use a pipe for that. The (contrived) examples below show that cat writes to the pipe and the next command reads from the pipe. You can extend this to any other pipe read/write scenario. See the link given below.

Example:

cat main.cpp | cat /dev/stdin   # the second cat reads the pipe via /dev/stdin and prints it
cat main.cpp | tail -f          # tail prints the end of the input and keeps waiting for more data from the pipe

The last one will not exit, because it waits for the pipe to receive more content, which never happens.

Is it actually fine to do this, or am I creating a lot of process overhead for a simple task? In other words, piping a lot of stuff is nice, but doesn't it create a lot of overhead that would ideally be handled in a single process if you are going to repeat the task a lot?

I have no idea how time-critical your job is, but I believe that the overhead is quite low. Doing the same thing in a self-written program will not necessarily be faster. If everything is done in a single process and no access to the file system is required, it will be much faster. But if you also make system calls, e.g. for file system access, it will not be much faster, I believe. You always have to pay for the work you get.

For IO redirection please read: http://www.tldp.org/LDP/abs/html/io-redirection.html

If your scenario is more complex, you can consider named pipes instead of simple IO redirection. For that, you can have a look at: http://www.linuxjournal.com/content/using-named-pipes-fifos-bash
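A minimal named-pipe sketch (the name /tmp/chatpipe is just an example):

mkfifo /tmp/chatpipe            # create the named pipe (FIFO)
cat /tmp/chatpipe &             # reader: blocks until a writer opens the FIFO, then prints what it reads
echo "hello" > /tmp/chatpipe    # writer: the line shows up on the reader's console
wait                            # reap the backgrounded reader
rm /tmp/chatpipe                # remove the FIFO again

As far as I know, the in file created by ii is such a FIFO, so writing lines into it follows exactly this pattern.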
