
Can an already opened FILE handle reflect changes to the underlying file without re-opening it?

Assuming a plain text file, foo.txt, and two processes:

  • Process A, a shell script, overwrites the file at regular intervals
    $ echo "example" > foo.txt
  • Process B, a C program, reads from the file at regular intervals
    fopen("foo.txt", "r"); getline(&buf, &len, fp); fclose(fp);

In the C program, keeping the FILE* fp open after the initial fopen(), doing a rewind() and reading again does not seem to reflect the changes that have happened to the file in the meantime. Is the only way to see the updated contents by doing an fclose() and fopen() cycle, or is there a way to re-use the already opened FILE handle, yet reading the most recently written data?

For context, I’m simply trying to find the most efficient way of doing this.


Answer

On Unix/Linux, when you create a file with a name which already existed, the old file is not deleted or altered in any way. A new file is created and the directory is updated to point at the new file instead of the old one.

The old file will continue to exist as long as some directory entry points at it (Unix file systems allow the same file to be pointed to by multiple directories) or some program has an open file handle to the file, which is more relevant to your question.

As long as you don’t close fp, it continues to refer to the original file, even if that file is no longer referenced by any directory. When you close fp, the last reference to the old file disappears and the kernel reclaims its storage automatically; the next time you open foo.txt, you’ll get a file descriptor for whatever file happens to have that name at that point in time.

In short, with the shell script you indicate, your C program must close and reopen the file in order to see the new contents.
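A minimal sketch of that close/reopen cycle in C (the file name foo.txt comes from the question; error handling is abbreviated). Because the stream is reopened on every call, each read resolves the name afresh and therefore sees whichever file the shell script most recently put in place:

```c
#define _POSIX_C_SOURCE 200809L
#include <stdio.h>
#include <stdlib.h>

/* Read the first line of whatever file currently has this name.
 * Reopening on each call guarantees we see the newest file, even
 * if the writer replaced it since the previous read.
 * Returns a malloc'd line (caller frees) or NULL on failure. */
static char *read_first_line(const char *path) {
    FILE *fp = fopen(path, "r");
    if (fp == NULL)
        return NULL;

    char *buf = NULL;
    size_t len = 0;
    if (getline(&buf, &len, fp) == -1) {
        free(buf);
        buf = NULL;
    }
    fclose(fp);
    return buf;
}
```

Process B would simply call read_first_line("foo.txt") on each polling interval instead of trying to rewind a stale stream.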

Theoretically, it would be possible for the shell script to overwrite the same file without deleting it, but (a) that’s tricky to get right; (b) it’s prone to race conditions; and (c) closing and reopening the file is not that time-consuming. But if you did that, you would see the changes. [Note 1]

In particular, it’s common (and easy) to append to an existing file, and if your shell script did that instead, you could keep the file descriptor open and see the changes. However, in that case you would normally have already read to the end of the file before the new data was appended, and the EOF indicator on a stdio stream is sticky: once feof() becomes true, subsequent reads keep reporting EOF even after more data arrives. If you expect some process to append more data to the file, reset the EOF indicator with clearerr(fp) or fseek(fp, 0, SEEK_CUR); before retrying the read.
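That append-following pattern can be sketched as follows (a hypothetical helper, not code from the question; the no-op fseek doubles as the EOF-flag reset):

```c
#define _POSIX_C_SOURCE 200809L
#include <stdio.h>
#include <sys/types.h>

/* Try to read one more line from an already-open stream whose
 * underlying file may have grown since we last hit EOF.
 * Returns the number of characters read, or -1 if there is
 * still no new data. */
static ssize_t read_next_line(FILE *fp, char **buf, size_t *len) {
    /* Clear the sticky EOF indicator left by the previous read;
     * a zero-offset seek does this without moving the position. */
    fseek(fp, 0, SEEK_CUR);
    return getline(buf, len, fp);
}
```

This only works while the writer appends to the same file; the moment it recreates foo.txt with `>`, the open stream is pointing at the old file again and a close/reopen is needed.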

Notes

  1. As @amadan points out in a comment, there are race conditions with echo text > foo.txt as well, although the window is a bit shorter. But you can definitely avoid race conditions by using the idiom echo text > temporary_file; mv -f temporary_file foo.txt, because the rename operation is atomic. Of course, that would definitely require you to close and reopen the file. But it’s a good idea, particularly if the contents being written are long or critical, or if new files are created frequently.
User contributions licensed under: CC BY-SA