I imagine reading byte by byte would be very inefficient, but reading in bulk would almost always read more than needed, requiring the leftover input to be stored in some shared context for all subsequent read operations to find. What am I missing?
Answer
The prototype is:
ssize_t getline(char **lineptr, size_t *n, FILE *stream);
So it’s clearly operating on a FILE stream, which is already buffered. Reading character by character is therefore not inefficient at all: each read typically just copies a byte out of the stdio buffer, and the library refills that buffer in bulk only when it runs empty. The leftover bytes you were worried about live inside the FILE object itself, so no extra global context is needed.