I have written simple server/client programs, in which the client sends some hardcoded data in small chunks to the server program, which is waiting for the data so that it can print it to the terminal. In the client, I’m calling send() in a loop while there is more data to send, and on the server, I’m doing the same with read(), that is, while the number of bytes returned is > 0, I continue to read.
This example works perfectly if I specifically call close() on the client’s socket after I’ve finished sending, but if I don’t, the server won’t actually exit the read() loop until I close the client and break the connection. On the server side, I’m using:
while((bytesRead = read(socket, buffer, BUFFER_SIZE)) > 0)
Shouldn’t bytesRead be 0 when all the data has been received? And if so, why will it not exit this loop until I close the socket? In my final application, it will be beneficial to keep the socket open between requests, but all of the sample code and information I can find calls close() immediately after sending data, which is not what I want.
What am I missing?
Answer
When the other end of the socket is connected to some other network system halfway around the world, the only way the receiving socket knows that "all the data has been received" is when the sender closes its side of the connection. Closing the connection is what makes read() return 0, signaling end-of-stream to the receiver.
All that a socket knows about is that it’s connected to some other socket endpoint. That’s it. End of story. The socket has no special knowledge of the inner workings of the program that has the other side of the socket connection. Nor should it know. That happens to be the responsibility of the program that has the socket open, and not the socket itself.
If your program, on the receiving side, has knowledge — by virtue of knowing what data it is expected to receive — that it has now received everything that it needs to receive, then it can close its end of the socket and move on to the next task at hand.
You will have to incorporate into your program's logic a way to determine, in some form or fashion, that all the data has been transmitted. The exact nature of that is up to you to define. Perhaps, before sending the data on the socket, your sending program sends in advance, on the same socket, the number of bytes that will be in the data to follow. Your receiving program then reads the byte count first, followed by the data itself, and thus knows when it has received everything and can move on.
That’s one simplistic approach; the exact details are up to you. Alternatively, you can implement a timeout: set a timer, and if no data is received within some prescribed period of time, assume that there is no more.