I developed it on a 64-bit Mac, and I would like it to work under two scenarios:
64-bit server and 64-bit client
32-bit server and 64-bit client
(I am only dealing with *nix for now.)
In the communication between server A and client B, I exchange a linked list of structs of the following type:

typedef struct A {
    unsigned long field1;
    int field2;
    char field3;
    struct A *next;
} A_t;
The sizes of both char and int are consistent between 32-bit and 64-bit machines.
My concern is more about the pointer and the unsigned long.
My sending/receiving strategy is to send and receive an array of list_size * sizeof(A_t) bytes. However, since sizeof(A_t) has different values on 32-bit and 64-bit machines, the array would be misaligned on one side. I wonder what the most universal way is to fix problems of this kind.
Answer
You should communicate in some universal format that is independent of the machine (both word size and endianness). The easiest way is to send the fields as ASCII text, for example as CSV, in list order. The receiver can then rebuild the list from that text.
In principle you should write an IRS, an Interface Requirements Specification: the contract between the two parties. It should specify the order of records, the formats, and (for example) the minimum/maximum values of numeric fields. It is good practice to write this down, however minimal it is, even if the contract is with yourself. Add it as a comment in the relevant functions on both the server side and the client side; any programmer who has to deal with the interface in the future will then know its constraints.