I’d like to pass a Cuda context between two independent Linux processes (using POSIX message queues, which I already have set up).
Using cuCtxPopCurrent() and cuCtxPushCurrent(), I can get the context pointer, but that pointer is only meaningful within the address space of the process that calls the function, so passing it between processes is meaningless.
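For reference, here is roughly what I'm doing now with the driver API (a simplified sketch of my own, error checking omitted):

```c
/* Sketch only (error checking omitted): pop the context to get a CUcontext
   handle, but the handle is opaque and process-local, so sending it over the
   message queue gives the other process nothing usable. */
#include <cuda.h>
#include <stdio.h>

int main(void)
{
    CUdevice  dev;
    CUcontext ctx;

    cuInit(0);
    cuDeviceGet(&dev, 0);
    cuCtxCreate(&ctx, 0, dev);   /* context is bound to this process/thread */

    cuCtxPopCurrent(&ctx);       /* now I have a CUcontext value I could serialise... */

    /* ...but it only means something inside this process's address space. */
    printf("CUcontext handle in this process: %p\n", (void *)ctx);

    cuCtxPushCurrent(ctx);       /* push it back so this process can keep using it */

    cuCtxDestroy(ctx);
    return 0;
}
```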
I’m looking for other solutions. My ideas so far are:
- Try to deep copy the CUcontext struct, and then pass the copy.
- See if I can find a shared-memory solution where all my Cuda pointers are placed so both processes can access them.
- Merge the processes into one program.
- It is possible that there is better context sharing in Cuda 4.0, which I could switch to.
I’m not sure option (1) is possible, nor whether (2) is feasible. (3) isn’t really an option if I want to keep things generic (this is within a hijack shim). As for (4), I’ll look at Cuda 4.0, but I’m not sure it will work there, either.
Thanks!
Answer
In a word, no. Contexts are implicitly tied to the thread and application that created them. There is no portability between separate applications. This is pretty much the same with OpenGL and the various versions of Direct3D as well: sharing memory between applications isn't supported.
CUDA 4 makes the API thread-safe, so that a single host thread can hold more than one context (i.e. more than one GPU) simultaneously and use the canonical device selection API to choose which GPU it is working with. That won't help here, if I am understanding your question/application correctly.
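To make the distinction concrete, here is a rough sketch (my own illustration, error checking omitted) of what the CUDA 4 runtime lets a single host thread do. It is intra-process convenience only; none of these contexts or device pointers can be handed to another process:

```c
/* Sketch only (error checking omitted): with the CUDA 4 runtime, one host
   thread can drive several GPUs by switching the current device. */
#include <cuda_runtime.h>

int main(void)
{
    int ngpus = 0;
    cudaGetDeviceCount(&ngpus);

    for (int d = 0; d < ngpus; ++d) {
        float *buf = 0;

        cudaSetDevice(d);                     /* select GPU d for this thread */
        cudaMalloc((void **)&buf, 1 << 20);   /* allocation lives in GPU d's
                                                 context, owned by this process */
        cudaMemset(buf, 0, 1 << 20);
        cudaFree(buf);
    }

    /* Everything above stays within one process. */
    return 0;
}
```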