
Is it possible to share a Cuda context between applications?

I’d like to pass a CUDA context between two independent Linux processes (using POSIX message queues, which I already have set up).

Using cuCtxPopCurrent() and cuCtxPushCurrent(), I can get the context handle, but that handle only has meaning in the address space of the process that created it, so passing it between processes is meaningless.
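For reference, here is roughly the pattern I’m working with (a minimal driver-API sketch, error checking omitted; the point is that the CUcontext I get back is just an opaque, process-local handle):

    #include <cuda.h>
    #include <stdio.h>

    int main(void)
    {
        CUdevice  dev;
        CUcontext ctx;

        cuInit(0);
        cuDeviceGet(&dev, 0);
        cuCtxCreate(&ctx, 0, dev);       /* context is created in this process */

        /* Detach the context from the current thread. */
        CUcontext popped;
        cuCtxPopCurrent(&popped);

        /* 'popped' is only meaningful inside this process's address space,
           so sending its value over a message queue gives the receiving
           process nothing it can actually push. */
        printf("context handle: %p\n", (void *)popped);

        /* Re-attaching it in the same process works fine. */
        cuCtxPushCurrent(popped);
        cuCtxDestroy(ctx);
        return 0;
    }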

I’m looking for other solutions. My ideas so far are:

  1. Try to deep copy the CUcontext struct and then pass the copy.
  2. See if I can find a shared-memory solution where all my CUDA pointers are placed so both processes can access them.
  3. Merge the processes into one program.
  4. It is possible that there is better context sharing in CUDA 4.0, which I could switch to.

I’m not sure option (1) is possible, nor whether (2) is feasible. (3) isn’t really an option if I want to keep things generic (this is within a hijack shim). As for (4), I’ll look at CUDA 4.0, but I’m not sure it will help there, either.

Thanks!


Answer

In a word, no. Contexts are implicitly tied to the thread and application that created them. There is no portability between separate applications. This is pretty much the same with OpenGL and the various versions of Direct3D as well: sharing memory between applications isn’t supported.

CUDA 4 makes the API thread safe, so a single host thread can hold more than one context (i.e. more than one GPU) simultaneously and use the canonical device-selection API to choose which GPU it is working with. That won’t help here, if I am understanding your question/application correctly.
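For what it’s worth, that CUDA 4 behaviour looks something like this from a single host thread (a minimal runtime-API sketch, error checking omitted):

    #include <cuda_runtime.h>
    #include <stdio.h>

    int main(void)
    {
        int deviceCount = 0;
        cudaGetDeviceCount(&deviceCount);

        /* One host thread driving every GPU in turn; in CUDA 4 the runtime
           keeps a separate context per device behind the scenes. */
        for (int d = 0; d < deviceCount; ++d) {
            cudaSetDevice(d);

            void *buf = NULL;
            cudaMalloc(&buf, 1 << 20);   /* this allocation lives on device d */
            printf("device %d: buffer at %p\n", d, buf);
            cudaFree(buf);
        }
        return 0;
    }

But all of that still happens inside one process, so it does not give you cross-process context sharing.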

User contributions licensed under: CC BY-SA