
What is a good Inter-Process Communication method between C and PHP in Linux

I actually don’t know whether I am asking a proper question. Let me describe my problem first.

End user <-1-> web server (by PHP) <-2-> an internal process (by C or C++) <-3-> an external hardware

Link 1 would be something like an AJAX request. Link 2 would be some kind of inter-process communication. Link 3 would be UART (RS-232) communication.

The end user requests a change to some settings on the hardware, and the request propagates to the hardware. The hardware replies with success or failure, and the result propagates back to the user. The hardware’s reply can take up to about 1 second.

So when the web server receives the AJAX request from the end user, it will hold the request and send an IPC request to the C/C++ program. The C/C++ program will send the command over UART, then hold and wait for the hardware to reply. For the UART part there is an asynchronous model, so the C/C++ program won’t need to busy-wait on the UART.
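For reference, link 3 on the C side would be roughly like the sketch below; the device path and baud rate are just placeholders for my setup:

    /* Open the RS-232 port in raw mode with a ~1-second read timeout,
     * so the C/C++ program doesn't have to busy-wait for the reply.
     * "/dev/ttyS0" and B115200 below are placeholders. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <termios.h>
    #include <unistd.h>

    int open_uart(const char *dev)
    {
        int fd = open(dev, O_RDWR | O_NOCTTY);
        if (fd == -1) { perror("open uart"); return -1; }

        struct termios tio;
        if (tcgetattr(fd, &tio) == -1) { perror("tcgetattr"); close(fd); return -1; }

        cfmakeraw(&tio);                /* 8 data bits, no parity, raw I/O */
        cfsetispeed(&tio, B115200);
        cfsetospeed(&tio, B115200);
        tio.c_cc[VMIN]  = 0;            /* read() returns as soon as data arrives ... */
        tio.c_cc[VTIME] = 10;           /* ... or after 1 second (10 x 0.1 s)         */

        if (tcsetattr(fd, TCSANOW, &tio) == -1) { perror("tcsetattr"); close(fd); return -1; }
        return fd;                      /* e.g. open_uart("/dev/ttyS0") */
    }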

The web server will wait until the C/C++ program returns a result (via IPC again), and then forward that result back to the end user.

Since the web server keeps no state between requests, there can’t be anything asynchronous on its side (as far as I understand).

I can think of a simple way using a file or a database: the web server keeps polling the file or database for the reply.

But I don’t think this is a good approach, because the polling wastes server CPU cycles.

As for how much delay I can tolerate: it depends, but I think several seconds of delay on the user side is acceptable to them.

Can you suggest some good IPC mechanisms for this purpose?

And if you think there’s a better solution than the one described above, for the whole process or for any specific link (1, 2 or 3), please also share your two cents.

Hope I asked my question clearly.

Thanks.


Answer

Possibly the simplest solution you can find is to use pipes. The process would keep a pipe open for reading “calls” and answer them in the same fashion over another pipe.

One possible way of setting this up is to have a pair of named pipes (created with mkfifo) at a fixed or configurable location, known to both this process and PHP. The process would block in a loop reading requests/commands in some simple textual “protocol” and write the answers back to PHP through the other pipe. This way, both PHP and the external process can be stopped/killed and restarted, and the communication path stays stable.

If you need to verify that the process is actually running, a simple “ping” command over this “protocol” would be enough.
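As a rough sketch, the C side of such a pipe “server” could look like the following. The FIFO paths (/tmp/hw_req, /tmp/hw_rep) and the commands (PING, SET) are placeholders for whatever protocol you define, and the actual UART handling is left out:

    /*
     * Minimal sketch, assuming two FIFOs: /tmp/hw_req (PHP -> C) and
     * /tmp/hw_rep (C -> PHP), and a line-oriented text protocol.
     * Error handling is kept to a minimum.
     */
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/stat.h>
    #include <sys/types.h>

    #define REQ_FIFO "/tmp/hw_req"   /* placeholder path: PHP writes requests here */
    #define REP_FIFO "/tmp/hw_rep"   /* placeholder path: PHP reads replies here   */

    int main(void)
    {
        char line[256];

        /* Create the FIFOs if they don't exist yet. */
        if (mkfifo(REQ_FIFO, 0666) == -1 && errno != EEXIST) { perror("mkfifo req"); return 1; }
        if (mkfifo(REP_FIFO, 0666) == -1 && errno != EEXIST) { perror("mkfifo rep"); return 1; }

        for (;;) {
            /* Opening the request FIFO blocks until a writer (PHP) shows up. */
            FILE *req = fopen(REQ_FIFO, "r");
            if (!req) { perror("fopen req"); return 1; }

            while (fgets(line, sizeof line, req)) {
                line[strcspn(line, "\r\n")] = '\0';

                /* Open the reply FIFO; this blocks until PHP starts reading. */
                FILE *rep = fopen(REP_FIFO, "w");
                if (!rep) { perror("fopen rep"); break; }

                if (strcmp(line, "PING") == 0) {
                    fprintf(rep, "PONG\n");            /* liveness check */
                } else if (strncmp(line, "SET ", 4) == 0) {
                    /* Here you would send the command over the UART, wait
                     * (up to ~1 s) for the hardware's answer, and report it. */
                    fprintf(rep, "OK\n");              /* or "ERROR <reason>" */
                } else {
                    fprintf(rep, "ERROR unknown command\n");
                }
                fclose(rep);                           /* flush the answer to PHP */
            }
            fclose(req);                               /* last writer closed; reopen */
        }
    }

On the PHP side, the script would write one request line to /tmp/hw_req, then block reading a single reply line from /tmp/hw_rep; that maps directly onto “hold the AJAX request until the result comes back”.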

This assumes:

  • you can make changes to the process that communicates with the hardware (otherwise, you are bound to whatever it already offers)
  • you don’t have high performance requirements (pipes are relatively slow)
  • there’s no parallelism problem with concurrent accesses from the PHP scripts to the process (unless you add some locking, two concurrent requests could be written interleaved into the pipe, and their replies mixed up on the way back)

There are of course other ways of achieving this, but I find it hard to think of one as simple as this. Queueing systems (D-Bus and others), as suggested in some comments, just build on top of this idea while adding more complexity IMHO; therefore, if you don’t need the extra functionality those other services provide, pipes should be enough.

User contributions licensed under: CC BY-SA