How can I do congestion control for a UDP protocol?

I have a custom UDP protocol with multiple senders/receivers designed to send large files around as fast as possible. It is client/server based.

How can I detect congestion on the LAN to slow the rate of UDP packets being sent?

EDIT: Please, no comments on whether UDP is a suitable choice or not. This protocol uses UDP, but it reassembles the packets into whole files when they arrive.

To rephrase the question: How do congestion control algorithms work and how is congestion detected?

Answer

This is assuming you have to use UDP (TCP would be preferred).

From within the application, the only indication of network congestion is the loss of IP packets. Depending on how your protocol is structured, you may want to number each outgoing datagram; if a receiver sees that some are missing (or arriving out of order), it can send one or more messages back to the sender indicating that packets were lost and that it should slow down.
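To make that concrete, here is a minimal sketch (not your protocol, just an illustration in Python): datagrams carry a sequence number, the receiver reports gaps back over the same socket, and the sender treats each loss report as a congestion signal and adjusts its rate AIMD-style (additive increase, multiplicative decrease). The header layout, CHUNK_SIZE, and the rate constants are all assumptions.

```python
import socket
import struct
import time

CHUNK_SIZE = 1024             # payload bytes per datagram (assumed)
HEADER = struct.Struct("!I")  # 4-byte big-endian sequence number (assumed layout)

def send_file(data: bytes, dest) -> None:
    """Send data in numbered datagrams, slowing down whenever a loss report arrives."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setblocking(False)
    rate = 500_000.0          # current send rate in bytes/sec (assumed starting point)
    seq = 0
    for off in range(0, len(data), CHUNK_SIZE):
        sock.sendto(HEADER.pack(seq) + data[off:off + CHUNK_SIZE], dest)
        seq += 1
        try:
            # Non-blocking check for feedback from the receiver.
            feedback, _ = sock.recvfrom(64)
            if feedback.startswith(b"NACK"):
                rate = max(rate / 2, 10_000.0)   # multiplicative decrease on loss
        except BlockingIOError:
            rate += 1_000.0                      # additive increase while all is well
        time.sleep(CHUNK_SIZE / rate)            # pace packets to the current rate

def receive(bind) -> None:
    """Collect datagrams by sequence number; send a NACK whenever a gap is seen."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(bind)
    expected = 0
    chunks = {}
    while True:
        packet, sender = sock.recvfrom(HEADER.size + CHUNK_SIZE)
        (seq,) = HEADER.unpack_from(packet)
        chunks[seq] = packet[HEADER.size:]
        if seq > expected:                       # gap => packets were lost on the way
            sock.sendto(b"NACK" + HEADER.pack(expected), sender)
        expected = max(expected, seq + 1)
```

A real implementation would also retransmit the data named in each NACK and handle timeouts; the point here is only where the congestion signal comes from (sequence gaps at the receiver) and how the sender reacts to it.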

There is a protocol called RTP (Real-time Transport Protocol) that is used in real-time streaming applications.

RTP runs over UDP, and its companion protocol RTCP (Real-time Transport Control Protocol) reports QoS (Quality of Service) measures such as packet loss, delay, and jitter back to the sender, so the sender knows when to slow down or switch codecs.

I'm not saying you can use RTP directly, but it may be helpful to look at to see how it works.
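RTCP receiver reports are essentially a small set of statistics computed at the receiver and fed back periodically. A rough sketch of the two most useful ones for rate control, loss fraction and interarrival jitter (the RFC 3550 running estimate J += (|D| - J)/16), might look like the following; the class name and the use of plain seconds are my own simplifications, not part of RTP itself:

```python
class ReceptionStats:
    """Per-sender statistics of the kind an RTCP receiver report carries (sketch)."""

    def __init__(self) -> None:
        self.received = 0
        self.highest_seq = -1
        self.jitter = 0.0           # smoothed interarrival jitter, in seconds
        self._last_transit = None   # previous (arrival_time - send_time) value

    def on_packet(self, seq: int, send_time: float, arrival_time: float) -> None:
        self.received += 1
        self.highest_seq = max(self.highest_seq, seq)
        transit = arrival_time - send_time
        if self._last_transit is not None:
            d = abs(transit - self._last_transit)
            # RFC 3550 running jitter estimate: J += (|D| - J) / 16
            self.jitter += (d - self.jitter) / 16.0
        self._last_transit = transit

    def loss_fraction(self) -> float:
        """Fraction of expected packets that never arrived."""
        expected = self.highest_seq + 1
        if expected <= 0:
            return 0.0
        return max(0.0, (expected - self.received) / expected)
```

A sender that receives these numbers periodically can back off when the loss fraction rises or when jitter keeps growing (a sign of queues building up along the path), which is essentially what RTP applications do when they drop to a lower bitrate or codec.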
