The paced sender, often referred to as just the “pacer”, is a part of the WebRTC RTP stack used primarily to smooth the flow of packets sent onto the network.
Consider a video stream at 5Mbps and 30fps. In an ideal world this would result in each frame being ~21kB large and packetized into 18 RTP packets. While the average bitrate over, say, a one second sliding window would be a correct 5Mbps, on a shorter time scale it can be seen as a burst of 167Mbps every 33ms, each followed by a 32ms silent period. Furthermore, it is quite common for video encoders to overshoot the target frame size in case of sudden movement, especially when screensharing; frames 10x or even 100x larger than the ideal size are an all too real scenario. These packet bursts can cause several issues, such as network congestion, bufferbloat, and even packet loss. Most sessions also have more than one media stream, e.g. a video and an audio track. If you put a frame on the wire in one go, and those packets take 100ms to reach the other side, you have now blocked any audio packets from reaching the remote end in time as well.
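To make the arithmetic concrete, the figures above can be reproduced with a quick back-of-the-envelope calculation. The ~1200-byte RTP payload size used here is an assumption about a typical packetization limit, not a number taken from the pacer itself:

```
#include <cmath>
#include <cstdio>

int main() {
  // Back-of-the-envelope numbers for a 5 Mbps, 30 fps video stream.
  const double bitrate_bps = 5e6;
  const double fps = 30.0;
  const double frame_bytes = bitrate_bps / 8.0 / fps;  // ~20.8 kB per frame.

  // Assume ~1200 bytes of payload per RTP packet (a common packetization limit).
  const double packets_per_frame = std::ceil(frame_bytes / 1200.0);  // 18 packets.

  // If the whole frame hits the wire in ~1 ms, the instantaneous rate during
  // that millisecond dwarfs the 5 Mbps average.
  const double burst_bps = frame_bytes * 8.0 / 0.001;  // ~167 Mbps.

  std::printf("frame: %.1f kB, packets: %.0f, burst: %.0f Mbps\n",
              frame_bytes / 1000.0, packets_per_frame, burst_bps / 1e6);
}
```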
The paced sender solves this by having a buffer in which media packets are queued, and then using a leaky bucket algorithm to pace them onto the network. The buffer contains separate FIFO queues for each media track so that e.g. audio can be prioritized over video, and streams of equal priority can be sent in a round-robin fashion to avoid any one stream blocking others.
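As a rough illustration of that idea (not the actual PacingController or RoundRobinPacketQueue code), a minimal leaky-bucket pacer might keep one FIFO per SSRC, replenish a byte budget at the pacing rate, and drain the queues in round-robin order while budget remains:

```
#include <cstdint>
#include <deque>
#include <map>
#include <vector>

// Toy packet: just an SSRC and a size. The real pacer queues RtpPacketToSend.
struct ToyPacket {
  uint32_t ssrc;
  int64_t size_bytes;
};

class ToyPacer {
 public:
  explicit ToyPacer(int64_t pacing_bps) : pacing_bps_(pacing_bps) {}

  // Packets are kept in one FIFO per SSRC so streams can be drained fairly.
  void Enqueue(ToyPacket packet) { queues_[packet.ssrc].push_back(packet); }

  // Called periodically; `elapsed_ms` is the time since the previous call.
  // Replenishes the send budget at the pacing rate and returns the packets
  // that fit, visiting the per-stream queues in round-robin order.
  std::vector<ToyPacket> Process(int64_t elapsed_ms) {
    budget_bytes_ += pacing_bps_ * elapsed_ms / (8 * 1000);
    std::vector<ToyPacket> to_send;
    bool sent_any = true;
    while (sent_any) {
      sent_any = false;
      for (auto& [ssrc, queue] : queues_) {
        if (queue.empty() || queue.front().size_bytes > budget_bytes_)
          continue;
        budget_bytes_ -= queue.front().size_bytes;
        to_send.push_back(queue.front());
        queue.pop_front();
        sent_any = true;
      }
    }
    return to_send;
  }

 private:
  const int64_t pacing_bps_;
  int64_t budget_bytes_ = 0;  // A real leaky bucket would also cap this.
  std::map<uint32_t, std::deque<ToyPacket>> queues_;
};
```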
Since the pacer is in control of the bitrate sent on the wire, it is also used to generate padding in cases where a minimum send rate is required - and to generate packet trains if bitrate probing is used.
The typical path for media packets when using the paced sender looks something like this:
1. RTPSenderVideo or RTPSenderAudio packetizes media into RTP packets.
2. The packets are handed to the pacer via RtpPacketSender::EnqueuePackets() and placed in its queue.
3. When the pacer determines it is time to send, the packets are emitted via the PacingController::PacketSender() callback method, normally implemented by the PacketRouter class.
4. The RTPSenderEgress class makes final time stamping, potentially records it for retransmissions etc.
5. The packet is handed to the low-level Transport interface, after which it is out of scope.

Asynchronously to this, the estimated available send bandwidth is determined - and the target send rate is set on the RtpPacketPacer via the void SetPacingRates(DataRate pacing_rate, DataRate padding_rate) method.
The pacer prioritizes packets based on two criteria:

* Packet type, with audio prioritized over retransmissions, which in turn are prioritized over video and FEC, with padding last.
* Enqueue order.
The enqueue order is enforced on a per stream (SSRC) basis. Given equal priority, the RoundRobinPacketQueue alternates between media streams to ensure no stream needlessly blocks others.
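A toy sketch of the ordering rule, assuming a simple (priority, enqueue order) key; the real RoundRobinPacketQueue additionally keeps per-stream queues and rotates between streams of equal priority:

```
#include <cstdint>
#include <queue>
#include <vector>

// Illustrative ordering only; lower priority numbers are drained first.
struct QueuedPacket {
  int priority;           // e.g. audio < retransmission < video/FEC < padding
  int64_t enqueue_order;  // Monotonically increasing counter per enqueue.
};

struct DrainFirst {
  bool operator()(const QueuedPacket& a, const QueuedPacket& b) const {
    // std::priority_queue pops the "largest" element, so return true when
    // `a` should be drained *after* `b`.
    if (a.priority != b.priority) return a.priority > b.priority;
    return a.enqueue_order > b.enqueue_order;
  }
};

using ToyQueue =
    std::priority_queue<QueuedPacket, std::vector<QueuedPacket>, DrainFirst>;
```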
There are currently two implementations of the paced sender (although they share a large amount of logic via the PacingController class). The legacy PacedSender uses a dedicated thread to poll the pacing controller at 5ms intervals and a lock to protect internal state. The newer TaskQueuePacedSender, as the name implies, uses a TaskQueue both to protect state and to schedule packet processing; the latter is dynamic, based on actual send rates and constraints. Avoid using the legacy PacedSender in new applications, as we are planning to remove it.
An adjacent component called PacketRouter is used to route packets coming out of the pacer and into the correct RTP module. It has the following functions:
* The SendPacket method looks up an RTP module with an SSRC corresponding to the packet for further routing to the network.

At present the FEC is generated on a per-SSRC basis, so it is always returned from an RTP module after sending media. Hopefully one day we will support covering multiple streams with a single FlexFEC stream - and the packet router is the likely place for that FEC generator to live. It may even be used for FEC padding as an alternative to RTX.
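The routing step itself can be pictured with a deliberately simplified sketch; ToyPacketRouter and FakeRtpModule are invented names for illustration only, not the real PacketRouter API:

```
#include <cstdint>
#include <map>

// Hypothetical stand-in for an RTP module owning one or more send streams.
class FakeRtpModule {
 public:
  void SendPacket(/*packet*/) { /* hand off to RTPSenderEgress etc. */ }
};

// Sketch of the routing idea: find the module that owns the packet's SSRC.
class ToyPacketRouter {
 public:
  void AddSendModule(uint32_t ssrc, FakeRtpModule* module) {
    modules_[ssrc] = module;
  }
  void SendPacket(uint32_t packet_ssrc) {
    auto it = modules_.find(packet_ssrc);
    if (it != modules_.end()) it->second->SendPacket();
    // Packets with an unknown SSRC are dropped.
  }

 private:
  std::map<uint32_t, FakeRtpModule*> modules_;
};
```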
This section outlines the classes and methods relevant to a few different use cases of the pacer.
For sending packets, use RtpPacketSender::EnqueuePackets(std::vector<std::unique_ptr<RtpPacketToSend>> packets). The pacer takes a PacingController::PacketSender as a constructor argument; this callback is used when it is time to actually send packets.
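A minimal usage sketch, assuming the include paths below and that the pacer is reachable through the RtpPacketSender interface:

```
#include <memory>
#include <utility>
#include <vector>

// Include paths are assumptions about the current source layout.
#include "modules/pacing/rtp_packet_sender.h"
#include "modules/rtp_rtcp/source/rtp_packet_to_send.h"

// Hand a freshly packetized frame to the pacer. Nothing is sent synchronously
// here; the pacer queues the packets and later emits them through its
// PacingController::PacketSender callback (normally the PacketRouter).
void EnqueueFrame(webrtc::RtpPacketSender& pacer,
                  std::vector<std::unique_ptr<webrtc::RtpPacketToSend>> packets) {
  pacer.EnqueuePackets(std::move(packets));
}
```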
To control the send rate, use void SetPacingRates(DataRate pacing_rate, DataRate padding_rate). If the packet queue becomes empty and the send rate drops below padding_rate, the pacer will request padding packets from the PacketRouter.
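A short sketch of setting the rates; the include paths and the DataRate::KilobitsPerSec factory reflect my understanding of the api/units helpers and should be adjusted to your tree:

```
#include "api/units/data_rate.h"              // Assumed include path.
#include "modules/pacing/rtp_packet_pacer.h"  // Assumed include path.

// Target roughly 2 Mbps on the wire, and keep sending at least ~100 kbps of
// padding when the media queue runs dry (e.g. to keep the BWE ramped up).
void ConfigureRates(webrtc::RtpPacketPacer& pacer) {
  pacer.SetPacingRates(/*pacing_rate=*/webrtc::DataRate::KilobitsPerSec(2000),
                       /*padding_rate=*/webrtc::DataRate::KilobitsPerSec(100));
}
```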
In order to completely suspend/resume sending data (e.g. due to network availability), use the Pause() and Resume() methods.
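For example (again assuming the pacer is used via the RtpPacketPacer interface):

```
#include "modules/pacing/rtp_packet_pacer.h"  // Assumed include path.

// Stop all sending while the network is unavailable and resume afterwards;
// packets stay queued while paused, and the paused time is excluded from
// queue-time accounting.
void OnNetworkChange(webrtc::RtpPacketPacer& pacer, bool network_available) {
  if (network_available) {
    pacer.Resume();
  } else {
    pacer.Pause();
  }
}
```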
The specified pacing rate may be overridden in some cases, e.g. due to extreme encoder overshoot. Use void SetQueueTimeLimit(TimeDelta limit) to specify the longest time you want packets to spend waiting in the pacer queue (pausing excluded). The actual send rate may then be increased past the pacing_rate in order to keep the average queue time below the requested limit. The rationale is that if the send queue is, say, longer than three seconds, it is better to risk packet loss and recover using a key frame than to cause severe delays.
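A sketch, assuming the include paths and the TimeDelta::Seconds factory from api/units:

```
#include "api/units/time_delta.h"             // Assumed include path.
#include "modules/pacing/rtp_packet_pacer.h"  // Assumed include path.

// Ask the pacer to keep the average queue time below ~2 seconds; it may then
// send faster than the configured pacing rate to drain an oversized queue.
void LimitQueueDelay(webrtc::RtpPacketPacer& pacer) {
  pacer.SetQueueTimeLimit(webrtc::TimeDelta::Seconds(2));
}
```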
If the bandwidth estimator supports bandwidth probing, it may request a cluster of packets to be sent at a specified rate in order to gauge whether this causes increased delay/loss on the network. Use the void CreateProbeCluster(DataRate bitrate, int cluster_id) method - packets sent via this PacketRouter will be marked with the corresponding cluster_id in the attached PacedPacketInfo struct.
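A sketch of requesting a probe, using the signature given above (include paths are assumptions):

```
#include "api/units/data_rate.h"              // Assumed include path.
#include "modules/pacing/rtp_packet_pacer.h"  // Assumed include path.

// Request a short packet train at ~3 Mbps. Each packet in the train carries
// this cluster_id in its PacedPacketInfo so the feedback-based estimator can
// attribute the resulting delay/loss measurements to the probe.
void ProbeAt3Mbps(webrtc::RtpPacketPacer& pacer, int cluster_id) {
  pacer.CreateProbeCluster(webrtc::DataRate::KilobitsPerSec(3000), cluster_id);
}
```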
If congestion window pushback is used, the state can be updated using SetCongestionWindow() and UpdateOutstandingData().
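A sketch of keeping that state up to date; the text above only names the methods, so the DataSize parameter types and include paths here are assumptions:

```
#include <cstdint>

#include "api/units/data_size.h"              // Assumed include path.
#include "modules/pacing/rtp_packet_pacer.h"  // Assumed include path.

// Congestion window pushback: tell the pacer how much data may be in flight
// and how much of it is currently outstanding (unacknowledged).
void UpdateCongestionState(webrtc::RtpPacketPacer& pacer,
                           int64_t window_bytes,
                           int64_t in_flight_bytes) {
  pacer.SetCongestionWindow(webrtc::DataSize::Bytes(window_bytes));
  pacer.UpdateOutstandingData(webrtc::DataSize::Bytes(in_flight_bytes));
}
```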
A few more methods control how we pace (see the combined sketch after the statistics list below):

* SetAccountForAudioPackets() determines whether audio packets count towards the bandwidth consumed.
* SetIncludeOverhead() determines whether the entire RTP packet size counts towards the bandwidth used (otherwise just the media payload).
* SetTransportOverhead() sets an additional data size consumed per packet, representing e.g. UDP/IP headers.

Several methods are used to gather statistics about the pacer state:
* OldestPacketWaitTime() - time since the oldest packet in the queue was added.
* QueueSizeData() - total bytes currently in the queue.
* FirstSentPacketTime() - absolute time the first packet was sent.
* ExpectedQueueTime() - total bytes in the queue divided by the send rate.
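A combined sketch of the accounting knobs and a few of the statistics getters. The return types (TimeDelta/DataSize), the no-argument form of SetIncludeOverhead(), and the 48-byte overhead value (a rough UDP/IPv6 header estimate) are assumptions for illustration:

```
#include <iostream>

#include "api/units/data_size.h"              // Assumed include paths.
#include "modules/pacing/rtp_packet_pacer.h"

// Example policy: count audio and full packet overhead against the budget,
// add 48 bytes per packet for transport headers, then log some pacer stats.
void ConfigureAccountingAndLogStats(webrtc::RtpPacketPacer& pacer) {
  pacer.SetAccountForAudioPackets(true);
  pacer.SetIncludeOverhead();
  pacer.SetTransportOverhead(webrtc::DataSize::Bytes(48));

  std::cout << "oldest packet queued for " << pacer.OldestPacketWaitTime().ms()
            << " ms, " << pacer.QueueSizeData().bytes() << " bytes queued, "
            << "expected queue time " << pacer.ExpectedQueueTime().ms()
            << " ms\n";
}
```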