Forcefully chunking the TCP sends to a fixed size and delaying the next send. This keeps outgoing packets smaller, so there is less queuing and lower latency, but it requires the ingest to ack those smaller segments quickly enough that the next ones can go out on time. If your connection is high latency or at all unstable, low latency mode can trigger frame drops from data getting too backed up.
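The chunk-and-pace idea can be sketched roughly like this (a minimal illustration, not the actual implementation; the segment size and uplink rate are assumed example values):

```python
import socket
import time

def paced_send(sock, data, segment_size=1460, uplink_bps=3_000_000):
    """Send `data` in fixed-size segments, sleeping between sends so
    roughly one segment at a time sits in the outgoing queue.
    segment_size and uplink_bps are illustrative, not real defaults."""
    # Time one segment takes to drain at the assumed uplink rate.
    delay = segment_size * 8 / uplink_bps
    for offset in range(0, len(data), segment_size):
        sock.sendall(data[offset:offset + segment_size])
        # Wait for the segment to (approximately) drain before queuing the next;
        # if acks come back late, the whole schedule slips and data backs up.
        time.sleep(delay)
```

This is also why an unstable connection hurts here: every late ack pushes the remaining segments later, and the backlog eventually forces frame drops.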
Google's calculator lets you easily estimate the maximum queuing delay at various speeds, e.g. https://www.google.com/search?q=120+kil ... bits%2Fsec (120 kB being an example of a keyframe spike)
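The math behind that calculator query is just the burst size divided by the uplink rate; a quick sketch (the 3000 kbps uplink is an assumed example figure):

```python
def queuing_delay_ms(spike_kilobytes, uplink_kbps):
    """Worst-case time a burst sits in the send queue:
    burst size converted to kilobits, divided by the uplink rate."""
    return spike_kilobytes * 8 * 1000 / uplink_kbps

# A 120 kB keyframe spike on an assumed 3000 kbps uplink
# queues for up to 120 * 8 / 3000 = 0.32 s:
print(queuing_delay_ms(120, 3000))  # 320.0 (ms)
```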
Some packet graphs from testing:
http://i.imgur.com/XEkANkV.png (normal left / low latency right)
http://i.imgur.com/4paH5xQ.png (experimenting with segment sizes to minimize spikes and get lowest latency)