DistroAV - Network Audio/Video in OBS-Studio using NDI® technology 6.1.1

BoostOrDie

New Member
NDI uses a LOT of GPU. Each source you send takes about 10-15% of a GTX 1080; rendering 3 simultaneous streams uses about 30-40%.

Is there a way to decrease the load NDI uses? Maybe lower the bitrate? I can't see any settings for NDI
 

AaronD

Active Member
Is there a way to decrease the load NDI uses? Maybe lower the bitrate? I can't see any settings for NDI
I believe that NDI is completely uncompressed. Not even lossless. (*) So it takes a TON of bandwidth on the network, but the dedicated professional installations that it was designed for are assumed to have that in mind and so it's okay. Thus, no settings. That comes from the standard itself, not a particular implementation of it.
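To put "a TON of bandwidth" in rough numbers, here's a back-of-the-envelope sketch (my own illustration, assuming a 4:2:2 UYVY pixel format at 2 bytes per pixel, which is a common layout for uncompressed video):

```python
# Back-of-the-envelope bandwidth for truly uncompressed 1080p60 video.
# Assumption: 4:2:2 UYVY at 2 bytes per pixel on average.
width, height, fps = 1920, 1080, 60
bytes_per_pixel = 2

bits_per_second = width * height * bytes_per_pixel * fps * 8
print(f"{bits_per_second / 1e9:.2f} Gbps")  # ~1.99 Gbps per stream
```

Nearly 2 Gbps per stream, which is why uncompressed video is only practical on networks provisioned for it.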

That then raises the question of why it uses so much GPU, if it's not actually compressing. Well, there's still some translation to do between the frame buffer and a series of network packets and back again, and that lends itself much better to a parallel process than a serial one. That might be what the GPU is doing. It's not H.264, so it can't use the dedicated chunk of silicon that's made specifically for that (and is invisible to the usage meter), but it's not a 1:1 copy either. So the general-purpose parallel processor gets to do it, and that does appear on the meter.

(*) Even if the *visible* data is uncompressed, I've noticed that a mostly transparent picture does use less bandwidth than a fully-opaque one, according to the network traffic meter that I have on my taskbar. So it appears that fully-transparent pixels are not transmitted, or maybe there *is* some lossless compression going on in general, which the GPU would also be responsible for.

But the end result still remains: no loss of quality *at all*, and the protocol "just works" so that those who use it can think about other things. Fewer settings make it more likely to "just work", all the way down to no settings at all, which leaves only one configuration to verify before release.

---

Professional tools like NDI are different from consumer tools. In the pro world, you don't care about the amount of data you're throwing around locally, or how much processing power it takes to handle it. You use what is guaranteed to work, and you buy what is needed to support it.

(You're also not concerned with constant high loads. If the usage meter reads 95% +/-3%, that's good! You can run a critical show on that. You only care that it's *always* less than 100%, which it is: max 98% < 100%. It's the *spikes* in load that make people nervous and want a lower average. If you don't have spikes, you can run a higher average and be perfectly okay.)

Consumer tools, by contrast, are designed to be nice to the cheap hardware that someone already has.
 

Lawrence_SoCal

Active Member
@AaronD - just an FYI - there are various iterations of NDI that use differing compression and bandwidth.
Check out https://videoguys.com/blogs/news-and-sales/understanding-ndi-versions-full-ndi-vs-ndi-hx-hx2-and-hx3 and https://birddog.tv/fullndi-vs-ndihx3/ for more details on the H.264-based NDI|HX and the move to H.265 in HX2 and HX3, both of which support H.264 and H.265.

Now, what a given NDI camera supports can vary... and with Panasonic, the NDI output is controlled strictly by the camera settings.

As to GPU usage and bitrate... beware your assumptions. You can lower network bandwidth (bitrate) through higher compression, but that then takes computation to decompress (and isn't necessarily a lower video bitrate). For adequately spec'd hardware and network, why bother with compression? It just causes extra work on either end. A WAN link is a good reason to use compression. WiFi is rarely, if ever, appropriate for real-time video.
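To make that trade-off concrete, here are some hypothetical ballpark bitrates for three 1080p60 streams at different compression levels (my assumptions for illustration, not measured figures):

```python
# Illustrative (assumed, not measured) per-stream bitrates at 1080p60,
# showing the bandwidth-vs-CPU trade-off: heavier compression shrinks
# network bitrate but adds encode/decode work at both ends.
streams = 3
bitrate_mbps = {
    "uncompressed (UYVY)": 1990,       # raw pixel data, no codec work
    "full NDI (SpeedHQ, typical)": 130,  # light intra-frame compression
    "NDI|HX (H.264, typical)": 15,       # long-GOP, cheap on the wire
}
for name, mbps in bitrate_mbps.items():
    print(f"{name}: {streams} streams ~ {streams * mbps} Mbps total")
```

The exact numbers vary by scene and device, but the orders of magnitude show why HX fits a WAN link while full NDI assumes a gigabit-class LAN.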

One thing to check is your camera output. My older Panasonic NDI PTZ cameras default to 1080p60, and for my use case 60fps is a complete waste; dropping to 30fps means less processing. I had left them at 1080p60 just 'cuz' when we only had one camera, but as soon as I added extra cameras, it seemed prudent not to overload the system by wastefully processing extra frames for no purpose.
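The savings from dropping the frame rate are easy to sketch (simple pixel-throughput arithmetic, ignoring codec overhead):

```python
# Pixel throughput the receiver must handle per camera at 1080p,
# comparing 60fps vs 30fps: halving the frame rate halves the work.
width, height = 1920, 1080

for fps in (60, 30):
    megapixels_per_sec = width * height * fps / 1e6
    print(f"1080p{fps}: {megapixels_per_sec:.1f} MP/s")
# 1080p60: 124.4 MP/s
# 1080p30: 62.2 MP/s
```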