Beam

Beam v1.1.0

I did a bit of research into why my LACP links aren't running any faster than 1Gbps.

As it turns out, any one connection between two LACP systems is carried over a single interface in each bundle. Therefore a single session won't go faster than 1Gbps, the bandwidth of one link in the bundle. Basically, it doesn't work the same way multilink PPP does, if I'm understanding things correctly.
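Roughly speaking, the switch and the host each pick the member link per flow by hashing packet header fields, so every packet of a given session lands on the same physical link. Here's a tiny sketch of that idea (illustrative only, not my Cisco's actual hash algorithm; real gear hashes configurable combinations of MACs, IPs and ports):

```python
# Illustrative only: a stand-in for the per-flow hashing an LACP bundle does.
# Real switches/NICs hash vendor-specific field combinations, not Python's
# hash(), but the principle is the same.
def pick_member_link(src_ip, dst_ip, src_port, dst_port, num_links=4):
    flow = (src_ip, dst_ip, src_port, dst_port)
    return hash(flow) % num_links  # the same flow always maps to the same link

# One TCP session is one flow, so it is stuck on one 1Gbps member link:
print(pick_member_link("10.0.0.2", "10.0.0.10", 51000, 5201))
# A second session is a different flow and *may* hash onto another link,
# which is how the bundle's aggregate 4Gbps gets used.
print(pick_member_link("10.0.0.2", "10.0.0.11", 51002, 5201))
```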

What it does do is let additional sessions, up to the number of links in the bundle, each run at up to the speed of one of those links. Case in point: I ran iperf on two machines, each as a server, and on a third machine as a client. On the client I ran two separate sessions, one to each server. Each client session ran at nearly 1Gbps, because the client has an LACP bundle to my Cisco switch. The other two machines happen to have LACP bundles to that same switch, but each will only see 1Gbps at a time from any one client, up to the bundle's full 4Gbps.
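For reference, the test looked roughly like this (a sketch assuming iperf3, with hypothetical hostnames server-a/server-b that are each already running iperf3 -s):

```python
# Rough reproduction of the test: two iperf3 client sessions in parallel,
# one to each server. Assumes iperf3 is installed and both servers are
# already running "iperf3 -s"; the hostnames are placeholders.
import subprocess

clients = [
    subprocess.Popen(["iperf3", "-c", "server-a", "-t", "30"]),
    subprocess.Popen(["iperf3", "-c", "server-b", "-t", "30"]),
]
for c in clients:
    c.wait()
# Each session should report close to 1Gbps, provided the two flows hash
# onto different member links of the client's bundle.
```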

The moral of the story isn't that LACP will let a single session hit 4Gbps, because it won't. It WILL allow as many 1Gbps sessions to run as the bundle has available link bandwidth for. The concept is still useful, especially if you want to run more NDI/whatever streams than can comfortably fit on a single link, setting aside any system overhead for encoding/decoding for a moment.

The best way to move more than 1Gbps on my setup is to get adapters, media and infrastructure that can handle >1Gbps per link.

--Katt. =^.^=
 

YorVeX

Member
Wow, really bad luck. There are probably use cases where this way of distributing bandwidth would work, but not yours, because Beam uses only a single connection.

Most people don't know this, but NDI actually has automatic bandwidth distribution: if it detects more than one network interface on the system, it will distribute its network usage among them if necessary. But not even that would help you here, because I assume the LACP bundle appears as only one device in the system, so even NDI would use only one connection and therefore couldn't take advantage of this properly.

I mean, you could probably disable LACP and treat them as multiple interfaces, but NDI won't need more than 1 Gbps anyway (unless you want to transmit 8K 120 FPS or so) because it always has compression enabled.

Beam could of course be extended with a feature to actually use multiple network connections (a configurable number) and stitch the data together again on the receiving end. But seriously, I can't think of any feature right now that I have less desire to implement. It would be quite a lot of work, would increase code complexity now and over its whole future lifecycle, and all of that only for a really small number of people who could actually make use of it. And don't even get me started on troubleshooting when people have issues getting it to work.

So I'm afraid I can't be bothered to implement this, at least not right now when there are more important things on the to-do list.
 

A more complete explanation about this can be found here:


Basically, it does have its uses, but not in the ways you might think, and particularly not in the way multilink PPP works.

The last two pages of this IEEE 802.3 group slideshow pretty much explain why they won't do much more with LACP, for reasons a lot like your unwillingness to make Beam span multiple links:


Basically, if you want something faster, get something that's actually faster, like 10Gbps gear. As the last slide says, "LAG is good, but it's not as good as a fatter pipe".

All that said, I'm leaving my LACP setup in place so I can extend the capacity of those machines. GRANTED, it won't allow for >1Gbps sessions, but it will allow more of those 200Mbps ones to start and keep running.

--Katt. =^.^=
 

FunkyJamma

New Member
Crackling audio usually comes from sample rate mismatches. Can you upload your latest log to the analyzer and check whether it shows any errors regarding the audio configuration? From the OBS main menu select Help -> Log Files -> Upload current log file and then click the Analyze button.

Beam on the sender machine doesn't do anything with audio except telling OBS "give me the raw audio data please" as soon as you enable the Beam output. If activating the output triggers this, it means something in your audio setup is not right. The usual checklist would be: make all the sample rates match, disable any extra effects (Windows calls them "Enhancements" in the audio settings, I believe), try with or without exclusive mode, and update to the latest audio drivers (or, if you're already on the latest version, sometimes the opposite helps: going back to slightly older drivers; many vendors offer at least the previous driver version on their website). Also disable any audio devices you don't need, e.g. disable any on-board sound chips if you're using a dedicated sound card, and disable HDMI audio devices from your monitors or audio from webcams if you don't need them.

Audio setup is so tricky that it even has a support channel of its own on the OBS Discord (and BTW, going there and asking for help might also be an option, but since Beam is beta software you should probably first verify whether it also happens when you enable other outputs, like those from NDI or Teleport):
[attached screenshot]
This was the first thing I checked; I've had mismatched audio before and had completely fixed it months ago, so it's not an issue (I checked again to make sure). So I'm not sure what could be causing this. It doesn't happen when streaming directly from the machine, so I will be doing some more testing.
 

YorVeX

Member
QOIR has both better compression rate and lower CPU usage than QOI. So whoever is currently using QOI should switch to QOIR. However, it depends on an external library that is currently only available for Windows (and already included in the Windows download package), mostly because I am too much of a Linux newbie to come up with a working build script for Linux. So I'd be very happy if someone more knowledgeable in this area could jump in and help.

Same story for PNG through the FPNGE encoder.

After some first tests PNG doesn't seem like a very good option; at least for my test video loops it performs a bit worse than QOIR in terms of both CPU usage and compression rate. I am keeping it around for now, because I have yet to figure out whether we can make something of the fact that unlike QOIR it supports 16 bit HDR. Since that is Rec. 2020 and OBS only supports Rec. 2100, I am not sure how and whether this can be useful here.

My main focus for new formats is always keeping the CPU usage low while staying lossless, because for low bandwidth usage (at also low CPU usage) I don't think anything can beat the current JPEG implementation. Following that idea, there are two more formats left that I want to add and try. One is Density, which is supposed to be better than LZ4 in all areas. If that turns out to be true, we'd get another option that works with any color format including HDR, has really low CPU usage in general and stays lossless, but on the downside still has a quite high bandwidth demand.

Another candidate might be ImageZero. It only supports RGB (without alpha channel) and is said not to get very good compression rates on computer generated images (as in gaming content), but on the bright side there is a chance that it becomes the new CPU usage champ for image specific compression algorithms. We'll see. All of this subject to my limited C++ skills being sufficient to actually get this implemented in my C# project here.

After I'm done with my ADD ALL THE COMPRESSION ALGORITHMS rampage I will see whether I can set up some sort of test lab for all of them to get things like CPU usage, compression rate (i.e. bandwidth demand) and compatibility (HDR support? color conversion necessary?) in a table and make an informed decision on which formats will be kept for the stable release and which will be removed again. Or maybe I keep them all and just let the user decide, but the options are already quite bloated, it's not exactly user friendly as it is.
 

YorVeX

Member
So, this is solely on my dev laptop with my test loop at 1080p60: a very personal, setup-specific (you could say "subjective") test that is not at all scientific or in any way robust. Results can and will look entirely different for different CPU brands and models, platform setups in general (RAM speed, mainboard, cooling, energy saving settings...), OBS versions and settings (especially resolution and FPS), and of course the content you push through it. And I don't mean only in absolute numbers; even the relative comparison between compression algorithms could differ. Also, a Task Manager view is not really a good metric.

But it still gives a rough idea, so I thought I'd share it here with that big disclaimer above. My focus was solely on CPU usage; I didn't look at bandwidth yet (though you can assume the compression ratio improves with higher-CPU-usage algorithms), and I plan to do more thorough tests at a later point that will include this. Also, both OBS instances were running on the same machine, with no network between them. Here it comes, ordered from lowest to highest CPU usage, top to bottom (NDI and Density "Cheetah" on the same line because of similar CPU load):
The test was done with the default NV12 color format; therefore I didn't test the lossless image compression formats (JPEG lossless, PNG, QOI and QOIR), because they need a conversion to the BGRA color format first, which has such a huge performance hit that my dev laptop can't even run them without frame drops. That's because the BGRA conversion already has a performance penalty of its own within OBS before any compression is applied, and then the actual compression has to process more data (more bits per pixel) compared to NV12 (32 bits per pixel instead of 12).
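To put that bits-per-pixel difference into numbers, here's the back-of-the-envelope math for an uncompressed 1080p60 feed (my own quick calculation, not Beam output):

```python
# Raw (uncompressed) bandwidth for a 1080p60 feed, before any compression.
width, height, fps = 1920, 1080, 60

nv12_bits_per_pixel = 12   # 4:2:0 subsampling: full-res luma + quarter-res chroma
bgra_bits_per_pixel = 32   # 8 bits each for blue, green, red and alpha

nv12_mbps = width * height * nv12_bits_per_pixel * fps / 1e6   # ~1493 Mbps
bgra_mbps = width * height * bgra_bits_per_pixel * fps / 1e6   # ~3981 Mbps
print(f"raw NV12: {nv12_mbps:.0f} Mbps, raw BGRA: {bgra_mbps:.0f} Mbps")
```

That ~1493 Mbps figure for raw NV12 is also where the "raw" number in the next paragraph comes from.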

I am very happy about my latest addition, Density, which isn't even in a released Beam version yet. The "Cheetah" algorithm has the same CPU load as NDI, so it's an option for people who think the NDI CPU usage is just fine but want lossless (remember that also means HDR support!) and have the bandwidth available to trade for it - my test feed peaks at 870 Mbps for this algorithm when raw would be 1493 Mbps. The "Chameleon" algorithm is an option for people who want to reduce CPU load compared to NDI and have even more bandwidth available to trade for it, but not so much that they can send a raw feed - my test feed peaks at 989 Mbps for this algorithm.

Shoutout to the almighty @Tuna, who made me aware of it; 24 hours ago I didn't even know it existed.
 

danielsan2022

New Member
This plugin works great, except during the stream it eventually freezes.

On the receiving PC (the OBS Studio instance receiving, I mean) I have to open the source properties and click OK, then the stream refreshes.

Think you could build in a timer option to refresh the stream so I don't have to click OK to bring it back every time?

It dies in like 5 minutes sometimes, other times it lasts for over half an hour.

Streaming over WiFi 6; I have WiFi 6E but my devices aren't using it right now.
 

YorVeX

Member
Don't know what to think of this; WiFi (even WiFi 6) just is not suitable for this kind of high-bandwidth, low-latency transmission that also requires high stability. That doesn't mean it's impossible to get a stable setup over WiFi, but achieving that is more luck than anything else. I feel a bit like a car vendor approached by a customer saying "hey, I put tires made of paper on this and somehow it's not performing well on the highway, what can I do?". I'd say don't use paper tires (WiFi) before trying to diagnose any other issues.

That said, I can for sure check whether I see something if you upload logs of a failing session from both sender and receiver and post them here, ideally from a case where you start both, get the error within 5 or 10 minutes, reconnect, and then after another minute close OBS and upload the logs.

[attached screenshot]


Something I'd also suggest, if you have the screen space, would be to use "View Current Log" in that same menu and keep it open while streaming; then you can observe live what is happening when the issue occurs.

Beam already has the refreshing mechanism you suggest: if the sender hasn't received a confirmation packet from the receiver for longer than a second, it forces a reconnect. Likewise if the sender has to drop too many frames because it couldn't send them through the network fast enough. My assumption would be that you sometimes just have short disconnects (as is typical for WiFi), and then the part I need to diagnose is why these mechanisms are not triggered properly for you.
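Conceptually, the sender-side part of that works something like the sketch below (a minimal illustration of the idea only, not Beam's actual C# code; the names here are made up):

```python
# Minimal sketch of the idea: if no receive confirmation has arrived for more
# than a second, force a reconnect. Not Beam's actual implementation.
import time

CONFIRMATION_TIMEOUT = 1.0  # seconds

class SenderWatchdog:
    def __init__(self):
        self.last_confirmation = time.monotonic()

    def on_confirmation(self):
        # called whenever a confirmation packet arrives from the receiver
        self.last_confirmation = time.monotonic()

    def check(self):
        # called periodically from the send loop
        if time.monotonic() - self.last_confirmation > CONFIRMATION_TIMEOUT:
            self.force_reconnect()

    def force_reconnect(self):
        print("no confirmation for >1s, tearing down and reconnecting")
```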

By just hiding the Beam source on the receiver side (= clicking the eye icon next to it) and then showing it again, you should achieve the same reconnect result as opening its properties window. Let me know if it doesn't; that would also be helpful information for my analysis.

Also keep an eye on GH issue #12; my hope is that once it is implemented, the frame buffer you can configure on the receiver side will be able to compensate for short network outages, so you could trade a bit of added latency (e.g. 500-1000 ms) for a feed that is stable even on WiFi. No guarantees there, though.
 

Acey05

Member
Hello,

Other than using Task Manager, is there a way to check the bandwidth rate? Like in OBS or something, or is it literally keeping an eye out for any spikes within Task Manager? Cheers in advance.
 

YorVeX

Member
Checking for spikes in Task Manager should do, I think. But there is also a way using OBS if you prefer:
  1. Run the OBS sender instance (where the output is active) with the --verbose argument
  2. On that OBS sender instance in the main menu click on Help -> Log Files -> View Current Log (as visible in the screenshot from my previous post) to get a new window that shows the live log of OBS
  3. Because of --verbose the log will now spam a lot, but roughly every second you will get a message from Beam stating the net bandwidth usage:
[attached screenshot]


Note that unlike the Task Manager view, this is the net bandwidth used, without the protocol overheads that come on top on your actual network.
Verbose logging can cost performance, so it's better not to leave it enabled for production use.
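If you want a rough feel for how much that overhead adds on the wire, here is a quick estimate (assuming plain TCP over Ethernet with a standard 1500-byte MTU and no extra TCP options):

```python
# Rough gross-vs-net estimate for TCP over Ethernet with a 1500-byte MTU.
payload = 1460                 # TCP payload bytes in a full-sized packet
ip_tcp = 20 + 20               # IPv4 + TCP headers (no options)
ethernet = 14 + 4 + 8 + 12     # header + FCS + preamble + interframe gap

overhead = (payload + ip_tcp + ethernet) / payload - 1
print(f"~{overhead:.1%} on top of the net rate")   # roughly 5%
# e.g. 900 Mbps reported net is roughly 950 Mbps on the wire.
```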
 

danielsan2022

New Member
Ok thanks. You just convinced me I need a switch at my desks here. I have three computers all on WiFi talking to each other, using file shares etc. without issue, but when it comes to streaming, you're right, the WiFi just isn't strong enough. If I could tap into WiFi 6E...my new motherboard can do it...but my other computers suck too much for that. Switch it is. Thanks again.
 

danielsan2022

New Member
Update: Bought a gigabit switch and some Cat 6 cables; all my computers have gigabit Ethernet cards.

Wired it up, set static IPs on all the computers' wired network interface cards, made custom rules for Windows firewall to allow all traffic for those wired cards (just so there's less firewall interference), changed options in Beam on the sending laptop w/ webcam to use the wired card, changed options in Beam source on the receiving laptop to use the wired card.

Restarted OBS Studio on both computers.

Works like a charm! No frame skipping, no freezing.

Thank you.
 

Acey05

Member
Hey, it's me again, sorry, I have a few questions.

Does Beam automatically discover different PCs on a network, or do I need to set up something specific? I couldn't get them to work by name; instead I had to manually set up the TCP Ethernet connection (basically, it's not like NDI or Teleport where it's plug-and-play). Is this right or did I do something wrong?

Does the frame buffer need to be 1,000 for 60 FPS to avoid the desync issues, or is that simply the advice? Like, is there a diminishing return if I go to, say, 2,000 or even 3,000, other than a delay?

The JPEG compression has the opposite effect of what the help pop-out describes: when I decrease the quality, the CPU and bandwidth usage decrease too, instead of one going up while the other goes down. Is this intended, or am I misunderstanding the wording on it?

This one is kinda complicated, so bear with me.

So I did a heavy stress test on my main PC (full on Rave Action + VHS + Static + Chroma video).
When keeping an eye on the Network tab in Task Manager, even if I use the most powerful compression-to-bandwidth-ratio setup (CPU usage above 20%) I get around 800Mbps; if I use the lighter options, it's basically hitting my PC's port cap (standard motherboard 1Gbps), so around 900Mbps.
I don't really care about my main PC's network usage being high, but my router is also just a simple standard router that other devices are connected to.

So I guess my question is: would this technically flood the bandwidth through the router, making streams to, say, YouTube at high upload rates very sluggish or borderline unusable, or would that not matter?

I'm guessing that if I'm hitting the port caps on my router, I might as well shelve Beam for now until I get a proper 2.5/5Gbps setup, or motherboards where I can simply connect those two PCs directly together.

Sorry for the long questions, cheers in advance.
 

YorVeX

Member
Does Beam automatically discover different PCs on a network, or do I need to set up something specific? I couldn't get them to work by name; instead I had to manually set up the TCP Ethernet connection (basically, it's not like NDI or Teleport where it's plug-and-play). Is this right or did I do something wrong?
Peer discovery is not yet in Beam, but I have actually started to work on it just now. For the current version, yes, you still have to fiddle with the Ethernet stuff manually, so you did it all right.
Does the frame buffer need to be 1,000 for 60 FPS to avoid the desync issues, or is that simply the advice? Like, is there a diminishing return if I go to, say, 2,000 or even 3,000, other than a delay?
It depends on the highest lag that you might get. If that will never be longer than 1,000 ms, then setting the buffer to 2,000 won't change anything. Regardless of that, the frame buffer in its current state is also only half-useful; there are many lag scenarios it doesn't cover yet, but that will come later.
The JPEG compression has the opposite effect of what the help pop-out describes: when I decrease the quality, the CPU and bandwidth usage decrease too, instead of one going up while the other goes down. Is this intended, or am I misunderstanding the wording on it?
Now that you mention it, the tooltip text doesn't say anything about CPU usage; it only uses the rather vague terms "less compression" and "more compression". I will have to improve that wording, because the tooltips for other compression algorithms do mention it, so it's not consistent as it is. What you should expect is indeed that a higher quality setting increases both bandwidth and CPU usage, and vice versa.
So I did a heavy stress test on my main PC (full on Rave Action + VHS + Static + Chroma video).
When keeping an eye on the Network tab in Task Manager, even if I use the most powerful compression-to-bandwidth-ratio setup (CPU usage above 20%) I get around 800Mbps; if I use the lighter options, it's basically hitting my PC's port cap (standard motherboard 1Gbps), so around 900Mbps.
I don't really care about my main PC's network usage being high, but my router is also just a simple standard router that other devices are connected to.

So I guess my question is: would this technically flood the bandwidth through the router, making streams to, say, YouTube at high upload rates very sluggish or borderline unusable, or would that not matter?

I'm guessing that if I'm hitting the port caps on my router, I might as well shelve Beam for now until I get a proper 2.5/5Gbps setup, or motherboards where I can simply connect those two PCs directly together.

Sorry for the long questions, cheers in advance.
Are we still talking about JPEG compression? 800-900 Mbps seems very high; is that 4K60? I can't tell from your "most powerful compression to bandwidth ratio setup" statement, because in general it is true that you trade bandwidth for CPU usage, but Beam offers both lossy and lossless compression, and the lossless algorithms will use a lot more bandwidth than the lossy ones, so at this point we're leaving the area where you could do a linear comparison.

E.g. from the post I made above you can see that the Density "Cheetah" algorithm (that's the default strength 2 setting for Density in Beam) will have roughly the same CPU usage as NDI, but the bandwidth demand will be magnitudes higher, because it is lossless:
[attached screenshot]


If you are OK with lossy (it's still almost visually lossless) and on a quite limited bandwidth (like 1G) then the best option will always be JPEG. If the bandwidth isn't even enough for that, then there is no way around either upgrading your network or lowering your quality settings (resolution, FPS or the JPEG quality).
When you have more bandwidth you can trade it for lower CPU usage (down to the lowest possible usage by not compressing at all). This is what initially made me create Beam, because I had problems with NDI and no other option existed for that.

Anyway, I would also think going up to 900 Mbps (and even 800 is a bit of a stretch already) would leave too little room for other things on your network. Unless you have a really expensive router with very smart and/or well-configured prioritization, I would assume other things would suffer too much, e.g. latency getting killed to the point of really laggy, unplayable online games, and of course an unstable live stream.
 

danielsan2022

New Member
Ahh, great post, the recent one just before this one.

JPEG compression it is. Using other kinds of compression, all lossless I admit, resulted in a network failure and a need for a reboot of the offending laptop.

JPEG without the lossless option checked is both fast/low bandwidth and looks good!
 

Acey05

Member
Are we still talking about JPEG compression? 800-900 Mbps seems very high; is that 4K60? I can't tell from your "most powerful compression to bandwidth ratio setup" statement, because in general it is true that you trade bandwidth for CPU usage, but Beam offers both lossy and lossless compression, and the lossless algorithms will use a lot more bandwidth than the lossy ones, so at this point we're leaving the area where you could do a linear comparison.

Sorry, I should have clarified: I was testing out all the possible compressions and settings, and I don't remember which one it was (I think it was either QOI/LZ4 or Density with all the compression settings cranked up to reduce the bandwidth usage), and this was at 1080p60. I also tried RGBA, but that may have been too much for my performance, so I had to give up on the QOI/PNG options.

Again, to be fair, I was trying to push the system to the limit with the video I was testing with (I'm talking about a worst-case scenario, something that's a pixelated mess even at the highest quality encode with a 6K bitrate), since that is what kills things on a not-so-great older CPU (i7-7700K).

So it was expected that things would get pretty high. I just wanted to make sure that, if that's the case on standard 1Gbps connections, I might as well stay on NDI or try Beam's JPEG compression and mess around with that.

Anyways, cheers for the answer!
 

YorVeX

Member
JPEG has a bit higher CPU usage than NDI, but at least on my system it's not that noticeable (NDI vs. Beam JPEG at quality 90), and on the other hand it has a significantly lower bandwidth demand without sacrificing visual quality (in fact, I have read several comments where people say that the JPEG quality was better). That might or might not be different for other CPU models and generations, depending on how well they handle the specific optimizations that NDI and JPEG each use and how they utilize the cores (e.g. making use of multiple cores vs. depending on high single-core performance).

I think in total JPEG is a bit more efficient about its compression. But here's the thing: if your 1G connection is enough for NDI, why not just take that small CPU usage improvement it comes with?

For me personally I know why: I had some serious A/V sync problems with it that I've never been able to fix. But my setup with 6 OBS instances is quite special; if NDI works for you, then use that. I'd say for 1G connections NDI is actually in a good place regarding the CPU vs. bandwidth demand tradeoff.

JPEG lossy becomes more interesting when you are using higher resolutions/FPS or have a lot of other stuff going on in your network so that the NDI bandwidth demand is too high.

And the other options are for people who want the highest possible production quality, want to stay lossless in their production chain as long as possible, or want 10-bit HDR (which Beam supports with raw, LZ4 or Density compression), and who also have the rather expensive setup that can handle it (e.g. a 10G network and/or a lot of CPU power).
 