Question / Help 18 second delay? (1500 bitrate) Does source size matter or just Base Resolution? Does Downscale work?

GarbagePlay

New Member
I've got a few videos I load up as local and global sources. They are in either 720p or 1080p. Does this have any effect on the amount of data being sent to Twitch and the delay?

I have my broadcast bitrate set at 1500.

I set my base resolution to 1280x720 and I use resolution downscale @1.25x to make it 1024x576.

Is this properly broadcasting a smaller size and bandwidth stream, or do I need to downsize everything in obs manually?

Hopefully this makes sense. I'm a little tired and scatterbrained.
 

GarbagePlay

New Member
And to make sure I'm not being confusing: does the size of the global source video files matter within the overall upload bandwidth?
 
It's always hard to tell where delay comes from; usually most of it is from Twitch itself. If by 18 seconds you mean the difference between your local machine and what you see in the Twitch preview, that's actually a somewhat low delay.

As for scaling... I personally avoid scaling whenever possible; it wastes CPU and tends to produce worse quality. The only time scaling makes sense to me is, e.g., if you have a scene you usually local-record with, but for streaming you want to downscale it for convenience. If you use your scene just for streaming, I'd recommend creating the scene in 1024x576 so OBS doesn't have to downscale it at runtime. This also allows you to keep any images you may use as a background in 1024x576 as well, so you can include them at their native resolution.
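For reference, here's the arithmetic behind that output resolution, as a quick Python sketch (values taken from the posts above; 1.25 is the downscale dropdown value mentioned):

base_w, base_h = 1280, 720   # Base Resolution from the original post
factor = 1.25                # Resolution Downscale dropdown value
out_w, out_h = int(base_w / factor), int(base_h / factor)
print(out_w, out_h)          # 1024 576 -- the resolution actually encoded and sent

Note that bandwidth is governed by the 1500 bitrate setting either way; the smaller output resolution just means those bits are spread over fewer pixels, which generally improves quality per pixel.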
 

GarbagePlay

New Member
Thanks, that second part especially was exactly the kind of info I was looking for. I think I'll go ahead and set up the whole scene in 960x540. Full-size YouTube let's plays can wait until OBS MP gets local source selection :) (So that I can spit out a high-res version while streaming but keep my scene toggles.)
 
Why 960x540 now? Not that it's wrong, because it's still a true 16:9 resolution, but if you want to be 100% safe you also want your resolution to be divisible by 8. 1024x576 is divisible by 8 (1024 is 128*8, 576 is 72*8). The next smaller 16:9 resolution divisible by 8 is 896x504, which happens to be what I stream in.
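If you want to enumerate those yourself, a small Python sketch (just arithmetic, nothing OBS-specific):

# 16:9 resolutions are (16*k, 9*k); 9*k is divisible by 8 only when k is,
# so stepping k by 8 lists every 16:9 resolution divisible by 8.
for k in range(56, 128, 8):
    print(f"{16*k}x{9*k}")
# 896x504, 1024x576, 1152x648, 1280x720, ... up to 1920x1080 at k=120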
 

GarbagePlay

New Member
I was reading that 960x540 is good because it's exactly one quarter of a 1920x1080 frame. Not really sure on the math, though. Why do I want my resolution divisible by 8? Because there are 8 bits in a byte?
 
It has to do with how x264 encoding splits the image into "macroblocks" of varying sizes, one of which is 8 by 8 pixels. Most of the time a non-8-divisible resolution will just leave extra unused pixels in the last macroblocks, but some encoders handle this badly, causing potential issues such as blurring. I remember older OBS versions producing very slightly blurred recordings with non-8-divisible resolutions (not sure if that still applies).

960x540 is still a pretty proper resolution: a nice round number, exactly a quarter of 1080p, divisible by 4, and somewhat commonly used. I guess it'd be best to test whether 540p causes any blur in your stream or something like that.

EDIT: I just did a test regarding the blur, and it seems resolutions divisible by 4 are fine as well. So 540p should be perfectly okay. You can forget what I said about divisible by 8.
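For anyone following along, here's a quick divisibility check of the resolutions mentioned in this thread (plain Python, nothing more):

for w, h in [(1920, 1080), (1280, 720), (1024, 576), (960, 540), (896, 504)]:
    div4 = w % 4 == 0 and h % 4 == 0
    div8 = w % 8 == 0 and h % 8 == 0
    print(f"{w}x{h}: divisible by 4: {div4}, by 8: {div8}")
# 960x540 passes the divisible-by-4 check but not the by-8 one,
# which per the blur test above is fine.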
 
Last edited:

FerretBomb

Active Member
Definitely set your base resolution to your game or monitor resolution, and downscale only once with the downscale dropdown box in Settings.

Absolutely DO NOT set your base resolution lower than your in-game res, if you care about quality. The in-preview source squash method is a much lower quality downscale. Even worse is downscaling twice (both base resolution smaller than the game res, and then downscaling further from that). It's like copying a copy of a videocassette at that point; scaling errors are amplified.

The reason you use the in-settings downscale dropdown is twofold. One, it's a post-composition downscale, so the elements essentially get antialiased with one another (in the preview squash-down method, they're scaled independently and look worse as a result). Two, it allows you to keep only one set of art assets and still cast at different resolutions depending on the game you're playing; if it's super-low-motion (Hearthstone et al.) you may actually be able to get away with 1080p on a drastically too-small bitrate.

As far as delay is concerned, it's a function of your dropped frames and your viewers' packet loss. The more packets the viewer misses (whether from your end or from the viewer's connection to the server), the more chunks the player will buffer as intelligent compensation, and so the longer the delay grows. I believe the shortest possible at present is 9 seconds, due to a lot of the work Twitch has put into using smaller chunks for HLS and streamlining things.
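As a very rough mental model (my own toy numbers, not Twitch's actual player logic), the buffering effect looks something like this:

chunk_seconds = 2            # assumed HLS chunk length
base_chunks = 3              # assumed minimum the player keeps buffered
extra_chunks = 6             # grows as the viewer sees packet loss
delay = (base_chunks + extra_chunks) * chunk_seconds
print(delay)                 # 18 -- in the ballpark of the delay in the title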
 
The in-preview source squash method is a much lower quality downscale.

It is?

Also, you're not completely correct about in-settings being better: if you only need to downscale one of your sources, just squashing that one is definitely better. A good example is using a background: you can leave the background image completely unscaled while only the game you put on top of it is being scaled. Scaling everything would make your entire scene look worse from downscaling, while only downscaling the game via squashing lets everything else stay at 1:1 resolution.

If you have only one source in your scene, or all sources in the scene need to be scaled by the same amount, I would agree that scaling the entire scene is probably equivalent or better, though.

EDIT: I did a very quick test, and it seems squash scaling simply uses bilinear scaling, so it's equivalent if you were going to use bilinear for your scene-wide scaling, and slightly worse than scene-wide scaling with a higher quality scaling setting. I'm still not sure which is better for performance, but I'm leaning towards squash scaling there.
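If anyone wants to reproduce that comparison offline, here's roughly how I'd eyeball it with Pillow (this is not OBS's internal scaler, just a stand-in to compare bilinear against Lanczos; the screenshot filename is made up):

from PIL import Image

src = Image.open("scene_1280x720.png")          # hypothetical scene screenshot
target = (1024, 576)
src.resize(target, Image.BILINEAR).save("bilinear.png")
src.resize(target, Image.LANCZOS).save("lanczos.png")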
 
Last edited:

FerretBomb

Active Member
It is. If you need an example, try squashing down a webcam, or a larger overlay asset or corner bug. They're the most visible examples of the kind of artifacting and problems you can/will get using the squash method. It happens even with a video source (video, game cap, window cap, etc.), though it will be less pronounced in motion. (Which is why people pushing for 60fps at too-low bitrates think they look fine, until they pause the video.)

Yes and no. First, if you want just one source to look bad, then squashing that one source is fine. Otherwise, it looks better overall to maintain a consistent scaling level across the board; it's less jarring. Much better to have all of your assets running at the scale of the game, and do an overall downscale to take advantage of the full-frame scaling benefits I'd mentioned above.

That said, if you run separate sets of art assets for each base resolution level and run native, that will provide the best quality. But if you can, you want to AVOID *ever* scaling anything down in the preview. Up is OK(ish), but down should be avoided as much as possible. Webcams, again, are generally the exception to this, as they tend to be Global sources, so you can't set a different resolution for a fullscreen cam and an in-game cam.

I've done comparatives before with just about every variation, back when I was just getting started. I'll have to see about finding the screenshots and posting them up for side-by-side comparison. Short version: don't squash in the preview, or you will lose significant quality compared to the overall downscale.
 
I honestly think you're seeing more quality loss than there really is. From the quick comparison I did yesterday, it would appear that squash scaling uses the same bilinear scaling as the scene-wide bilinear setting, so the only downside is that you can't use the bicubic or Lanczos scaling methods. If you can prove otherwise, go ahead.

Yes and no. First, if you want just one source to look bad, then squashing that one source is fine. Otherwise, it looks better overall to maintain a consistent scaling level across the board; it's less jarring. Much better to have all of your assets running at the scale of the game, and do an overall downscale to take advantage of the full-frame scaling benefits I'd mentioned above.

That's not true as a general rule, although it's not entirely inaccurate for many common cases. Having all of one's assets at the game's resolution just so you can do an overall downscale is a really bad general rule, especially when it requires upscaling assets. Also, downscaling assets that don't need to be downscaled is definitely, 100% always worse than not downscaling them, even at the cost of your game looking very slightly worse. (I guess we can both agree that not using any form of scaling at all is the obvious best option, if possible.)

It's also important to note that scaling a 960x720 game (as an example) to 480x360 to fit it into a 360p 16:9 layout with something next to it is definitely more performance-efficient than doing the whole setup in 720p and then downscaling all of it. In the first case you are downscaling 960x720 to 480x360; in the latter you are scaling a larger 1280x720 to 640x360.
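The pixel counts back that up (toy arithmetic only, using the example resolutions above):

squash_only = 960 * 720      # downscale just the game source to 480x360
scene_wide = 1280 * 720      # composite at 720p, then downscale the whole frame
print(squash_only, scene_wide)   # 691200 vs 921600 -- ~25% fewer pixels to scale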
 

FerretBomb

Active Member
Again, need to find the screenshots I'd taken for comparison, or just make some new ones.

You should not be upscaling assets; I was more referencing upscaling a smaller game source (like a 240i capture from a retro console) to fit a template scene layout.
Downscaling the overall scene post-compositing is vastly preferable to downscaling individual separate elements manually in the Preview. Streaming at a native resolution works, but at that point you need to maintain assets at each resolution level you plan to use, as well. A full-frame post-compositing downscale from a single higher-resolution master asset isn't going to look as good as running native, of course.

Yes, but the in-preview downscale is going to look noticeably worse. Performance doesn't even begin to factor in; the performance difference between the downscale methods is so negligible as to be irrelevant. Downscaling multiple elements is going to have quite a bit more impact than simply downscaling the final post-composited frame (which the dropdown does). You also give up the in-scene effective antialiasing, as OBS doesn't appear to properly maintain the alpha channel through scaling when handling individual elements/sources. Just look at the edges of a chromakeyed webcam source: they get ridiculously jagged when you scale down, instead of holding the smooth edge they should when resized from a larger source, along with any patterns (horizontal pinstripes make this especially noticeable).
When the cam is run at native resolution and a full-frame post-compositing downscale is used, the rescaling issues are noticeably less severe. Admittedly, that's a bit of a torture test, but it highlights the problem, which STILL applies to all sources squashed in the preview window.
 
Downscaling the overall scene post-compositing is vastly preferable to downscaling individual separate elements manually in the Preview.

If you need to downscale multiple elements by the same amount, I agree that a scene-wide downscale is probably better. The main case I'm talking about here is when you have only one asset you need to downscale, in which case I think having that one asset look very slightly worse (I still don't believe squash scaling is significantly behind regular bilinear scaling, outside of chromakeying, of course) is a small price to pay for a chance at having everything else looking juicy in native.

Performance doesn't even begin to factor in; the performance difference between the downscale methods is so negligible as to be irrelevant.

With a streaming monster like your machine that may be true, but when you have to squeeze every last bit of performance out of a machine, it can become relevant. Once again, I'd have to redo my performance tests; maybe I'm remembering the performance difference between squashing and scene-scaling as bigger than it really is.

Anyway, if you can't find your screenshots, I can do a more rigorous test of this, including performance, to find out how big the difference is. Maybe I should do that anyway, as screenshots are pretty much entirely irrelevant if they don't include a clear way to tell from what resolution to what resolution a source was scaled (obviously, scaling unevenly is going to look bad, through no fault of the scaling method used).
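Something like this Pillow-based harness is what I have in mind for the performance side (Pillow is only a stand-in for OBS's scalers, so the numbers are relative at best):

import time
from PIL import Image

img = Image.new("RGB", (1280, 720))   # placeholder frame; a real screenshot would be better
for name, flt in [("bilinear", Image.BILINEAR), ("lanczos", Image.LANCZOS)]:
    t0 = time.perf_counter()
    for _ in range(200):
        img.resize((1024, 576), flt)
    print(name, time.perf_counter() - t0)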
 