Question / Help How to determine the bitrate needed for a game

frozywuzy

New Member
Hello,

I am curious whether it is possible to determine how much bitrate a game needs.
Is it related to the resolution? In-game graphics settings? Both?
Is it possible to come up with a good estimate of the bitrate needed in game?
Is there a program that can tell me this kind of information?
Is there a certain bitrate I should not go above?
The reason I am asking is that I am looking to record in either 1080p or 1440p with no resolution downscale.

I have a gaming and streaming PC setup.

Gaming PC specs :

CPU: i7 8700K OC 5 GHz
GPU: GTX 1080 OC
Water CPU cooling: H100i
Memory: 32 GB DDR4 3200 MHz

Streaming PC specs :

CPU: i7 4770K OC 4.2 GHz
GPU: GTX 1060 6 GB OC
Water CPU cooling: H100i
Memory: 16 GB DDR4 3200 MHz
Capture card: Elgato 4K60 Pro

Thank you.
 

carlmmii

Active Member
Set a CRF or CQP value for the quality you want. That's all you need to do.

As for the why...

The bitrate required to compress the source image down to the final encoded bits depends entirely on how complex the scene is for the compression algorithm, as well as on the actual quantity of image data that needs to be compressed per unit of time (i.e. resolution and framerate). The in-game graphics settings normally have very little to do with this (the exception being blurry vs. sharp textures).

Resolution and framerate are fairly easy to understand -- lower resolution means less bitrate is needed for the same quality level. The same goes for framerate: fewer frames need to be recorded, so less space is taken up for the same length of final recording.
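
To make that concrete, here is a quick back-of-the-envelope sketch (Python, purely illustrative numbers) of how the pixel throughput the encoder has to chew through scales with resolution and framerate -- bitrate at a constant quality scales roughly, though not exactly, with this figure:

```python
# Rough illustration of how the pixel throughput scales with resolution and
# framerate. "Same quality needs roughly proportional bitrate" is only a rule
# of thumb, not an exact law.

def pixels_per_second(width, height, fps):
    return width * height * fps

base = pixels_per_second(1920, 1080, 60)   # 1080p60 as the reference point

for label, w, h, fps in [("1080p30", 1920, 1080, 30),
                         ("1080p60", 1920, 1080, 60),
                         ("1440p60", 2560, 1440, 60),
                         ("2160p60", 3840, 2160, 60)]:
    ratio = pixels_per_second(w, h, fps) / base
    print(f"{label}: {ratio:.2f}x the pixel throughput of 1080p60")
```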

The complexity as seen by the compression algorithm is the "black box" of the equation, though. There are three things to keep in mind that will help...

The overall "detail level" of the image being compressed. If you're familiar with JPEGs, and how some JPEGs need a much larger file size to keep their quality, it's the exact same deal. Sharp contrast is harder to compress because it takes more data to retain fine detail. Fuzzy/blurry content is much easier, because it can be approximated without losing too much detail. This is a gross oversimplification, but that's the general concept.
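
If you want to see that JPEG analogy for yourself, here is a small sketch (assumes Pillow and NumPy are installed; the exact byte counts will vary) that saves a noisy image and a blurred copy of it at the same JPEG quality and compares the sizes:

```python
# The sharp/noisy image needs far more bytes than its blurred copy at the
# same JPEG quality setting -- the same effect video codecs run into.
import io
import numpy as np
from PIL import Image, ImageFilter

rng = np.random.default_rng(0)
noisy = Image.fromarray(rng.integers(0, 256, (720, 1280, 3), dtype=np.uint8))
blurry = noisy.filter(ImageFilter.GaussianBlur(radius=4))

def jpeg_size(img, quality=85):
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    return len(buf.getvalue())

print("sharp/noisy image:", jpeg_size(noisy), "bytes")
print("blurred image    :", jpeg_size(blurry), "bytes")
```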

The rate of image change. This is the big one for video compression, and is the reason it's even possible to keep bitrate down to manageable levels in the first place. Video compression relies heavily on compressing the difference between frames, so the more similar two consecutive frames are, the less data is needed to keep the quality at the necessary level. The result is that "slower" scenes can be compressed better, while fast-moving scenes need a lot more data to retain the same quality.
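
A toy way to see the frame-difference idea (assumes NumPy; zlib is standing in for a real video codec here, so this only illustrates the principle): compress a frame on its own versus compressing only its difference from the previous, nearly identical frame:

```python
# Two almost identical "frames" differ only in a small moving block; the
# difference compresses to far less data than the full second frame.
import zlib
import numpy as np

rng = np.random.default_rng(0)
frame1 = rng.integers(0, 256, (720, 1280), dtype=np.uint8)
frame2 = frame1.copy()
frame2[100:140, 200:240] = 255          # small region changed between frames

full  = len(zlib.compress(frame2.tobytes()))
delta = len(zlib.compress((frame2.astype(np.int16) - frame1).tobytes()))
print("second frame compressed on its own:", full, "bytes")
print("difference to previous frame      :", delta, "bytes")
```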

The quality of the encoding algorithm. All other things above being equal, the quality of the compression algorithm is the last remaining factor in how much quality is retained for a given amount of data (or, if we're talking about CRF/CQP, how little data is needed to retain the necessary quality level). With something like x264, there are multiple presets to choose from which enable certain features (usually at the cost of processing power). There's also hardware encoding (specifically NVENC... not familiar with AMF), where the algorithm is mostly fixed by the actual silicon the data is fed through, with very little that can be done for quality improvements (with the exception of some algorithm changes which can leverage CUDA processing).
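
As a rough illustration of the preset trade-off (assumes an ffmpeg build with libx264 on your PATH; the clip is ffmpeg's synthetic test pattern, so only the relative sizes mean anything), you can encode the same source at the same CRF with different x264 presets and compare the results:

```python
# Encode the same synthetic 5-second clip at CRF 23 with several x264
# presets; slower presets generally squeeze the same quality into fewer bytes.
import os
import subprocess

for preset in ["ultrafast", "veryfast", "medium", "slow"]:
    out = f"test_{preset}.mp4"
    subprocess.run(
        ["ffmpeg", "-y", "-f", "lavfi", "-i",
         "testsrc2=duration=5:size=1280x720:rate=30",
         "-c:v", "libx264", "-crf", "23", "-preset", preset, out],
        check=True, capture_output=True)
    print(f"{preset:>9}: {os.path.getsize(out)} bytes")
```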

All of this being said, that's the background of why it's nearly impossible to say "you need this much bitrate for this type of thing". If you're dealing with bitrates specifically, it's usually because you have limited bandwidth and need to keep as much quality as possible. That situation would determine things like your framerate and resolution based on the complexity of what you're sending the encoder... and as mentioned, in-game settings do very little to affect this.

When bitrate limits are not much of a concern, you can instead tell the encoder to aim for a quality target -- this is what CRF and CQP do (each encoder has its own acronym and method for this... I think QuickSync has ICQ as well?). These try to match a specific output quality and use as much data as necessary to reach it in the final output. The lower the number, the higher the quality -- generally, 20-25 is recommended as "good enough", but it's always up to you how much quality you want to retain.
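
To see the quality-target behaviour itself, here is the same kind of sketch with the preset fixed and only the CRF varied -- lower CRF, bigger file, more retained quality (again assuming ffmpeg with libx264; the absolute numbers are not meaningful beyond the trend):

```python
# Same synthetic source, same preset, different CRF values.
# Lower CRF keeps more quality and therefore produces a larger file.
import os
import subprocess

for crf in ["18", "23", "28"]:
    out = f"test_crf{crf}.mp4"
    subprocess.run(
        ["ffmpeg", "-y", "-f", "lavfi", "-i",
         "testsrc2=duration=5:size=1280x720:rate=30",
         "-c:v", "libx264", "-preset", "medium", "-crf", crf, out],
        check=True, capture_output=True)
    print(f"CRF {crf}: {os.path.getsize(out)} bytes")
```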
 

koala

Active Member
In general:
  • the higher the resolution, the larger the amount of data that has to be processed. Double the resolution (in both dimensions) means 4 times the data.
  • the higher the framerate, the larger the amount of data that has to be processed. Double the framerate means double the data.
This describes the raw picture data and the relation between resolution, fps and amount of data.
With compression (encoding), this raw data is converted to the actual data you store or stream. The relation between resolution, framerate and amount of data is roughly the same as with the raw data. Exception/rule of thumb: the larger the amount of data, the more efficient the encoding, so it's not exactly 4 times / 2 times any more but a bit less.
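
To put rough numbers on the raw-data side (Python; assumes the 8-bit 4:2:0 pixel format encoders typically receive, i.e. 1.5 bytes per pixel):

```python
# Raw (uncompressed) data rate before the encoder touches it.
def raw_megabytes_per_second(width, height, fps, bytes_per_pixel=1.5):
    return width * height * fps * bytes_per_pixel / 1e6

print(f"1080p30 raw: {raw_megabytes_per_second(1920, 1080, 30):.0f} MB/s")
print(f"2160p30 raw: {raw_megabytes_per_second(3840, 2160, 30):.0f} MB/s  (4x: resolution doubled in both directions)")
print(f"1080p60 raw: {raw_megabytes_per_second(1920, 1080, 60):.0f} MB/s  (2x: framerate doubled)")
```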

The higher the compression, the smaller the encoded data.
The lower the original image detail and complexity of the game, the smaller the encoded data (fast shooters often have less image complexity, so they compress better than the highly complex, sophisticated graphics of some RPGs).
The better the material compresses, the better the image quality you get out of a given bitrate (because you can fit more picture information into the available space).

With recording, the amount of data is mostly unimportant, because usually you have unlimited disk space.
With streaming, the amount of data is crucial, because you have limited bandwidth.

So you need to distinguish between recording and streaming in your settings.
With recording, you want the highest quality and don't care about disk space, so you are looking for settings that produce encoded material that looks indistinguishable from the original game and still doesn't waste disk space.
With streaming, you are constrained by the available bitrate -- not by your upload bitrate alone, but also by the bitrate your viewers are able to download.

In the end, what settings do you need?

For recording, you use a quality-based rate control with your encoder, such as CQP (NVENC), CRF (x264) or ICQ (Quicksync). The crucial parameter is the quality value, which you vary between 18 and 25. Lower values mean higher quality, so this parameter controls how much detail is removed. The quality of the resulting video is guaranteed regardless of whether the scene is high or low motion. This is what you need for archiving or postprocessing.
For recording with the purpose of archiving or postprocessing, you never use a bitrate-oriented rate control (CBR, VBR), because these modes lower the quality of high-motion scenes to satisfy the bandwidth constraint, and they bloat the file for low-motion scenes for no reason, because they keep consuming the given bandwidth even if absolutely nothing is moving on the screen and you are only recording a still image.
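
As a minimal sketch of what such a quality-based encode looks like outside of OBS -- here using ffmpeg's NVENC encoder in constant-QP mode, which assumes an Nvidia GPU and an ffmpeg build with h264_nvenc; "input.mkv", the QP of 20 and the output name are just placeholders. In OBS itself you simply pick the CQP rate control and set the CQ level:

```python
# Hypothetical example: re-encode a recording with constant QP via NVENC.
import subprocess

subprocess.run(
    ["ffmpeg", "-y", "-i", "input.mkv",
     "-c:v", "h264_nvenc", "-rc", "constqp", "-qp", "20",  # quality-based rate control
     "-c:a", "copy",                                       # keep the audio untouched
     "archive.mkv"],
    check=True)
```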

For streaming, some rules of thumb apply. There are recommendations from streaming providers for bandwidth:
https://stream.twitch.tv/encoding/
https://support.google.com/youtube/answer/2853702

tl;dr:
If you want to record with OBS, use the Simple output mode, NVENC as the encoder if you have an Nvidia card, and the quality setting "Indistinguishable Quality". This is the correct setting regardless of resolution and fps. Internally, it uses the CQP rate control with an appropriate CQ value chosen according to resolution.
If you want to stream with OBS, see above streaming provider recommendations.
It may be necessary to reduce resolution or game fps, if the computer isn't able to keep up with processing the data, resulting in laggy or choppy video.
 

frozywuzy

New Member
Thank you so much for the well detailed explanations. I could not have asked for something better.
You guys are the best. Have a great rest of your day!
 

koala

Active Member
It was a good question. You are the first poster in ages who is looking for some understanding, while basically everyone else simply asks "tell me what settings I should use", which isn't even an interesting thing to look at.
 