NVIDIA NvEnc Guide

The objective of this guide is to help you understand how to use the NVIDIA encoder, NvEnc, in OBS. We have tried to simplify some of the concepts to make this accessible to a wider audience. If you think we can improve any part of this guide or find any issues or mistakes, please post below and we will be happy to update it.

  1. NvEnc in OBS
    1. About NvEnc
    2. Recommended Settings For Streaming
    3. Recording Settings
  2. Additional Info on Video Encoding
  3. How to Debug Problems
  4. Recommendations for Evaluating Encoders

NvEnc is NVIDIA’s encoder. It’s a physical section of our GPUs that is dedicated to encoding only. This means that your GPU can operate normally regardless of whether you use this region to stream or record. Other encoders, such as x264, use your CPU to encode, and fight for resources with other programs such as your game. That’s why using NvEnc allows you to play games at a higher framerate and avoid stutters, giving your viewers a better experience.

In the last 2 GPU generations we have made great improvements to NvEnc to make sure that the output quality is best in class. NvEnc in the GTX 10-Series cards provides better quality than x264 Very Fast, the most commonly used x264 preset. And in the new RTX 20-Series, NvEnc is above x264 Fast and on par with x264 Medium, a preset that only a very limited number of professional streamers can use, as it requires a very expensive dual-box system.

One thing that is great about NvEnc is that it’s defined per generation. Our cards carry the same NvEnc chip whether you get the top of the line card (e.g. RTX 2080Ti) or a more affordable one (e.g. RTX 2070).
NvEnc also benefits from our own NVIDIA Video Codec SDK, an advanced set of tools that help improve the encoded quality and that we constantly update to help you get the best out of your NVIDIA card.

Finally, if you are using an NVIDIA GPU you have access to GeForce Experience’s Game Filters, which allow you to further improve the image quality your viewers see by enhancing color, adding sharpness, and more.

Bitrate, Resolution and Framerate
Remember that encoding is all about compressing images. The smaller the image, the less we have to compress it and the more quality it keeps. The same applies to framerate, but viewers notice a drop in FPS much more than a drop in resolution, so we will always try to stream at 60 FPS.
First, run a speed test to determine your upload speed (e.g. Speed Test). We want to use around 75% of your upload speed, as the game and other programs such as Discord will also fight for bandwidth.
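The 75% rule above is easy to sanity-check with a quick calculation; the upload speeds here are only illustrative examples:

```python
def recommended_bitrate_kbps(upload_mbps, headroom=0.75):
    """Stream bitrate (kbps) leaving ~25% of upload free for games, Discord, etc."""
    return int(upload_mbps * 1000 * headroom)

# e.g. a 10 Mbps upload leaves roughly 7,500 kbps for the stream
print(recommended_bitrate_kbps(10))
```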
Then, we will determine the resolution and FPS that we can use for that bitrate. Most streaming sites have recommendations (e.g. Twitch, Youtube) on what to use. These are ours:


  • Important Note for Fast Content: if you are going to stream high movement scenes (e.g. racing games, some Battle Royale games, etc.) we highly recommend reducing your resolution slightly. Fast moving content cannot be compressed as much, and suffers from more artifacting (encoding errors). If you reduce the resolution, you reduce the data that has to be encoded, and the resulting viewer quality is higher. For example, for Fortnite many streamers decide to stream at 900p 60 FPS.
  • Important Note for New and Upcoming Streamers to Twitch. Twitch only offers transcoding to Partners. This means that whatever you stream at is what your viewers will see. Many viewers don’t have powerful connections or may be watching you on phones over 3G/4G. You may want to consider streaming at 720p60 to lower the bandwidth required to watch your channel and grow more easily.
Now we can go to OBS and enter the settings. The resolution and framerate are selected in Settings, Video tab. If you have selected a smaller output resolution than your canvas, you have 2 options:
  • Video tab with Downscale Filters. This allows you to select a downscale filter that provides a small image sharpness enhancement at the cost of some encoder workload. NvEnc is very efficient and typically runs below 50% utilization, so we recommend using the Lanczos (32 samples) option.
  • Output tab. If you want to do this, leave the same canvas and output resolution in the Video tab, and just select your output resolution in the Output tab.
Now we are ready to adjust our settings!

Output Tab Settings
In output mode select Advanced. This gives you access to all the settings. Let’s start!
  • Encoder: Select NVIDIA NVENC
  • Enforce Streaming Service Encoder Settings: uncheck this. We want to control some settings ourselves, as streaming platforms usually allow you to deviate a bit from their default settings.
  • Rescale output: Uncheck this. As per the above, for absolute best quality do not use this, instead use the re-scale functionality of the Video tab. If you are experiencing encoder overload issues, use this instead.
  • Rate Control: Select CBR. This determines the rate at which frames are going to be encoded. For absolute best quality it would be better to let the encoder determine the bitrate needed for each frame (i.e. Variable Bitrate, or VBR), but streaming platforms struggle with such fluctuations and it results in bad image quality. Therefore, we use Constant Bitrate, or CBR.
  • Bitrate: put 75% of your upload speed, as we calculated above.
  • Keyframe Interval: Set to 2. Streaming platforms may limit what you can select here, and most require a setting of 2. You can read more on Key-frames below.
  • Preset: Select High Quality. This determines how much load we put on the encoder to get more quality. NvEnc is incredibly efficient, so most users will be able to use the High Quality preset. If you get encoder overload issues, consider unchecking 2 pass encoding, and if the issue persists, change this to Default or High Performance.
  • Profile: Set to High. Profile determines a set of settings in the H.264 Codec. It doesn’t impact performance, and gives access to a set of features that are key to streaming, so this should always be set to High.
  • Level: leave at Auto, as the encoder can adjust this automatically for you. You can read more about Levels here.
  • 2 pass encoding: Check. This allows the encoder to look at each frame twice and fix issues. It nearly doubles the load on the encoder, so if you have overload issues this will be the first thing we will want to uncheck.
  • GPU: if you are running a dual GPU system you can select which card you want to use for encoding.
  • B-Frames: Set to 2. Streaming platforms may limit what you can select here, and most allow 2-4. You can read more on B-Frames below.

Video Tab Settings
Select the canvas, output resolution and FPS that we determined at the beginning of this section. Remember to select 32 samples in the Downscale Filter if you are using an Output resolution different from the Canvas resolution.

Important Note: for the best possible image quality, we have found that playing at (and setting the Canvas to) 4K and streaming at 1080p with the 32 samples downscale filter yielded the best possible results, although it required 6,000+ bitrate.

Advanced Tab Settings
This tab can normally be left untouched, but it has a couple of settings that we will cover:
  • Process Priority: this can be set to High if you are experiencing issues where OBS struggles when it’s a background application but is fine when it’s your main window. Consider disabling Windows Game Mode as well, as it reduces the performance of other Apps when you are gaming.
  • YUV Color Space: Set to 601. This controls the possible colors that are shown on the screen. Increasing it enables a wider range of colors that can reproduce the source content better, but may decrease overall quality as it hurts compression.
  • YUV Color Range: Set to Partial. Similar to the Color Space, the Color Range reduces the colors that can be produced, with the same effects. If you select Full you get more fidelity, but you hurt compression and overall quality may suffer.
What you will notice is that with the default settings the images look washed out in comparison to how they look on your screen, but if you change the Color Space to 709 and Color Range to Full, the images will be more vivid. The problem is that you hurt overall compression, so the quality of your stream will suffer with fast changes.

And there you have it! We hope this helps you improve your stream quality and reach your goals. Leave us a comment if this worked for you or if you’d like us to update the guide or include anything else. Happy streaming!

OBS offers a wide range of options for recording. Since you are no longer constrained by upload speed or platform limits, you can basically run whatever resolution and FPS you want, and crank up the encoding settings for better image quality.

To do this, go to Settings, Output tab, and then select recording at the top. The main differences here from streaming are the following:
  • Recording Format: you can select your file’s format. MP4 tends to work well for most uses.
  • Rate Control: you can now consider using VBR, CQP or even Lossless. Lossless provides the best quality but the file size is absurdly large, so it’s not useful to most people. Between CQP and VBR it’s a matter of choice; either is a good option.
    • If you select VBR, select the desired output bitrate. For a flawless recording, we recommend 40,000 bitrate for 1080p.
    • If you select CQP, you can mostly leave everything intact, the default options work quite well.
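To get a feel for what the recommended 40,000 kbps VBR setting means on disk, here is a rough size estimate (actual VBR output varies with content, so treat this as a ballpark upper bound):

```python
def recording_size_gb(bitrate_kbps, minutes):
    """Approximate file size in GB for a recording at a given average bitrate."""
    bits = bitrate_kbps * 1000 * minutes * 60
    return bits / 8 / 1e9  # bits -> bytes -> gigabytes

# A one-hour 1080p recording at 40,000 kbps is roughly 18 GB
print(round(recording_size_gb(40000, 60), 1))
```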

A 1080p video stream, the resolution most gamers play at, is far too big to stream uncompressed. To be precise, an uncompressed 1080p video stream at 24 bits per pixel and 60 FPS is 24×1920×1080×60 ≈ 2.99 Gbps, which is about 3,000 Mbps. Most streamers stream at 3 to 6 Mbps. How is this possible?
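The arithmetic in the paragraph above works out like this:

```python
# Uncompressed 1080p60 at 24 bits per pixel, as in the text
bits_per_second = 24 * 1920 * 1080 * 60
print(bits_per_second / 1e9)           # ≈ 2.99 Gbps
print((bits_per_second / 1e9) / 0.006) # ≈ 500x a 6 Mbps stream
```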
What most streaming services such as Youtube, Twitch or Netflix do is use a set of techniques to send less information but still keep an acceptable quality. The main ones are described below.
Chroma Subsampling
Video uses the YUV color encoding system instead of RGB: Y stands for the luma component (the brightness) and U and V are the chrominance (color) components. Chroma subsampling is the practice of encoding images with less resolution for chroma information than for luma information, taking advantage of the human visual system's lower acuity for color differences than for luminance. The most common subsampling mode is YUV420, in which the Cb and Cr channels are sampled at half resolution both horizontally and vertically:
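To see why 4:2:0 subsampling matters, compare the raw bits per frame with and without it (this sketch assumes 8 bits per sample):

```python
def bits_per_frame(width, height, subsampled=True, bit_depth=8):
    luma = width * height
    # In YUV420 each chroma plane is sampled at half resolution both
    # horizontally and vertically, i.e. a quarter of the luma samples.
    chroma = 2 * (luma // 4 if subsampled else luma)
    return (luma + chroma) * bit_depth

full = bits_per_frame(1920, 1080, subsampled=False)  # 4:4:4, no subsampling
sub = bits_per_frame(1920, 1080, subsampled=True)    # 4:2:0
print(sub / full)  # 0.5 — half the raw data before the codec even runs
```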

A video codec is a compression methodology. The one used for streaming is H.264; for recording you can also use HEVC, or H.265. H.265 provides significantly better compression, but carries a license fee, so most companies don’t allow people to stream with it. However, you can upload videos recorded in H.265 to many sites such as Youtube.
Important - Don’t confuse a codec with an encoder. An encoder is a program or hardware device that transforms content into a codec’s format. For example, NvEnc is a hardware encoder that can encode to H.264 or HEVC/H.265; x264 is a software encoder for H.264.

Types of Frames: Key Frames, P-Frames, B-Frames
When you encode video, you know that the next image is going to be very similar to the one you just encoded. Not that many things change on the screen. Therefore, what you do is send certain frames completely (Key Frames), and send partial info for the others, thus reducing the bandwidth needed.
The main types are:
  • Key‑frames (or I-Frames) are full frames and take a lot of data. In OBS, you select how often they appear (in seconds). A higher number means fewer Key-frames, which leaves more bandwidth available for higher quality encoding. However, Key-frames help us recover quality after a high movement scene (you may notice the scene quality suddenly pop back), so we want a balance between the two.
  • B‑frames reference both previous and next frames and save the most data. They help maintain quality in high movement scenes as they can keep up better with content.
  • P‑frames reference the previous frame and save data. OBS does not let us select these because they are whatever frames are not Key-frames or B-frames.
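The relationship between the OBS keyframe interval (set in seconds) and the number of frames in each group between key-frames is simple multiplication, which is also where the `-g 120` value in the FFmpeg examples later in this guide comes from:

```python
def gop_length(keyframe_interval_s, fps):
    """Number of frames between key-frames (the GOP length)."""
    return keyframe_interval_s * fps

# A 2-second keyframe interval at 60 FPS means one full frame every
# 120 frames; everything in between is P- and B-frames.
print(gop_length(2, 60))  # 120
```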
You can see below a comparison of the data that the different types of frames use:

Canvas Size and Framerate
Less is more. At the end of the day, video encoding is about compressing a large file into a smaller one. If your original image has a smaller resolution, there is less to compress, so we don’t have to throw away as much data and can keep higher fidelity. Framerate works the same way: each extra frame is additional information to encode. Reducing it from 60 to 30 halves the bandwidth required to encode and allows us to keep higher fidelity. If you have to choose between the two, in our experiments viewers have a better experience with a higher and more stable framerate (i.e. 720p at 60 FPS is better than 1080p at 30 FPS).
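Pixels per second is a useful back-of-the-envelope measure of how much work the encoder has to do for a given resolution and framerate:

```python
def pixels_per_second(width, height, fps):
    return width * height * fps

p720_60 = pixels_per_second(1280, 720, 60)
p1080_30 = pixels_per_second(1920, 1080, 30)
# 720p60 actually pushes slightly fewer pixels per second than 1080p30,
# while keeping the smoother motion that viewers notice most.
print(p720_60, p1080_30)
```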

Streaming can be very complicated, but it’s particularly hard to debug. There are many things at play when you stream, so we are going to try to provide you some help on how to identify what is going wrong and how to fix it.
Streaming uses the following components:

  • Your PC: This includes hardware and software.
  • Local Internet: WiFi or cabled internet + your Router.
  • Your connection: To your service provider.
  • The platform: Twitch, Youtube, Mixer, etc.
  • Viewer’s Internet: Typically Wi-Fi, but can also be 3G/4G.
  • Viewer’s device: keep in mind 35% of Twitch viewers are on mobile.
If something is failing, we want to first identify what component may be failing, so we don’t go crazy trying to fix something that was never broken in the first place. Typically this means that the first test you should do is a Speed Test to make sure that you don’t have internet problems in your local internet or your connection. Second, make sure the platform hasn’t issued an alert that they are down or are experiencing problems. Then based on what error you get, you start looking at one thing or another in your PC.

Common Error Types
Image looks very washed out. Most likely you are trying to push too much quality through too little bitrate. Consider reducing the resolution and frame rate and try again. If quality improves, then adjust until you find your sweet spot.
Encoder Overloads. To identify this issue, open the Windows Task Manager, go to the Performance tab and click on your GPU. You will be able to see the load on each of the sections of the GPU. If the Video Encode section is above 90% you may be getting encoder overload issues. To fix it, we recommend trying each of these in order until your load is below 90%:

  • Disable 2 pass encoding in OBS, Output Tab.
  • Change the Encoder Preset to Default, and if it still overloads, to High Performance.
  • Reduce your output resolution.
Stream is missing FPS. For FPS issues, OBS includes an FPS counter at the bottom right of the program. If you have FPS issues make sure that both your content and OBS are running at or above your desired FPS. If your content is the problem, lower the game settings so you get more FPS. If OBS is losing FPS, try the encoder overloads solutions stated above. If neither of these are the issue, the problem is likely on the network.
Stream looks like a slideshow. This can happen when the encoder is not able to keep up. We have observed this behavior when trying to push 4K content under a high workload. Closing applications and reducing the overall workload on your system should help fix it, but you may also want to try the Encoder Overloads solutions.

For those of you that want to benchmark NvEnc and other encoders, we added some information on how we recommend testing encoders.
A meaningful video encoder comparison requires:
  • Proper methodology
  • Correct encoder settings
  • Representative set of test video sequences
In order to compare different video encoder implementations, it is essential to follow a proper process. The most common mistake is to ignore the actual bitrate produced by the encoder. Consider the following example where Encoder 1 and Encoder 2 both encode the same video at a specified bitrate of 5 Mbps. The PSNR for Encoder 1 is 35.5 dB and for Encoder 2 it is 35.2 dB. Encoder 1 appears to produce higher quality. But when looking at the file size, it turns out the bitrate for Encoder 1 was 5.2 Mbps and for Encoder 2 it was 4.9 Mbps. So while Encoder 1 produced higher quality, it also required more bits. Because the bitrates are not identical, the quality cannot be fairly compared based on a single measurement.
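The first sanity check is always to measure the bitrate the encoder actually produced, from the output file size and duration. The file sizes below are hypothetical, chosen to reproduce the 5.2/4.9 Mbps example in the text:

```python
def actual_bitrate_mbps(file_size_bytes, duration_s):
    """Average bitrate actually produced, regardless of what was requested."""
    return file_size_bytes * 8 / duration_s / 1e6

# Same clip, 5 Mbps requested from both encoders, different output sizes
enc1 = actual_bitrate_mbps(39_000_000, 60)  # 5.2 Mbps
enc2 = actual_bitrate_mbps(36_750_000, 60)  # 4.9 Mbps
print(enc1, enc2)
```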
The recommended methodology that is widely accepted by video compression experts is to use the Bjontegaard Metric. A sample implementation using Excel VBA code is available on GitHub here. Note that this sample implementation is for reference only. It is neither maintained nor officially endorsed by NVIDIA.
The Bjontegaard Metric works with any quality metric (PSNR, SSIM, VMAF, …) but for simplicity let us focus on PSNR. The basic idea is to encode each test video at different bitrates and measure PSNR (distortion D) and output bitrate (rate R). The resulting data points are then interpolated to produce a so-called RD-curve with rate R on the x-axis and distortion D on the y-axis:

The Bjontegaard Metric computes the average distance between these two RD-curves. The result is the average gain in PSNR (or whatever metric is used) over the tested bitrate range. The Bjontegaard Metric also computes the average percentage of bitrate savings at iso-quality. Note in the plot above that, thanks to the interpolation, it is no longer necessary for the two encoders to produce identical output bitrates.
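The BD-PSNR side of the metric can be sketched in a few lines: fit a polynomial to each encoder's (log-rate, PSNR) points and average the gap between the curves over the overlapping rate range. This is a simplified illustration, not the reference implementation linked above, and the RD points are made up:

```python
import numpy as np

def bd_psnr(rates1, psnr1, rates2, psnr2):
    """Average PSNR gain of encoder 2 over encoder 1 (Bjontegaard delta-PSNR)."""
    lr1, lr2 = np.log10(rates1), np.log10(rates2)
    # Cubic fit of PSNR as a function of log-bitrate for each encoder
    p1 = np.polyfit(lr1, psnr1, 3)
    p2 = np.polyfit(lr2, psnr2, 3)
    # Integrate both curves over the overlapping log-rate interval
    lo, hi = max(lr1.min(), lr2.min()), min(lr1.max(), lr2.max())
    int1 = np.polyval(np.polyint(p1), hi) - np.polyval(np.polyint(p1), lo)
    int2 = np.polyval(np.polyint(p2), hi) - np.polyval(np.polyint(p2), lo)
    return (int2 - int1) / (hi - lo)

# Illustrative RD points (bitrate in kbps, PSNR in dB), not real measurements
r1 = np.array([1000, 2000, 4000, 8000]); q1 = np.array([33.0, 35.0, 37.0, 39.0])
r2 = np.array([1000, 2000, 4000, 8000]); q2 = np.array([34.0, 36.0, 38.0, 40.0])
print(bd_psnr(r1, q1, r2, q2))  # ≈ 1.0 dB gain for encoder 2
```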
In addition to using the right methodology, it is also important to use the correct encoder settings. These will be different depending on the use case. For instance, when recording a video to file, larger bitrate variations can be tolerated compared to live streaming over the Internet. Great care must be taken when relying on an encoder’s default settings. Encoder 1 may use an infinite VBV buffer size by default, while Encoder 2 may use 2x the specified bitrate. In this case Encoder 1 would have a significant, unfair advantage.
For live streaming, the following parameters may be used with FFmpeg:
  1. NVENC: -i input.yuv -c:v h264_nvenc -preset medium -profile:v high -r 60 -g 120 -bf 2 -b:v <bitrate> -maxrate <bitrate> -bufsize <bitrate*2> -rc cbr_hq -rc_lookahead 8 -temporal-aq 1 -b_ref_mode 2
  2. x264: -i input.yuv -c:v libx264 -preset fast -tune psnr -r 60 -g 120 -bf 2 -b:v <bitrate> -maxrate <bitrate> -bufsize <bitrate*2> -rc cbr
Finally, it is important to measure quality over a set of representative test videos. Some video encoders perform well on some video sequences (e.g. low motion), while performing worse on others. Proper conclusions are not possible based on one video sequence only.