How do I make OBS only record frames it receives?

Yamakuzure

New Member
Hello everybody!

First of all, thanks for this wonderful software. I previously recorded using Nvidia RTX Experience (formerly "ShadowPlay"), and was fed up with it maxing out at around 51 Mbit/s and only supporting YUV420 with reduced color range.
So a friend of mine suggested I try out FRAPS and OBS, and the latter just made me fall in love with it on first sight. ;-)
Now I can record in YUV444, full range. 8^D

Although I am posting here, I have a dual-boot machine and do some recording on Linux (Gentoo) as well, now also with OBS. Wonderful experience!

But I have one problem, and it is more prominent under Windows, so I am asking for some hints here.

Sometimes my in-game FPS drops below 60 in very intense scenes. Normally it remains above 50, but can (very rarely, though) drop to 46-48.
It is only for a moment and not a big deal. At least it never used to be.

So, to edit my clips into final videos, I use Shotcut. The first thing I do after adding all needed clips to my playlist is to convert them to MKV/UTVideo to make editing as reliable as possible. While doing this, I usually set the output to a fixed 60 FPS, using blend mode to create the missing frames.

However, OBS already substitutes missing frames by duplicating the previous frame, and that is a catastrophe for me, as it creates stutter where there does not need to be any. Unfortunately, it is next to impossible to remove the duplicates afterwards; at least I do not know of any feasible way to do it.

So my question is: Is there a way to tell OBS to *NOT* care about missing frames and just put onto disk what it gets? Like a hidden option or ini parameter or something like that?

Thank you very much in advance!
 

koala

Active Member
To detect and remove duplicated frames from a video, see this stackoverflow answer:

With OBS, this isn't possible. OBS decouples input fps from output fps while compositing. Whenever a frame needs to be output according to the output fps, every source is read for its current frame. If that frame hasn't been changed since the previous frame, it's simply re-used.
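
The gist of the approach there, as a rough sketch (the exact log format may differ between ffmpeg versions): run mpdecimate in analysis mode against the null muxer and count the "drop" decisions in the debug log:

Bash:
# count frames mpdecimate would drop as duplicates (no file is written)
ffmpeg -i "<INPUT_FILE>" -vf mpdecimate -loglevel debug -f null - 2>&1 \
    | grep -c 'drop pts:'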
 

Yamakuzure

New Member
Thanks again, koala, that link was gold!

After many experiments and some research, I came up with the following command:

Bash:
# mpdecimate drops frames that are (nearly) identical to the previous one;
# together with -vsync vfr the output keeps the resulting variable frame
# timing instead of being padded back to a constant rate.
ffmpeg -threads 1       \
    -i "<INPUT_FILE>"   \
    -vf mpdecimate,scale=in_range=full:out_range=full \
    -vsync vfr          \
    -c:a aac            \
    -c:v h264_nvenc     \
    -preset slow        \
    -profile:v high444p \
    -coder cavlc        \
    -b:v 0              \
    -color_range 2      \
    -rc constqp -qp 12  \
    -qmin 1 -qmax 16    \
    -rc-lookahead 32    \
    "<OUTPUT_FILE>"

It does the job very well. When I take the output and convert it in Shotcut, adding blend frames, I can see by single-stepping that the duplicated frames from the source are gone and have been replaced by blended-over frames.
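
For anyone else checking this: instead of single-stepping in an editor, comparing the decoded frame counts of the source and the converted file with ffprobe should show the dropped duplicates directly (just a sketch; <FILE> is a placeholder, run it once per file and compare):

Bash:
# count the actually decoded video frames of a file
ffprobe -v error -count_frames -select_streams v:0 \
    -show_entries stream=nb_read_frames \
    -of default=nokey=1:noprint_wrappers=1 "<FILE>"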

However, I am not totally happy with my command line above, as it seems I have not quite fully understood all the options.
Although I have set -rc constqp -qp 12, the output always reports the encoding as:
Code:
Stream #0:0: Video: h264 (High 4:4:4 Predictive) (H264 / 0x34363248), yuv444p(pc, bt709, progressive), 1920x1080 [SAR 1:1 DAR 16:9], q=2-31, 60 fps, 1k tbn
That looks almost like it should, but (maybe I misunderstood something) shouldn't it be q=12-12?
Adding -qmin 1 -qmax 16 didn't change anything. Well, those are for vbr_hq if I am not much mistaken, so their being ignored in constqp mode is kind of expected. ;-)
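
In case anyone else worries about that q=2-31 line: a crude sanity check (just a sketch, the output names are made up) is to encode the same clip at several -qp values and compare file sizes; if constqp is honored, the sizes should differ clearly:

Bash:
# if -qp is in effect, higher values must yield noticeably smaller files
for qp in 12 20 30; do
    ffmpeg -i "<INPUT_FILE>" -an -c:v h264_nvenc -rc constqp -qp "$qp" \
        "test_qp${qp}.mkv"
done
ls -l test_qp*.mkv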

...maybe I am just worrying too much...
 

koala

Active Member
I'm sorry, but I cannot help further, since creating custom ffmpeg command lines takes a lot of work: tinkering, research, trial and error.

The only thing I can comment on is: where in this command line is the interpolation supposed to take place? As far as I can see, you're using exactly one filter to manipulate the video stream (mpdecimate), and everything else is just configuring color space and encoder options.
You're using neither the minterpolate nor the framerate filter, so you're probably not interpolating at all - duplicated frames are simply removed by mpdecimate in combination with -vsync vfr.
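
Just to illustrate the difference (a minimal sketch, not tested against your material): the framerate filter converts to a constant output rate by blending neighbouring frames, which is roughly what Shotcut's blend mode does:

Bash:
# blend-based conversion to a constant 60 fps via the framerate filter
ffmpeg -i "<INPUT_FILE>" -vf framerate=fps=60 -c:v h264_nvenc "<OUTPUT_FILE>"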
 

Yamakuzure

New Member
You are right, this step only cleans up the video and puts it into a Matroska container. The interpolation is done in a second step, converting to utvideo (lossless):
Bash:
# second pass: motion-compensated interpolation (minterpolate) fills the
# gaps left by the decimation step and outputs lossless utvideo at 60 fps
/usr/bin/ffmpeg -threads 1      \
    -i "<INPUT_FILE>"           \
    -max_muxing_queue_size 9999 \
    -map 0:V? -map 0:a? \
    -map_metadata 0     \
    -ignore_unknown     \
    -vf scale=flags=accurate_rnd+full_chroma_inp+full_chroma_int:in_range=full:out_range=full,minterpolate='mi_mode=mci:mc_mode=aobmc:me_mode=bidir:vsbmc=1:fps=60' \
    -color_range jpeg  \
    -f matroska        \
    -codec:a pcm_f32le \
    -codec:v utvideo   \
    -pix_fmt yuv444p   \
    -y "<OUTPUT_FILE>"
I know I can combine the filters by simply specifying each -vf after one another. But before I try that, I'd like to first test each conversion separately.

So far it works pretty well. Testing a one-step-conversion is next on my todo list.

Interestingly, my last test issued the warning "100 buffers queued in out_0_0, something may be wrong.", but from all I could find on the internet, this is nothing to worry about as long as the video ends up being correct and healthy.
 

Yamakuzure

New Member
I know I can combine the filters by simply specifying each -vf after one another.

That was actually incorrect. I read a bit about chaining distinct filters, and the correct call would be:
Code:
-vf [in]mpdecimate,scale=in_range=full:out_range=full[middle];[middle]scale=flags=accurate_rnd+full_chroma_inp+full_chroma_int:in_range=full:out_range=full,minterpolate='mi_mode=mci:mc_mode=aobmc:me_mode=bidir:vsbmc=1:fps=60'[out]
Wow... What a monster!
I'll try that next and see if the result is identical to what my 2-step-conversion achieved.
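
From what I read, the labels are not even required for a single linear chain like this; if I understood the filtergraph syntax correctly, a plain comma-separated chain should be equivalent:
Code:
-vf mpdecimate,scale=in_range=full:out_range=full,scale=flags=accurate_rnd+full_chroma_inp+full_chroma_int:in_range=full:out_range=full,minterpolate='mi_mode=mci:mc_mode=aobmc:me_mode=bidir:vsbmc=1:fps=60'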
 

Yamakuzure

New Member
Okay, the 2-step conversion is different from the one-step conversion. The latter, using the monster filter line above, is actually more exact.

On several occasions where the source video has duplicated frames, the first frame and the next regular frame are exactly the same in the source and in the one-step-converted file, with the latter having a new frame in between that looks just as if it had been recorded that way.

Although the 2-step-converted file also has the duplicates removed and substituted, its frames are a bit off compared to the source.

So my final conclusion is that if you can chain filters, do it. Don't store intermediate files; just let ffmpeg handle everything in one go.
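
For reference, a condensed sketch of the one-step conversion as I run it now (placeholders as before, options taken from my two commands above):

Bash:
/usr/bin/ffmpeg -threads 1      \
    -i "<INPUT_FILE>"           \
    -max_muxing_queue_size 9999 \
    -map 0:V? -map 0:a? \
    -map_metadata 0     \
    -ignore_unknown     \
    -vf mpdecimate,scale=in_range=full:out_range=full,scale=flags=accurate_rnd+full_chroma_inp+full_chroma_int:in_range=full:out_range=full,minterpolate='mi_mode=mci:mc_mode=aobmc:me_mode=bidir:vsbmc=1:fps=60' \
    -color_range jpeg  \
    -f matroska        \
    -codec:a pcm_f32le \
    -codec:v utvideo   \
    -pix_fmt yuv444p   \
    -y "<OUTPUT_FILE>"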
 