OBS shows 1080i60 video but records alternate frames duplicated

DayGeckoArt

Member
I've been trying to record component 1080i60 video and wondering why the recording looks so much worse than it does on the screen. It turns out my Blackmagic Intensity Pro 4K is giving 1080i60 video to OBS, and OBS is displaying it properly as 60 frames per second with each interlaced "half" shown on the whole screen. But the recording is missing half the frames: instead I get one half of the interlaced video, duplicated for two frames.

I discovered this by recording my screen showing OBS with my Lumix G85 camera set to 1080p60 (direct to SD card) and there are definitely 60 frames per second being displayed.

I also tried plugging my G85 into the HDMI port on the Blackmagic. It outputs 1080p60. This works fine! The issue seems to be specific to 1080i60.

And to confirm it, I got the same behavior with an Elgato Camlink on two separate computers, my gaming PC and my laptop! 1080i60 gets half frames dropped, 1080p60 works fine.

I uploaded samples to Youtube:
https://youtu.be/HJpRwNLI0mA
https://youtu.be/mIsl-wiG8_o
 

Attachments

  • 2022-01-17 13-10-13.txt
    13 KB · Views: 23

DayGeckoArt

Member
New things I've learned:

1080i60 capture with Blackmagic Media Express is possible, but only as QuickTime Uncompressed 8-bit YUV. A 5-minute video comes out to 40 gigabytes! So this is a semi-solution, at least for those of us with Blackmagic capture devices.

OBS can de-interlace the video and record as 1080p30. This is another semi-solution because it's not the original data being recorded. From what I gather, the different methods available in OBS have different advantages for different types of content.

If there were a deinterlacing method that simply recorded a full 1920x1080 frame for every field, with the lines doubled vertically, that would be great.
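That line-doubling idea is easy to sketch with NumPy (purely illustrative, no OBS or capture-card API involved): every incoming 540-line field becomes a full-height frame by repeating each line, so all 60 fields per second survive as 60 frames.

```python
# Hypothetical line-doubling "deinterlace": each 540-line field is expanded
# to 1080 lines by repeating every line vertically, so no field is discarded.
# Illustrative NumPy only -- not OBS or Blackmagic code.
import numpy as np

def line_double(field: np.ndarray) -> np.ndarray:
    """Turn an h-line field into a 2h-line frame by doubling each line."""
    return np.repeat(field, 2, axis=0)

field = np.array([[1, 1], [2, 2]])   # tiny stand-in for a 540x1920 field
print(line_double(field))            # [[1,1],[1,1],[2,2],[2,2]]
```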
 

DayGeckoArt

Member
I got a good deinterlaced recording at high bitrate x264 so that's great.

But I'd also like to try recording the raw HDMI stream. Does anybody know if that's what happens when you set the output format to raw video? That gives you a .yuv file, but I don't know if it's literally just the data OBS gets from the capture card. If so, it would at least be a good way to capture the best data possible and then deinterlace/compress it later.
 

DayGeckoArt

Member
I've been trying to figure out which deinterlacing method is best, and I found this document for VLC that describes the same methods listed in OBS. As far as I can tell, none of them does true deinterlacing: https://wiki.videolan.org/Deinterlacing/

They all interpolate data that doesn't need to be interpolated, based on the assumption that the source is real 1080i60. For content that was originally 1080p30 and converted to interlaced, I can't find any method that simply combines the lines from two fields to produce a single progressive frame. Even if I record the interlaced video with Blackmagic Media Express, I can't find any software that will simply combine the lines to recover the original progressive video. As far as I can tell, even FFmpeg doesn't have a deinterlacing method that works this way.

Am I missing something?
 

Suslik V

Active Member
Deinterlacing is far more complex than combining two images. The second half of the picture (the second field) represents the next moment in time. So averaging the fields mixes pictures from the "future" and the "present", and the result looks ugly on fast-moving objects. You first need to learn how the interlacing was done...
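A tiny numeric sketch of that mixing (illustrative NumPy, nothing to do with OBS internals): average two fields captured 1/60 s apart, and a bright moving object shows up at half strength in both positions.

```python
# Blend-style deinterlacing averages two fields that were captured 1/60 s
# apart. For an object that moved between the fields, the average contains
# it at half brightness in BOTH positions -- the ghosting described above.
# Illustrative values only.
import numpy as np

field_t0 = np.zeros(8); field_t0[2] = 255.0   # object at position 2 at t = 0
field_t1 = np.zeros(8); field_t1[5] = 255.0   # object at position 5, 1/60 s later

blended = (field_t0 + field_t1) / 2
print(blended)   # 127.5 at positions 2 AND 5: two moments mixed into one frame
```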
 

Suslik V

Active Member
Some thoughts (travel back in time).

Long ago, TV sets were CRT-based, and they used interlacing to display pictures. On an old CRT, while the 1st field of the current frame is being drawn, the 2nd field of the previous frame is still visible (phosphor glow plus the eye's persistence of vision). Only then is the old 2nd field "replaced" by the current one. So the image is always smoothed over time: from the viewer's perspective, each field persists for the duration of both fields (otherwise the image would flicker). On an interlaced CRT you always "see" both fields, in the order 1+2+1+2+1...

A kind of "temporal smoothing" filter is also needed to emulate the phosphor well (this is sometimes done for expensive measurement equipment such as oscilloscopes). For example, the afterglow of a TV screen in a dark room stayed visible for some time.

Back to the future.

Interlacing can be adaptive (for the best viewing experience): the algorithm can change on a scene change, so the two fields are "built" (rendered) from the source. True restoration of the progressive source from interlaced data is therefore difficult. For a still image it doesn't matter, but moving pictures are a different thing.
 

DayGeckoArt

Member

The problem is that OBS has no way to record interlaced video, and it also has no true deinterlacing. The "deinterlacing" options available all simply throw away half the frames and interpolate the missing lines.

A simple weave option would work for most of what I'm recording, which is 30p video played back on a cable box that turns everything into 1080i. More advanced options would be needed for real 1080i60 content.
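For the 30p-carried-in-1080i case, a plain weave really would be lossless, which is easy to demonstrate (illustrative NumPy sketch, hypothetical shapes): split a progressive frame into its two fields, weave the matching pair back together, and you get the original back exactly.

```python
# For 30p content carried in 60i, interlacing just splits each progressive
# frame into an even-line field and an odd-line field. Weaving the matching
# pair back together is bit-for-bit lossless. Illustrative NumPy only.
import numpy as np

rng = np.random.default_rng(0)
progressive = rng.integers(0, 256, size=(1080, 1920), dtype=np.uint8)

top = progressive[0::2]       # even lines -> 540-line top field
bottom = progressive[1::2]    # odd lines  -> 540-line bottom field

rebuilt = np.empty_like(progressive)
rebuilt[0::2] = top           # weave the fields back together
rebuilt[1::2] = bottom
print(np.array_equal(progressive, rebuilt))   # True: nothing was lost
```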
 

Suslik V

Active Member
The "deinterlacing" options available all simply throw away half the frames...

Only the simplest of them work this way. With "Blend" (and on down the list to "Yadif"), info from both fields is used to generate the progressive frames.


A simple weave option would work...

You are talking about the "Blend" method among OBS's deinterlacing algorithms, but in most cases "Yadif" gives better results. If you are unsatisfied with the final result, then you need other solutions (not OBS).
 

DayGeckoArt

Member

I learned from another forum that yadif 2x is probably the best solution because it doesn't throw away half the fields like 1x does. You get 59.94 fps, and each output frame keeps one field's lines from the original stream while the other half are interpolated, so all of the original data is there.
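As I understand it, the "2x" idea boils down to this (a simplified bob-style sketch, not the real yadif algorithm, which interpolates adaptively in space and time): each field becomes its own full-height output frame with the missing lines filled in, so 29.97 interlaced frames/s become 59.94 progressive frames/s.

```python
# Simplified bob-style "2x" deinterlace of a top field: keep the field's
# lines and linearly interpolate the missing ones. Real yadif 2x uses an
# adaptive spatial/temporal interpolation instead of this plain average.
import numpy as np

def bob_top_field(field: np.ndarray) -> np.ndarray:
    """Expand an h-line top field into a 2h-line progressive frame."""
    h, w = field.shape
    frame = np.empty((h * 2, w), dtype=float)
    frame[0::2] = field                            # keep the real lines
    frame[1:-1:2] = (field[:-1] + field[1:]) / 2   # interpolate between them
    frame[-1] = field[-1]                          # bottom edge: repeat last line
    return frame

field = np.array([[0.0, 0.0], [10.0, 10.0]])       # tiny stand-in field
print(bob_top_field(field))   # lines: 0, 5 (interpolated), 10, 10 (edge)
```

Since every field yields one output frame, the frame rate doubles and none of the captured lines are discarded.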
 