[mythtv-users] Deinterlacing quality - where's the difference?

Jim Abernathy jfabernathy at gmail.com
Thu Nov 19 11:14:29 UTC 2020


On 11/19/20 5:01 AM, Mark Kendall wrote:
> On Wed, 18 Nov 2020 at 13:31, James Abernathy <jfabernathy at gmail.com> wrote:
>> I've been chasing a jitter problem that, while minor, is very annoying to me.
>>
>> My last test where I had the jitter was using the NVDEC profile, all hardware decode and deinterlacing, and High quality set for both single- and double-rate deinterlacing.
>>
>> Still had a small but noticeable amount of jitter. So I changed the deinterlacing qualities to Medium and the jitter went away.
> I meant to follow up on that - glad you resolved it.
>
>>   There was no noticeable difference to me in the video except that the jitter is now gone. BTW, the hardware is a Ryzen 5 3600, a B550M motherboard, and a fanless Nvidia GT 1030 video card. OS is Linux Mint 20, MythTV v0.31-118.
>>
>> So that brings up the question, what quality changes when you change deinterlacing quality from high to medium to low?
> With deinterlacing, there is traditionally a trade-off between quality
> and performance - i.e. the better deinterlacers will tax your hardware
> more. There is also no magic bullet when deinterlacing (more so with
> the lower quality ones) - some material looks better with certain
> implementations (and quality is obviously subjective).
>
> A lot of the hardware deinterlacers (VDPAU, NVDEC, VAAPI etc) are a
> closed book - we don't really know what they are doing inside, though
> we have a reasonable idea.
>
> With reference to our own software and OpenGL deinterlacers, from
> lower quality up (and hence increasing complexity):-
>
> - the most basic deinterlacer is onefield/bob - which just takes the
> current field and scales it to the full frame height. Simple and
> effective (but introduces 'bobbing').
> - linear blend - there are different approaches, but ours uses the
> current field and a linear interpolation of the lines above and below
> for the 'non-current' field. Again fairly simple and cheap but less
> bobbing.
>
> - Note: both onefield and linear blend purely use the current frame.
>
> - yadif (software) and kernel (OpenGL) both blend data from multiple
> fields and lines to fill in the missing 'non-current' lines - more
> computationally and memory intensive. [A rough sketch of all three
> approaches follows below.]
>
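A minimal sketch of the three field-based approaches above, assuming a bare
8-bit luma plane; the Plane struct, function names and indexing are invented
for this example - it is not MythTV's actual code, which operates on full
video frames and, for the OpenGL variants, in shaders:

    #include <algorithm>
    #include <cstdint>
    #include <vector>

    // One 8-bit luma plane; 'top' selects whether the even (top) or odd
    // (bottom) lines form the current field.
    struct Plane { int width = 0; int height = 0; std::vector<uint8_t> data; };

    static bool IsCurrentLine(int y, bool top) { return ((y & 1) == 0) == top; }

    // onefield/bob: keep the current field's lines and fill each missing
    // line by repeating the nearest current-field line (a real bob also
    // corrects the half-line vertical offset, omitted here).
    void Bob(const Plane& in, Plane& out, bool top)
    {
        for (int y = 0; y < in.height; ++y)
        {
            int src = IsCurrentLine(y, top)
                ? y
                : std::clamp(top ? y - 1 : y + 1, 0, in.height - 1);
            std::copy_n(&in.data[src * in.width], in.width,
                        &out.data[y * in.width]);
        }
    }

    // linear blend: keep current-field lines; each missing line becomes the
    // average of the current-field lines directly above and below it.
    void LinearBlend(const Plane& in, Plane& out, bool top)
    {
        for (int y = 0; y < in.height; ++y)
        {
            uint8_t* dst = &out.data[y * in.width];
            if (IsCurrentLine(y, top))
            {
                std::copy_n(&in.data[y * in.width], in.width, dst);
                continue;
            }
            int yAbove = std::max(y - 1, 0);
            int yBelow = std::min(y + 1, in.height - 1);
            const uint8_t* above = &in.data[yAbove * in.width];
            const uint8_t* below = &in.data[yBelow * in.width];
            for (int x = 0; x < in.width; ++x)
                dst[x] = static_cast<uint8_t>((above[x] + below[x] + 1) / 2);
        }
    }

    // Very simplified multi-field blend in the spirit of kernel/yadif: each
    // missing line mixes a spatial estimate (lines above/below in this
    // frame) with a temporal one (the same line in the previous frame).
    // Real yadif adds edge-directed interpolation and temporal clamping.
    void MultiFieldBlend(const Plane& prev, const Plane& in, Plane& out, bool top)
    {
        for (int y = 0; y < in.height; ++y)
        {
            uint8_t* dst = &out.data[y * in.width];
            if (IsCurrentLine(y, top))
            {
                std::copy_n(&in.data[y * in.width], in.width, dst);
                continue;
            }
            int yAbove = std::max(y - 1, 0);
            int yBelow = std::min(y + 1, in.height - 1);
            const uint8_t* above    = &in.data[yAbove * in.width];
            const uint8_t* below    = &in.data[yBelow * in.width];
            const uint8_t* temporal = &prev.data[y * in.width];
            for (int x = 0; x < in.width; ++x)
            {
                int spatial = (above[x] + below[x] + 1) / 2;
                dst[x] = static_cast<uint8_t>((spatial + temporal[x] + 1) / 2);
            }
        }
    }

Of the three, only MultiFieldBlend has to keep a previous frame around,
which is where the extra memory and bandwidth cost comes from.
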
> The above methods always deinterlace every pixel of every frame (or at
> least every non-current field) and as a result there is some loss in
> picture quality in static areas - the more advanced hardware
> deinterlacers ('motion adaptive') will decide which areas of the
> screen are static - and won't deinterlace them.  The best
> deinterlacers will also estimate the direction of motion ('motion
> compensated') to guide the code to pick the best pixels to interpolate
> from. I've long toyed with adding a software version of the latter as
> we can ask FFmpeg to export motion vectors when decoding - just not
> sure it is worth the effort:)
>
> Regards
> Mark
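
To illustrate the 'motion adaptive' idea described above (the hardware
implementations are, as noted, a closed book - this is just the general
technique, with invented names and an arbitrary threshold): for each pixel
on a missing line, measure how much it changed between neighbouring frames;
if it is static, 'weave' (keep the original interlaced value and its full
vertical detail), otherwise fall back to interpolating.

    #include <cstdint>
    #include <cstdlib>

    // prevVal/nextVal: the same pixel position in the previous/next frames
    // weaveVal:        this pixel from the other field of the current frame
    // above/below:     current-field neighbours used for interpolation
    uint8_t MotionAdaptivePixel(uint8_t prevVal, uint8_t nextVal,
                                uint8_t weaveVal,
                                uint8_t above, uint8_t below,
                                int threshold = 10)
    {
        int motion = std::abs(static_cast<int>(nextVal) -
                              static_cast<int>(prevVal));
        if (motion < threshold)
            return weaveVal;                                    // static: keep detail
        return static_cast<uint8_t>((above + below + 1) / 2);   // moving: interpolate
    }

A motion-compensated deinterlacer goes one step further: instead of only
asking whether a pixel moved, it estimates where it moved from and to (for
example using the motion vectors FFmpeg can export, as mentioned above) and
interpolates along that direction.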


Thanks for the explanation.  Whatever NVDEC is doing with Medium quality 
de-interlacing, I find acceptable.

Jim A



