[mythtv] Deinterlacer settings

Paul Gardiner lists at glidos.net
Sun Feb 17 21:23:03 UTC 2019


On 16 February 2019 10:26:18 GMT+00:00, Mark Kendall <mark.kendall at gmail.com> wrote:
>I'd like to propose a change to how deinterlacers are configured as
>the current setup has become unwieldy to the extent that it is almost
>unmaintainable.
>
>Proposal:
>
>- remove existing specific, named deinterlacer selections for main and
>fallback.
>
>- replace with something along the lines of:
>- new settings:
>    "Deinterlacer quality (normal)": None, Fast, Medium, High, Advanced
>        "Prefer GPU deinterlacers": Yes/No
>        "Prefer GPU driver deinterlacers": Yes/No
>    "Deinterlacer quality (double rate)": None, Fast, Medium, High, Advanced
>        "Prefer GPU deinterlacers": Yes/No
>        "Prefer GPU driver deinterlacers": Yes/No
>
>I believe that gives the same flexibility as the current settings
>without tying the code to specific named deinterlacers. Remember these
>are per-profile settings.
>
>Selecting 'Prefer GPU deinterlacers' would obviously select shader
>based deints over CPU based and 'GPU driver deinterlacers' would
>prefer VDPAU/VAAPI etc over all others. The code can then make some
>informed decisions on what deinterlacer to use and at what stage in
>the decode to presentation process, as well as falling back where
>needed.
>
>With this setup, under the hood, the deinterlacer selections for Fast,
>Medium, High and Advanced would look something like:
>
>CPU - Onefield, Linear blend, Kernel, Yadif
>OpenGL/D3D - Onefield, Linear blend, Kernel, new motion adaptive shader
>VAAPI - Bob, Weave, Motion adaptive, Motion compensated
>VDPAU - Bob, Temporal, Spatial, ???
>OpenMax - Linedouble, Fast, Advanced, ??
>
>Where onefield and bob are interchangeable as 1x and 2x versions.
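A minimal sketch of how the quality-to-deinterlacer mapping above could look in code (all enum, type and string names here are illustrative, not actual MythTV identifiers; only two backends are filled in, and the fallback rule is one plausible choice):

```cpp
#include <cassert>
#include <map>
#include <string>

// Illustrative quality levels matching the proposed setting values.
enum class DeintQuality { None = 0, Fast, Medium, High, Advanced };

enum class Backend { CPU, OpenGL, VAAPI, VDPAU };

// Per-backend mapping of quality level to a concrete deinterlacer,
// following the table in the mail ("???" entries simply omitted).
static const std::map<Backend, std::map<DeintQuality, std::string>> kDeints = {
    { Backend::CPU,   { { DeintQuality::Fast,     "onefield" },
                        { DeintQuality::Medium,   "linearblend" },
                        { DeintQuality::High,     "kernel" },
                        { DeintQuality::Advanced, "yadif" } } },
    { Backend::VAAPI, { { DeintQuality::Fast,     "bob" },
                        { DeintQuality::Medium,   "weave" },
                        { DeintQuality::High,     "motion_adaptive" },
                        { DeintQuality::Advanced, "motion_compensated" } } },
};

// Resolve a quality level, falling back to the next lower level when a
// backend has no deinterlacer at the requested quality.
std::string SelectDeint(Backend backend, DeintQuality quality)
{
    const auto& table = kDeints.at(backend);
    for (int q = static_cast<int>(quality); q > 0; --q)
    {
        auto it = table.find(static_cast<DeintQuality>(q));
        if (it != table.end())
            return it->second;
    }
    return std::string(); // DeintQuality::None: no deinterlacing
}
```

The point is that the code, not the settings database, owns the concrete names, so a backend can gain or lose a deinterlacer without invalidating stored profiles.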
>
>Background
>
>We now have a range of deinterlacer options, not just in terms of
>CPU/GPU/driver based but also in terms of at what stage deinterlacing
>occurs.
>
>As an example, with VAAPI decode only (VAAPI2), deinterlacing could
>occur in the decoder using either CPU or VAAPI based deinterlacers or
>at playback using the CPU or GLSL shaders (you could even use VAAPI
>again at this stage). With the new VAAPI zero copy code in the render
>branch, the current setup cannot cope with a VAAPI profile that
>expects to use VAAPI deinterlacer names but OpenGLVideo is actually
>presented with raw video frames that (currently) need to pass through
>the GLSL shaders.
>
>The current code is inflexible - especially with its use of strings to
>explicitly identify each deinterlacer - and has started to break. A
>much simpler and more flexible approach would be to use a simple
>flag/enum that encapsulates the user preferences.
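One way the flag/enum encapsulating the preferences might be sketched (flag and function names are hypothetical, and the exact fallback ordering is one possible interpretation of "Prefer GPU" / "Prefer GPU driver"):

```cpp
#include <cassert>
#include <string>
#include <vector>

// Hypothetical preference flags replacing named-deinterlacer strings.
enum DeintPrefs : unsigned
{
    kPrefNone      = 0,
    kPrefGPU       = 1u << 0, // prefer shader-based deinterlacers
    kPrefGPUDriver = 1u << 1, // prefer VDPAU/VAAPI etc over all others
};

// Order the deinterlacer categories so the player can try each in turn
// and fall back where a category has nothing suitable.
std::vector<std::string> CategoryOrder(unsigned prefs)
{
    std::vector<std::string> order;
    if (prefs & kPrefGPUDriver)    order.push_back("driver");
    if (prefs & kPrefGPU)          order.push_back("shader");
    order.push_back("cpu");
    if (!(prefs & kPrefGPU))       order.push_back("shader");
    if (!(prefs & kPrefGPUDriver)) order.push_back("driver");
    return order;
}
```

With this shape, the VAAPI zero-copy case described above just means the "driver" category yields nothing and selection falls through to the shader category, instead of a stored deinterlacer name failing to match.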
>
>Thoughts welcome!
>
>Regards
>Mark
>_______________________________________________
>mythtv-dev mailing list
>mythtv-dev at mythtv.org
>http://lists.mythtv.org/mailman/listinfo/mythtv-dev
>http://wiki.mythtv.org/Mailing_List_etiquette
>MythTV Forums: https://forum.mythtv.org

Can you say more about the ways in which the current system is inflexible and breaking?

It seems to me a disadvantage of what you propose is that I'd have to try 20 different settings (or at least 17) to explore all the possibilities. Also, some changes in settings would not change which deinterlacer was selected, whereas other setting changes would select subtly different deinterlacers. That's unhelpful when swapping back and forth between settings while trying to decide which you prefer. Lastly, there is at least one deinterlacer that doesn't fall into those categories: one whose purpose is to drive a TV's own deinterlacer, required only for synchronisation (perhaps rarely used these days).
