[mythtv] fixes/31 branch created!

Mark Kendall mark.kendall at gmail.com
Tue Feb 11 19:30:29 UTC 2020


On Tue, 11 Feb 2020 at 15:31, Mike Bibbings <mike.bibbings at gmail.com> wrote:
> I would like to run tests on Raspberry Pi 2/3 and 4.
>
> I assume the local build process is the same as for Ubuntu/Debian,
> i.e. install the build dependencies using Ansible and use ./configure
> (no additional options) for mythtv.
>
> At present on a Pi 4 (4GB) with a default install of Raspbian Buster,
> the best performance I can get is with V4L2 Codecs, but even this has
> issues, particularly with HD (UK DVB-T2) over HDMI at 1920x1080 @ 50Hz.
>
> Video Playback Profile defaults to MMAL.
>
> I may be missing some necessary settings; there are so many possible
> options, so I would like to use the same settings as used during
> development. Can anyone provide these?
>
> The settings needed for Pi 2/3 and 4 are:
>
> mythfrontend Video Profile
>
> any changes required to /boot/config.txt
>
> The Pi 4 default uses dtoverlay=vc4-fkms-v3d with max_framebuffers=2,
> i.e. running Fake KMS; the Pi 2/3 do not use the fkms overlay.
>
> Once I have everything working I can update the mythtv-light wiki
> page(s) for Raspberry Pi.

Mike

All of my development work was done using a stock install of Raspbian
Buster on a Pi3 and a Pi4.

I think the Ansible playbooks should cover everything that is needed -
the only new requirement is libdrm - and that should now be in the
playbooks where appropriate.
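
If you just want the one extra dependency rather than running the full
playbook, on Raspbian/Debian it should be something like the below
(package name from memory, so treat it as an assumption):

  # DRM development headers needed by the render branch; libdrm-dev is,
  # I believe, the right package name on Raspbian Buster
  sudo apt update
  sudo apt install libdrm-dev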

You shouldn't need to enable/disable anything when running ./configure
- though you may want to post the output from your first run, as there
is no doubt something I've forgotten:) And of course disabling
anything you don't need will speed up compile times.
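
For reference, a local build on a stock Raspbian Buster install should
look roughly like this (a sketch from memory - adjust paths and the
job count as needed):

  # rough local build sketch for the fixes/31 branch on Raspbian Buster
  git clone https://github.com/MythTV/mythtv.git
  cd mythtv
  git checkout fixes/31
  cd mythtv              # configure lives in the mythtv/ subdirectory
  ./configure            # defaults should pick up MMAL and the V4L2 codecs
  make -j3               # drop the job count if a Pi3 runs short of memory
  sudo make install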

There are some issues - but first some background...

One of the main goals of the render branch was to give consistent,
performant and accurate video display across different platforms. So
there is now one 'display' pathway that always uses OpenGL/ES for
rendering.

One of the big problems with OpenGL on the Pi is how Qt is configured.
A stock Raspbian Qt build is built for the open source OpenGL drivers.
If the closed source Broadcom driver is in use, Qt OpenGL applications
will not work. Likewise, if you tortured yourself enough to build your
own Qt with Broadcom support, trying to launch a Qt app using the open
source driver will fail. This has implications for rendering video
with OpenGL.
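
For anyone unsure which driver they are running, the relevant
/boot/config.txt lines are the ones Mike quoted - a sketch of the two
states, not a recommendation either way:

  # /boot/config.txt - with the overlay enabled you get the open source
  # ('fake' KMS) stack, which is what a stock Raspbian Qt build expects;
  # comment it out and you are back on the closed source Broadcom
  # driver, where stock Qt OpenGL applications will not start
  dtoverlay=vc4-fkms-v3d
  max_framebuffers=2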

One of the first casualties of taking the OpenGL route was OpenMAX
support. It is/was effectively just a wrapper around MMAL, I didn't
have the energy to modify our code to work with OpenGL ES, and MMAL
and V4L2 codecs have all the focus upstream - so it is effectively
dead in the water. OpenMAX (and MMAL) will never get HEVC support,
but it should arrive for V4L2 at some point. So any build scripts
need to be updated to remove any OpenMAX options.
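
A quick way to confirm nothing OpenMAX related is left behind - the
exact option spelling in older scripts varies, so this is just a
sketch:

  # on fixes/31 configure should no longer know about OpenMAX at all
  ./configure --help | grep -i openmax
  # and check your own packaging/build scripts for a stale flag
  grep -ri openmax /path/to/your/build/scripts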

So I focused on MMAL and V4L2 codecs, which are both supported
reasonably well in FFmpeg.

The FFmpeg upstream code for both MMAL and V4L2 will return video
buffers to 'main' memory when decoding - not that there is a real
distinction between CPU and GPU memory on the Pi - but it does mean
there are multiple extra memory transfers for both decoders. This is
what the V4L2 and MMAL 'decode only' decoders do. They both work - but
are slow - and MMAL will still work in this mode regardless of the
OpenGL driver in use. V4L2 'decode only' will only work, I think (it's
been a while!), with the open source VC4 driver.

So I patched our version of FFmpeg to use 'zero copy' buffers for both
MMAL and V4L2. For V4L2 this means buffers are allocated as DRM-PRIME
memory which can then be directly mapped to EGL images/OpenGL textures
and displayed without any copies. The MMAL pathway is similar. These
are the 'direct rendering' decoders. Unfortunately, Broadcom uses a
'custom' implementation for mapping video memory to textures, so if
the open source driver is in use, direct rendering of MMAL video is
not available. Likewise, the closed source driver will not work with
V4L2 buffers.

In summary - a default Raspbian build should give you decoder options
of MMAL (decode only), V4L2 and V4L2 (decode only). The supported
codecs are different between the Pi3 and Pi4.

So to the issues, in no particular order:-

- the VC4 V4L2 driver does not pass through the interlacing flags when
decoding. So automatic deinterlacing just doesn't work - and you have
to manually enable deinterlacing through the OSD menu (other V4L2
drivers do handle this - it's just the Pi code that doesn't)

- performance is not good enough. I would expect the Pi4 to
decode/display a standard h264 1080i stream without issue. But it
struggles - especially when deinterlacing (hint: only enable basic
deinterlacing). CPU load is low, so I'm assuming there is a bottleneck
somewhere. I have a hunch that trying to display anything close to the
display refresh rate hits issues - which would point to Qt.

- performance can be improved by running mythfrontend without X. Boot
to the console and run with 'QT_QPA_PLATFORM=eglfs mythfrontend' (see
the sketch after this list). This is generally much smoother - though
still far from perfect - but it has issues of its own. There is a bug
in Qt that causes it to crash on exit, and if you are not remotely
logged in, you will lose the console; this is, I think, fixed in later
versions of Qt. Keypresses also have a nasty habit of being passed
through to the console - which can cause interesting behaviour when
you launch a second instance of mythfrontend:) There is also a problem
with blank screens when exiting playback, which is 'resolved' when you
press a key - not sure what causes it, but I think it is something to
do with how we disable rendering for plugins etc. And there is no
display mode switching, though I added some Pi specific code which
will give you frame rate switching at least (it disallows resolution
switching as that directly undercuts the Qt platform plugin, which has
no idea that it needs to create a new framebuffer).

- the Pi3 only has OpenGL ES 2.0 - which doesn't support lossless
rendering of 10bit video. Probably not a real world issue - but it
grates :)

- I can't get the Pi4 to switch to 4k60p for love nor money, though I
did make some changes so the texture caching in libmythui scales to
the size of the display - which speeds up a 4k GUI considerably.
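
The console run mentioned in the list above, with a possible recovery
step for the crash on exit - the reset is a workaround rather than a
fix, so treat this as a sketch:

  # run straight on EGLFS from a console login (no X); keep an ssh
  # session open the first time in case the exit crash takes the
  # console with it
  QT_QPA_PLATFORM=eglfs mythfrontend
  # if the console is left unusable afterwards, typing 'reset' blind
  # (or from the remote login) sometimes brings it back
  reset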

Other than that - CEC works well out of the box:)

In terms of going forward, there is talk of a new V4L2 stateless
decoder for HEVC - though I suspect it is some way off. There is a
25,000 line patch for FFmpeg to accelerate HEVC on the Pi3 - which I
passed on:) - and there is an FFmpeg patch from the Pi foundation for
HEVC acceleration on the Pi4 - but it is a work in progress and has no
mechanism to integrate with OpenGL.

MMAL support could probably be vastly improved by using the MMAL
presentation API but I haven't had the time to look at it, and I'm not
sure how it would integrate with OpenGL rendering. At the very least
it could be used for deinterlacing.

The V4L2 direct rendering should be much faster with DRM rendering.
That is something I'm going to look at generically (it should work
with other devices as well as desktop machines) but it requires a
custom Qt platform plugin - which is a big ticket item for various
reasons.

Any questions, just ask.
Regards
Mark

