[mythtv] Playback next steps

Mark Kendall mark.kendall at gmail.com
Mon Dec 17 11:21:49 UTC 2018


> On Fri, 14/12/18, David Engel <david at istwok.net> wrote:
>
> > I didn't realize mythavtest was rotting. I thought it was a
> > "simpler" wrapper for tv_play/mythplayer without the rest of the
> > frontend. Is that not the case?
>

No - it isn't rotting, I just extended it a little - for double rate
deinterlacing and GPU decoder usage.
I wrote mythavtest to play video at the fastest possible rate (using
the current display profile) with no audio and no a/v sync. You need to
turn off any sync to vblank that the driver may be using. When using
OpenGL, there are also a number of environment variables you can set to
disable various functionality and test the different code paths.
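
For illustration, the pattern is just to gate an optional path on an
environment variable at startup - something like the sketch below (the
variable name is made up for illustration, not one of the actual
switches):

// Sketch only - gate an optional OpenGL path on an environment
// variable so mythavtest can exercise both code paths.
// 'OPENGL_NO_PBO' is a hypothetical name, not a real MythTV switch.
#include <QtGlobal>

static bool AllowPBOs(void)
{
    // e.g. OPENGL_NO_PBO=1 mythavtest somefile.ts
    return qgetenv("OPENGL_NO_PBO").isEmpty();
}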


> > There are some platforms, like the second generation firetv stick
> > and new mi box, that are limited in some way such that uyvy doesn't
> > work acceptably.  YV12 currently works on them for some definitions
> > of work.
>

I suspect it is either the extra CPU processing or the extra framebuffer
stage that is currently needed for UYVY.

On Fri, 14/12/18, Peter Bennett <pb.mythtv at gmail.com> wrote:
>
> Linear blend looks fine to me with YV12. That is what I use. Maybe I
> am missing something.
> One thing to note - using kernel on some android devices has a
> performance impact, it starts dropping frames - that does not happen
> with linear blend.
>

YV12 linear blend is currently not double rate at all - so presumably you
are only running single rate?
As discussed later, kernel is much more GPU intensive - so it is not
surprising that it starts to drop frames.
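
For anyone following along, single rate linear blend is just a cheap
[1 2 1]/4 vertical filter, one output frame per input frame. A rough
sketch of such a pass (not the shader MythTV actually generates):

// Rough sketch of a single rate linear blend pass (GLSL ES 2.0 style):
// each line is blended with the average of its neighbours, producing
// one output frame per input frame.
static const char *kLinearBlendSketch = R"GLSL(
precision mediump float;
uniform sampler2D s_texture0;
uniform float     lineheight;   // 1.0 / texture height in texels
varying vec2      v_texcoord;
void main(void)
{
    vec4 above = texture2D(s_texture0, v_texcoord + vec2(0.0,  lineheight));
    vec4 below = texture2D(s_texture0, v_texcoord + vec2(0.0, -lineheight));
    vec4 here  = texture2D(s_texture0, v_texcoord);
    gl_FragColor = (above + 2.0 * here + below) / 4.0;
}
)GLSL";

Double rate runs a pass like this twice per frame (once per field), and
kernel samples many more taps per pixel - which is why it hurts on
weaker GPUs.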


> It sounds useful to me. At one point I temporarily added some debug
> code to print the OpenGL shader code used. The MythTV source applies
> many dynamic changes to the shader code before sending it to OpenGL,
> and it was difficult for me to know what the code that actually ran
> looked like.
>

The full and final shader code is already dumped to the logs with debug
level logging.

> His code originally was for full screen OpenGL ES and required a
> customized Qt build. Some people did not like the softblend so I did
> some strange stuff with the OpenGL ES OSD. What we have now is x11
> based Qt displaying the GUI in an X11 window, OpenMAX video displaying
> the playback using the full-screen OpenMAX API, and OpenGL ES
> displaying the OSD using the OpenGL ES full-screen API. For OpenMAX
> and OpenGL it does some calculations to put the video and the OSD into
> the correct place to position it on the Qt window, to give the
> illusion that it is actually all in a window.
>
> In versions of Raspberry Pi Raspbian starting from 2018, using the
> OpenGL ES OSD causes severe slowdowns in the video playback, even if
> nothing is visible in the OSD, so it has become useless and we
> reverted to the softblend OSD.
>
> Piotr O has his own build of Raspberry Pi Mythfrontend which uses
> full-screen Qt specially built for the purpose. I don't know how that
> all works.
>

I'm currently compiling master with an up to date raspbian stretch lite
with slight modifications to enable ES2.0 and Qt5 OpenGL (both are
currently disabled if not an android build).

I noticed that the raspbian Qt5 is built with eglfs/egl support - I will
see whether that actually means broadcom specific eglfs - I don't think
it does - in which case I'll cross compile with the correct
configuration. Qt5 eglfs should just work without issue (it works on my
debian/mesa desktop). There is then work to be done to use the OpenMAX
EGL code when using eglfs and to use the 'regular' OpenGL OSD. For X11,
we should stick with the current approach, and disable the EGL OSD when
running with eglfs.
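
As an aside, once both paths exist, picking between them at runtime
should be trivial - Qt already tells us which platform plugin is in use.
A sketch (not current code):

#include <QGuiApplication>

// Sketch: choose the video/OSD path based on the Qt platform plugin.
// platformName() returns e.g. "xcb" or "eglfs"; must be called after
// the QGuiApplication has been constructed.
static bool UsingEglfs(void)
{
    return QGuiApplication::platformName().contains("eglfs");
}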

I think there are also some fixes needed in the configure script. At the
moment I think it looks for GLES2.0 support in the Qt spec to enable
GLES2.0 - but GLES2.0 support appears to be available whenever EGL is
available. I am not sure of the correct permutations here - and the
configure script makes my brain melt.
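
If it helps, the usual configure approach is to try compiling and
linking a trivial test program against the GLES2 headers/library rather
than trusting the Qt spec - something like the following (a sketch, not
what configure currently does):

// configure-style compile/link test: if this builds and links against
// -lGLESv2, GLES2.0 is available regardless of what the Qt spec claims.
#include <GLES2/gl2.h>

int main(void)
{
    // Never actually run - we only care that it compiles and links.
    return (int) glGetError();
}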

I suspect that the best performance (and memory consumption etc) will
only be achieved using straight eglfs - i.e. no X11. It's a shame no-one
provides a PPA for the right Qt5.

>
> > software fallback - why bother unless you have a modern CPU and a 15
> > year old GPU.
>
> Some reasons for using software decode:
> - VDPAU has a bug with decoding MPEG2 that results in pixellation on
>   many USA stations.
> - fire stick 2g mediacodec has a bug where deinterlaced content causes
>   the decoder to hang.
> - Subtitles are not working for some decoders. They work with software
>   decoding, for those people who need them.
>

To be clear, this is in reference to the software fallback for YUV to
RGB conversion - it already assumes software decode and is not related
to hardware accelerated decode.
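
For context, the fallback in question is just the per pixel matrix
multiply - e.g. limited range BT.601 with the usual integer
coefficients. A sketch of the work involved (not our exact code):

// Limited range BT.601 YUV -> RGB with the standard integer
// coefficients (clamping to 0-255 omitted for brevity). The software
// fallback has to do this on the CPU for every pixel of every frame.
static inline void YuvToRgb(int y, int u, int v, int &r, int &g, int &b)
{
    const int c = y - 16, d = u - 128, e = v - 128;
    r = (298 * c + 409 * e + 128) >> 8;
    g = (298 * c - 100 * d - 208 * e + 128) >> 8;
    b = (298 * c + 516 * d + 128) >> 8;
}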

> Note there is an android problem with UYVY. In some devices (e.g. fire
> stick g2), OpenGL ES does not support float precision highp and
> defaults to mediump. The OpenGL code that applies the color suffers a
> rounding error and instead of each pixel getting its correct color,
> each alternate pixel gets the color for its neighbor instead, on the
> right hand half of the screen. See https://imgur.com/dLoMUau and
> https://imgur.com/lbfyEWQ . I don't know why YV12 does not suffer from
> that problem.
>

Yes - definitely a precision issue - but is it always exactly half way
across the screen or at different positions? If it is always exactly
half way regardless of source width, it sounds more like an 'off by one'
error. (Worth noting that mediump only guarantees a 10 bit mantissa, so
pixel positions above 1024 cannot be represented exactly - which could
explain a failure that only appears part way across a wide frame.)

But the issue may be moot...

Since I wrote that summary, I've realised that both YV12 and UYVY are
subject to the 'interlaced chroma upsampling' bug. I fixed this with the
default, pre-UYVY code and then obviously forgot about it :)

The fix for UYVY is pretty straightforward. Don't use FFmpeg for
upsampling (a known issue with FFmpeg), use the MythTV packing code and
don't try to pack two pixels into one sample. That actually simplifies
the code and removes the need for the extra filter stage. It would
benefit from some NEON specific code - then most platforms will have
SIMD optimisations (I doubt there are many platforms not covered by
either MMX or NEON).
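
To make the 'one pixel per sample' point concrete, the repack becomes a
trivial interleave - roughly the following (plain C++ sketch with
progressive chroma siting assumed; a NEON version would process 8 or 16
pixels per iteration):

// Sketch: planar 4:2:0 (YV12) -> packed output, one pixel per sample
// (per-pixel Y plus shared U/V per texel) rather than two pixels per
// sample. Interlaced material needs the chroma rows mapped per field
// instead of the row >> 1 used here.
#include <cstdint>

static void PackPixels(const uint8_t *ysrc, const uint8_t *usrc,
                       const uint8_t *vsrc, uint8_t *dst,
                       int width, int height, int ypitch, int cpitch)
{
    for (int row = 0; row < height; ++row)
    {
        const uint8_t *yrow = ysrc + row * ypitch;
        const uint8_t *urow = usrc + (row >> 1) * cpitch;
        const uint8_t *vrow = vsrc + (row >> 1) * cpitch;
        for (int col = 0; col < width; ++col)
        {
            *dst++ = urow[col >> 1]; // shared chroma
            *dst++ = yrow[col];      // per-pixel luma
            *dst++ = vrow[col >> 1]; // shared chroma
        }
    }
}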

Not so sure about YV12. I think the shaders are trying to deal with it,
but it doesn't work and it is resampling progressive content as well. If
the frame is sampled for interlaced chroma, it needs proper repacking
regardless of whether it is deinterlaced or not - and progressive
content needs 'regular' resampling/packing.

With the UYVY change in mind, I'd actually propose the following:-

Drop support for the software YUV to RGB case.
 - it is only actually required for OpenGL1, as the only requirement for
GL2/ES2 to work is shaders - which are mandatory per the spec.

Drop the opengl-lite option
 - if the UYVY code is changed, I can't see any benefit to the lite
code. The shaders are simple, and using the apple/arb extensions for
lite just means you lose all colourspace and colour standard control.

This then leaves you with UYVY and YV12 - which I would change to
profiles (OpenGL-YV12 and OpenGL-UYVY) rather than settings per se. That
makes the interface cleaner (no settings) and you essentially have one
profile that is more CPU/memory transfer costly (UYVY) and one that is
more GPU costly (YV12). Users can then decide which works best for their
system.

The 'extra filter stage' could no doubt go (if still needed) after the
UYVY changes.

Thoughts?

regards
Mark

