[mythtv] YV12 problem

Jean-Yves Avenard jyavenard at gmail.com
Thu Dec 13 10:36:31 UTC 2018

On Thu, 13 Dec 2018 at 00:54, Peter Bennett <pb.mythtv at gmail.com> wrote:
> I have added MythTV code to decode using mediacodec via FFmpeg, also new
> code to support vaapi with deinterlacing (called vaapi2 in MythTV) and I
> am working on nvdec. However, I need to implement direct output from
> decoder to video. Currently for all of those I have added it is decoding
> to memory and then using the existing MythTV OpenGL to render. This is
> not fast enough for 4K video. I will have to learn how to do the direct
> output from decode to OpenGL.

So in my experience, doing hardware decoding and then a readback is slower
(sometimes much slower) than plain software decoding.
The readback is the slow part. It's not too bad with Intel, as the memory is
shared, but for NVIDIA (VDPAU/NVDEC) or AMD (VAAPI) it's terrible.

What I don't get, however, is why you can't use the OpenGL compositor
there: a VAAPI surface is in effect just like an OpenGL one and can be
used as-is. That's what the older vaapi decoder was doing; you get an
OpenGL image out.

> One problem with mediacodec decoding is that in most devices it does not
> do deinterlacing and it does not pass MythTV the indicator to say video
> is interlaced. This forces me to use software decoding for mpeg2 so that
> we can detect the interlace and use the OpenGL deinterlacer.

You should be able to determine whether the content is interlaced
without decoding it. I'm not sure about MPEG-2, but for H.264 and HEVC
you certainly can: it's in the frame/stream header (in the SPS NAL for
H.264). Other codecs like VP8, VP9 and AV1 don't support interlacing at
all; thank god that prehistoric thing will disappear.

> On some devices (e.g. fire stick g2), the MythTV OpenGL implementation
> is not fast enough to display 30 fps, so we are dropping frames. I
> believe that the OpenGL processing we use is too much, causing the
> slowdown. I believe we need a lightweight OpenGL render that renders the
> images without all the filters we normally use. The decoding part of it
> seems to be fast enough, audio and video sync nicely, just the video is
> jerky because of the dropped frames.

If you are doing a readback, this is where your slowness comes from,
almost guaranteed.
We do 1080p 60fps on the Fire Stick 2 just fine with GeckoView, but we
do no readbacks at all: it's GPU hardware all the way, with OpenGL
compositing. On Android it's even easier, as there's direct support for
NV12 surfaces.

> I need to spend some time learning OpenGL so that I can figure this all out.

OpenGL is on the way out; I assume what you want here is EGL, which can
then interface with OpenGL ES. That's what you use with the OpenMAX
decoder. With Android, what you get is a graphics surface directly, with
an opaque shared handle that the Android graphics stack can consume
directly.

More information about the mythtv-dev mailing list