[mythtv] YV12 problem

David Engel david at istwok.net
Thu Dec 13 03:48:58 UTC 2018


On Wed, Dec 12, 2018 at 06:54:32PM -0500, Peter Bennett wrote:
> On 12/12/18 5:49 PM, Jean-Yves Avenard wrote:
> > TV broadcasts use exclusively mpeg2, h264 and h265.
> > Most embedded systems like the fire2 have hardware decoders for those.
> > 
> > So what you get out of the decoder will be a GPU based nv12 image.
> > 
> > For other codecs, say vp8, vp9 or av1, you have to use a software
> > decoder, and they will output yuv420 (if 8-bit).
> > 
> > I was just saying earlier that dropping yuv420 means you'll have to do
> > a conversion to nv12 right outside the decoder, so an extra memory
> > allocation and an unnecessary copy.
> > 
> > All this when converting any nv12 shader to handle yv12 is trivial; you
> > could even use the same code for both.
> > 
> Jya - some quick notes on what I have been up to -
> 
> I have added MythTV code to decode using mediacodec via FFmpeg, as well as
> new code to support vaapi with deinterlacing (called vaapi2 in MythTV), and
> I am working on nvdec. However, I still need to implement direct output
> from the decoder to video. Currently, everything I have added decodes to
> memory and then uses the existing MythTV OpenGL code to render. This is not
> fast enough for 4K video. I will have to learn how to do the direct output
> from decode to OpenGL.
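
Side note on the nv12 vs yv12 shader point above: the conversion really can
share code.  A rough illustration (not code from the MythTV tree; sampler
names and the approximate BT.601 limited-range coefficients are assumptions),
written as GLSL embedded in C++ strings:

    #include <string>

    // Shared YUV -> RGB conversion used by both shaders below.
    static const std::string kYuvToRgb = R"GLSL(
    vec3 yuv_to_rgb(vec3 yuv)
    {
        float y = 1.164 * (yuv.x - 0.0625);   // limited-range luma
        float u = yuv.y - 0.5;
        float v = yuv.z - 0.5;
        return vec3(y + 1.596 * v,
                    y - 0.391 * u - 0.813 * v,
                    y + 2.018 * u);
    }
    )GLSL";

    // NV12: full-size Y plane plus one half-size interleaved UV plane.
    // (.rg assumes a GL_RG texture; with GLES2 LUMINANCE_ALPHA it is .ra.)
    static const std::string kFragNV12 = kYuvToRgb + R"GLSL(
    uniform sampler2D s_luma;
    uniform sampler2D s_chroma;
    varying vec2 v_texcoord;
    void main()
    {
        float y  = texture2D(s_luma,   v_texcoord).r;
        vec2  uv = texture2D(s_chroma, v_texcoord).rg;
        gl_FragColor = vec4(yuv_to_rgb(vec3(y, uv)), 1.0);
    }
    )GLSL";

    // YV12: three separate planes; only the chroma fetch changes.
    static const std::string kFragYV12 = kYuvToRgb + R"GLSL(
    uniform sampler2D s_luma;
    uniform sampler2D s_u;
    uniform sampler2D s_v;
    varying vec2 v_texcoord;
    void main()
    {
        float y = texture2D(s_luma, v_texcoord).r;
        float u = texture2D(s_u,    v_texcoord).r;
        float v = texture2D(s_v,    v_texcoord).r;
        gl_FragColor = vec4(yuv_to_rgb(vec3(y, u, v)), 1.0);
    }
    )GLSL";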

Have you looked at videoout_openglvaapi.cpp yet?  What Mark described
mostly(*) made sense, and something very similar to that seems like the
way to go.  Configure vaapi to decode the frames into opengl memory.  If
hardware deinterlacing is chosen, it gets done during decoding and we
simply display the resulting progressive frames.  If opengl
deinterlacing is chosen, don't deinterlace during decoding and do so in
opengl if needed.  The only loss is the ability to use the software
deinterlacers, which really isn't a loss in my opinion.

(*)I don't think Mark fully grasped that the deinterlacing could be
done automatically during decoding.  Either that or he knows about
some other opengl relationship to vaapi of which I'm unaware.
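
Roughly, something like this (a sketch against the plain FFmpeg API, not
code from videoout_openglvaapi.cpp; names are made up):

    extern "C" {
    #include <libavcodec/avcodec.h>
    #include <libavutil/hwcontext.h>
    }

    // Prefer the VAAPI hardware format so decoded frames stay on the GPU.
    static enum AVPixelFormat ChooseVaapi(AVCodecContext*,
                                          const enum AVPixelFormat *fmts)
    {
        for (const enum AVPixelFormat *p = fmts; *p != AV_PIX_FMT_NONE; ++p)
            if (*p == AV_PIX_FMT_VAAPI)
                return AV_PIX_FMT_VAAPI;
        return fmts[0];   // fall back to whatever the decoder offers
    }

    bool SetupVaapiDecoder(AVCodecContext *ctx)
    {
        AVBufferRef *device = nullptr;
        // A null device string lets libva pick the default display/DRM node.
        if (av_hwdevice_ctx_create(&device, AV_HWDEVICE_TYPE_VAAPI,
                                   nullptr, nullptr, 0) < 0)
            return false;

        ctx->hw_device_ctx = av_buffer_ref(device);
        ctx->get_format    = ChooseVaapi;
        av_buffer_unref(&device);

        // Frames now come back as AV_PIX_FMT_VAAPI surfaces.  If hardware
        // deinterlacing is chosen, run them through VAAPI VPP (e.g. FFmpeg's
        // deinterlace_vaapi filter) before display; if opengl deinterlacing
        // is chosen, hand the interlaced surface to the GL interop and
        // deinterlace in a shader.
        return true;
    }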

> One problem with mediacodec decoding is that on most devices it does not do
> deinterlacing and does not pass MythTV an indicator to say the video is
> interlaced. This forces me to use software decoding for mpeg2 so that we can
> detect the interlacing and use the OpenGL deinterlacer.

I thought it did give us an indication; we just couldn't know
beforehand until we actually tried it.  If it doesn't deinterlace, what
do we get back when we give it 1 interlaced frame?  We either get back
1 frame or 2, right?  Oh, are you talking about non-double-rate
deinterlacing?  Do we know whether the frame is interlaced going in?
If so, this seems like a job for YAFS (yet another fine setting) for
the user to tell us what to assume.
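
Concretely, the setting could be nothing fancier than this (hypothetical
names, just a sketch):

    extern "C" {
    #include <libavutil/frame.h>
    }

    // Trust the decoder's flag when it is set; otherwise fall back to what
    // the user told us to assume for this device/codec.
    bool FrameIsInterlaced(const AVFrame *frame, bool user_assume_interlaced)
    {
        if (frame->interlaced_frame)      // decoder flagged it explicitly
            return true;
        // mediacodec often leaves the flag unset even for interlaced input.
        return user_assume_interlaced;
    }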

David

> On some devices (e.g. fire stick g2), the MythTV OpenGL implementation is
> not fast enough to display 30 fps, so we are dropping frames. I believe the
> OpenGL processing we use is too heavy, causing the slowdown. I believe we
> need a lightweight OpenGL renderer that renders the images without all the
> filters we normally use. The decoding part seems to be fast enough, and
> audio and video sync nicely; the video is just jerky because of the
> dropped frames.
> 
> I need to spend some time learning OpenGL so that I can figure this all out.
> 
> Any help or advice would be welcome.
> 
> Peter
> _______________________________________________
> mythtv-dev mailing list
> mythtv-dev at mythtv.org
> http://lists.mythtv.org/mailman/listinfo/mythtv-dev
> http://wiki.mythtv.org/Mailing_List_etiquette
> MythTV Forums: https://forum.mythtv.org
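
On the lightweight render idea: what I would picture per frame is on the
order of one program and one quad, with no intermediate FBO passes (a sketch
only; the texture and program setup is assumed to happen elsewhere):

    #include <GLES2/gl2.h>

    // One draw call per frame: bind the frame texture and draw a
    // full-screen quad with a simple conversion shader, skipping the
    // usual filter chain.
    void RenderFrameLightweight(GLuint program, GLuint frameTexture)
    {
        glUseProgram(program);
        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, frameTexture);
        // Vertex attributes for the quad are assumed to be bound already.
        glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
    }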

-- 
David Engel
david at istwok.net

