[mythtv] YV12 problem
pb.mythtv at gmail.com
Thu Dec 13 16:46:41 UTC 2018
On 12/12/18 10:48 PM, David Engel wrote:
> On Wed, Dec 12, 2018 at 06:54:32PM -0500, Peter Bennett wrote:
>> On 12/12/18 5:49 PM, Jean-Yves Avenard wrote:
>>> TV broadcasts use exclusively mpeg2, h264 and h265.
>>> Most embedded systems like the fire2 have a hardware decoder for those.
>>> So what you get out of the decoder will be a GPU based nv12 image.
>>> For other codecs, say vp8, vp9, or av1, you have to use a software
>>> decoder, and they will output yuv420 (if 8 bits).
>>> I was just saying earlier that dropping yuv420 means you'll have to do
>>> a conversion to nv12 right outside the decoder: an extra memory
>>> allocation and an unnecessary copy.
>>> All this when converting any nv12 shader to handle yv12 is trivial; you
>>> could even use the same code for both.
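The extra allocation-and-copy described above is just repacking the two planar chroma planes of YUV420 (YV12/I420) into NV12's single interleaved UV plane. A minimal illustrative sketch in Python (not MythTV code; a real implementation would do this in C on the decoder's output buffers):

```python
def yuv420_to_nv12(y, u, v):
    """Repack planar YUV420 (separate U and V planes) into NV12.

    NV12 keeps the Y plane unchanged but stores chroma as a single
    half-resolution plane of interleaved samples: U0 V0 U1 V1 ...
    This is exactly the extra allocation and copy that dropping
    yuv420 support would force after every software decode.
    """
    uv = bytearray(2 * len(u))
    uv[0::2] = u          # even bytes: U samples
    uv[1::2] = v          # odd bytes: V samples
    return bytes(y), bytes(uv)

# 4x4 frame: 16 luma bytes, 4 U bytes, 4 V bytes (4:2:0 subsampling)
y = bytes(range(16))
u = bytes([10, 11, 12, 13])
v = bytes([20, 21, 22, 23])
y_out, uv = yuv420_to_nv12(y, u, v)
```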
>> Jya - some quick notes on what I have been up to -
>> I have added MythTV code to decode using mediacodec via FFmpeg, plus new
>> code to support vaapi with deinterlacing (called vaapi2 in MythTV), and I
>> am working on nvdec. However, I need to implement direct output from the
>> decoder to video. Currently, everything I have added decodes to memory and
>> then uses the existing MythTV OpenGL code to render. This is not fast
>> enough for 4K video. I will have to learn how to do the direct output from
>> decode to video.
> Have you looked at videoout_openglvaapi.cpp yet? What Mark described
> mostly(*) made sense and seems like the way forward.
I looked at it briefly before I started the vaapi2 work. It is not easy
to understand, and I figured it might be easier to start from scratch.
However, to get direct rendering I need to dig into it, or look at
EGL, which is what JYA recommends.
> Something very
> similar to that seems like the way to go. Configure vaapi to decode
> the frames into opengl memory. If hardware deinterlacing is chosen, it
> gets done during decoding and we simply display the resulting progressive
> frames. If opengl deinterlacing is chosen, don't deinterlace during
> decoding and do so in opengl if needed. The only loss is the ability
> to use the software deinterlacers, which really isn't a loss in my
> opinion.
> (*)I don't think Mark fully grasped that the deinterlacing could be
> done automatically during decoding. Either that or he knows about
> some other opengl relationship to vaapi of which I'm unaware.
>> One problem with mediacodec decoding is that on most devices it does not
>> do deinterlacing, and it does not pass MythTV the indicator to say the
>> video is interlaced. This forces me to use software decoding for mpeg2 so
>> that we can detect the interlacing and use the OpenGL deinterlacer.
> I thought it did give us an indication; we just couldn't know
> beforehand until we actually tried it. If it doesn't deinterlace, what do
> we get back when we give it 1 interlaced frame? We either get back 1
> frame or 2, right?
We get back an interlaced frame, but the ffmpeg indicator that tells
whether it is interlaced is "false", meaning it is not interlaced. That
indicator is what normally turns on the deinterlacer in MythTV. I am not
sure about the bit format of the frame; from what I understand, interlaced
frames have two fields, arranged one after the other, like two half
pictures. I would have expected the output to be completely corrupted if
the renderer assumed it was one progressive frame when it was actually two
interlaced fields. There is something going on that I don't understand,
and further investigation is needed. Perhaps we can tell if it is
interlaced.
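One possible explanation for why the picture is not completely corrupted: decoders commonly return interlaced content with the two fields woven together line by line (top field on even rows, bottom field on odd rows) rather than stacked as two half-pictures, so a renderer that treats the frame as progressive shows combing artifacts on motion instead of garbage. A sketch of that woven layout (illustrative Python, not MythTV code):

```python
def weave_fields(top, bottom):
    """Weave two fields into one frame, alternating rows.

    Each field is a list of rows. The woven frame places the top
    field on even rows and the bottom field on odd rows, which is
    the layout decoders typically hand back for interlaced content.
    """
    assert len(top) == len(bottom)
    frame = []
    for t_row, b_row in zip(top, bottom):
        frame.append(t_row)   # even row: from top field
        frame.append(b_row)   # odd row: from bottom field
    return frame

top = [[1, 1], [3, 3]]       # field captured at time t
bottom = [[2, 2], [4, 4]]    # field captured a field-period later
frame = weave_fields(top, bottom)
```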
> Oh, are you talking about non-double rate
> deinterlacing? Do we know if the frame is interlaced going in? If
> so, seems like a job for YAFS (yet another fine setting) for the user
> to tell us what to assume.
>> On some devices (e.g. fire stick g2), the MythTV OpenGL implementation is
>> not fast enough to display 30 fps, so we are dropping frames. I believe
>> that the OpenGL processing we use is too heavy, causing the slowdown. I
>> believe we need a lightweight OpenGL renderer that draws the images
>> without all the filters we normally use. The decoding part seems to be
>> fast enough; audio and video sync nicely, but the video is jerky because
>> of the dropped frames.
>> I need to spend some time learning OpenGL so that I can figure this all
>> out.
>> Any help or advice would be welcome.
>> mythtv-dev mailing list
>> mythtv-dev at mythtv.org
>> MythTV Forums: https://forum.mythtv.org