[mythtv] 20180705 Android Observations

Mark Spieth mark at digivation.com.au
Fri Jul 6 22:54:05 UTC 2018


On 7/7/2018 6:15 AM, Peter Bennett wrote:
>
>
> On 07/06/2018 03:09 PM, David Engel wrote:
>> I always use a news program with scrolling ticker for this testing.
>> Where is the number of buffers defined?  I'll try a couple of things
>> (# of buffers and 4k vs. 1080p TV) to see if I can get 1080i to be as
>> smooth as 720p.
> Frame Buffers -
> libmythtv/videoout_opengl.cpp - VideoOutputOpenGL::CreateBuffers
> The normal values - vbuffers.Init(31, true, 1, 12, 4, 2);
> In my patch - vbuffers.Init(4, true, 1, 2, 2, 1);
>
> For the meaning of the parameters - videobuffers.cpp - VideoBuffers::Init
> see the comments above that method.
>
> This is a temporary fix. Changing it here may not be the eventual 
> solution because this changes it for every place OpenGL is used. Some 
> conditional logic may be needed to use different settings for 
> different cases.
>
> The default value of 31 buffers probably never considered the 
> possibility of 4K video, and was also probably set 10 or 15 years 
> ago for lower-powered machines than are currently used. The fact that 
> it works reasonably well with one eighth of that number of buffers is 
> interesting.
>
> I suspect that the reason for your jerkiness is that with 
> deinterlacing the 1080i video becomes 60 fps. Perhaps some logic 
> somewhere that increases the number of buffers depending on the fps 
> would be useful.
>
> Also significant - Frontend setup - Video - Playback - General 
> Playback - Audio Read Ahead. This controls buffers on the input side 
> and will make a difference, especially since changing the number of 
> frame buffers affects the video pipeline. Try setting this higher to 
> see if it helps. I have it set at 300 ms on the Shield. Increasing 
> this has less memory impact than increasing the frame buffers because 
> it is compressed data that is being buffered.
>
>>
>> You should try more testing.  I had several cases where a 1 hour
>> ff/rew passed without issue.  The problem is intermittent and
>> sometimes takes multiple attempts before it happens.
> Could you describe more fully what you see? You only said "playback 
> timeouts" during FF.
>>> I am wondering what to do about the buffers. There are currently 
>>> hardcoded
>>> numbers for each video output method (OpenGL, VDPAU, etc.). It is 
>>> fine on
>>> most linux systems to grab 300MB for buffers, but android tends to 
>>> kill the
>>> application if it thinks it is using too much memory. I think it 
>>> will need
>>> some way of dynamically setting the number of buffers based on 
>>> things like
>>> the amount of available memory, framerate, picture size, operating 
>>> system.
>> Most Linux systems probably have more memory and/or swap space to
>> better deal with low memory issues.
> Swap space - not good. As an experiment I tried playing a 4K video on 
> a Raspberry Pi. It immediately started swapping, the hard drive went 
> crazy, and everything froze. Swapping tends to kill response times.
>>> Another thing is there seems to be too much copying of frames from 
>>> one place
>>> in memory to another. For multi-megabyte frames that can take a 
>>> significant
>>> amount of time on a low end cpu.
>> I expect using Surface for rendering instead of OpenGL will fix that.
> I need to figure out how to do the surface stuff. It sounds like it 
> requires creating a new video output method along the lines of VDPAU. 
> (A lot of work).
>
> I am hoping the OpenGL can give us a good start. It may be possible to 
> reduce the amount of copying of frames that is being done. I will 
> investigate that.
I have the start of an Android surface framework, but it is currently 
disabled and doesn't even compile. I can share this if you like (patch).
I stopped looking at it once I got OpenGL working well enough, and it 
should be roughly equivalent in performance for rendering purposes.

If we could render a frame directly to OpenGL video memory, that would be 
even better, but I don't understand the structure (or OpenGL) well enough 
to even know where to start (yet).

Android Qt also has the restriction that you only have one GL context, 
which has to be shared between the theme and video, and I think that 
restricts things quite a bit.

Peter, you mentioned excessive unnecessary copying of frames. It would 
be good to analyse and optimise this first. I also have a gut 
feeling that memory bandwidth has a big impact on performance, e.g. 
a DDR3 box vs DDR4. It would be nice to get a memory audit of the whole 
application. I suspect themes are also a big memory hog.
It could be that software decoding would work for 4K with less frame 
duplication.

Also, if there is something else GL could do algorithm-wise, I have a bit 
of an understanding of GL code now, having mucked around with it a bit.

Mark


More information about the mythtv-dev mailing list