[mythtv-users] Playback stoppy and go-y. Ie super-duper-uber-jittery.

Tony Lill ajlill at ajlc.waterloo.on.ca
Wed Mar 29 18:02:07 UTC 2006


Greg Stark <gsstark at mit.edu> writes:

> Trey Boudreau <trey at treysoft.com> writes:
>
>> It *feels* like the frontend can't both suck down the next few frames and
>> render the current batch at the same time. 'top' indicates 80% idle time.
>> Unfortunately I don't have another machine with enough CPU to run a second
>> HD frontend to compare against.
>
> I tried recording something and playing it back instead of live tv and had the
> same thing happen. My machine also is ~ 80% idle when this is happening.
>
> The message about "prebuffering pause" consistently gets printed just as the
> playback resumes. It's like it's prebuffering a bunch of frames for a while,
> then playing a bunch of frames for a while, then prebuffering, then playing.
> Without allowing any context switches to continue playing while prebuffering
> is happening.
>
> What it feels like to me is some sort of thread scheduling misunderstanding.
> Something like the issues recently with sched_yield and OpenOffice where a
> change in semantics made OpenOffice feel unresponsive. Or like the various
> issues java implementations have had with scheduling on different platforms.
>
> One thing that would cause exactly this behaviour would be if the prebuffering
> thread was a higher priority than the playback thread.
>
> I've just started looking at the code. Is there any chance the playback
> engine is a module that can be replaced with a different engine? Perhaps a
> low-overhead, non-threaded implementation based on mplayer?

I rather doubt it. I keep thinking of something that goes under tomato
sauce and meatballs when I look at the code....
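
One way to rule out the thread-priority theory (a rough sketch; it assumes
the frontend process is named mythfrontend and that procps and chrt are
available) is to look at each thread's scheduling class and priority:

# per-thread scheduling class, rt priority, nice value and priority
ps -C mythfrontend -L -o pid,tid,cls,rtprio,ni,pri,comm
# inspect one thread's scheduling policy/priority by thread id
chrt -p <tid>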

I've found that the 2.6 series of kernels doesn't seem to share
resources as well as the 2.4 did. 

The tuning changes below helped with my playback problems, except when I'm
playing back from an NFS-mounted share and also doing a lot of other file
I/O to that share.

Basically, if you have a lot of memory, a lot of dirty buffers can pile up
and then get flushed all at once. When that happens, your disk reads get
starved until the flush is done. These tuning variables make the flusher
run more often, but with smaller chunks. The values below work for 1 GB of
memory; if you have more, use a smaller value for
dirty_background_ratio.

Anyway, it might help.

# % of physical memory after which a process doing a write will be paused
# until dirty pages have been written
# default 40
echo 50 > /proc/sys/vm/dirty_ratio

# % of physical memory at which pdflush will be woken. Hopefully this will limit
# system pauses for flushes
# default 10
echo 5 > /proc/sys/vm/dirty_background_ratio

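# use the CFQ I/O scheduler, which shares disk time between processes more
# evenly, so a big write is less likely to starve playback reads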
echo cfq > /sys/block/hda/queue/scheduler
echo cfq > /sys/block/hdc/queue/scheduler
echo cfq > /sys/block/hdd/queue/scheduler
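
If you want the vm settings to survive a reboot, the equivalent sysctl keys
can go in /etc/sysctl.conf (a sketch; same values as above, applied at boot
or with "sysctl -p"):

# /etc/sysctl.conf
vm.dirty_ratio = 50
vm.dirty_background_ratio = 5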

