[mythtv] [mythtv-commits] Ticket #2782: automatic simultaneous jobs scaling

osma ahvenlampi oa at iki.fi
Fri Dec 8 14:42:05 UTC 2006


On 12/8/06, Daniel Kristjansson <danielk at cuymedia.net> wrote:
> Simple math, lets say you have 4 ATSC recorders which are
> recording, and 3 frontends are watching 3 other streams off
> your disk, and your disk read/write speed is 21 MB/s. Since
> each stream is about 3 MB/s, adding another frontend process
> or starting a commercial flagging or transcode process would
> bring us over the edge.

This is why I referred to CFQ scheduling. With CFQ enabled and the
jobs niced, they won't be getting much I/O bandwidth in this
situation, so I really doubt they would be the thing taking you over
the edge -- the too-many-recordings-and-playbacks alone would be the
real contributor, as you said (in the part I clipped).
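
To illustrate what I mean by niced jobs, here's a rough sketch in
Python -- this is not the actual jobqueue code, and the mythcommflag
invocation is only a placeholder:

  import os
  import subprocess

  def run_niced(cmd, niceness=19):
      # Launch a background job at the lowest CPU priority. With the
      # CFQ elevator the kernel also factors the nice value into its
      # I/O scheduling, so the job yields disk bandwidth to the
      # recordings and playback streams.
      return subprocess.Popen(cmd, preexec_fn=lambda: os.nice(niceness))

  # Placeholder example: flag commercials on one recording at nice 19.
  job = run_niced(["mythcommflag", "-f", "/video/recording.mpg"])
  job.wait()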

The right way to address that situation is to create clear system
builders' guidelines for balancing tuners+clients against spindles,
and I suppose it would also be possible to enforce those limits or
warn users about imbalanced systems - if so, that functionality would
belong in mythtv-setup, in my opinion.
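
To make the idea concrete, a check of roughly this shape could live
in mythtv-setup. This is only a sketch: the per-stream rate and the
disk throughput figure come from Daniel's example above, and the
function and parameter names are made up, not anything that exists in
MythTV today:

  MB = 1024 * 1024
  STREAM_RATE = 3 * MB   # ~3 MB/s per ATSC stream (from the example)

  def check_balance(tuners, clients, disk_throughput, headroom=0.8):
      # Warn when simultaneous recordings plus playbacks could saturate
      # the disk; 'headroom' keeps some throughput in reserve for jobs
      # and seek overhead.
      demand = (tuners + clients) * STREAM_RATE
      budget = disk_throughput * headroom
      if demand > budget:
          print("Warning: %d streams need %.1f MB/s but only %.1f MB/s "
                "is safely available"
                % (tuners + clients, float(demand) / MB, float(budget) / MB))
      return demand <= budget

  # 4 recorders + 3 frontends against a 21 MB/s disk, as in the example.
  check_balance(tuners=4, clients=3, disk_throughput=21 * MB)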

Anyway, I think my patch would avoid starting new jobs in this
situation, since before the recordings would starve you should
already see enough iowait to push idle time below 25%, the threshold
I used for deciding whether to start a new job. It would not help if
the job was already running (and CFQ was not doing its job) and a
newly started recording or playback caused the I/O starvation, but
that wasn't something I was trying to address in the first place -- I
suppose that if CFQ turns out not to be enough, the jobqueue could
pause running jobs at that point, but personally I think that would
be effort spent in the wrong place.
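
For reference, the decision the patch makes is along these lines -- a
simplified sketch of the idea, not the patch itself:

  import time

  def cpu_idle_fraction(interval=1.0):
      # Sample /proc/stat twice and return the fraction of time the CPU
      # was truly idle. iowait is a separate column and is counted as
      # busy time here, so heavy disk contention pulls the idle figure
      # down even when little CPU is actually being used.
      def snapshot():
          with open("/proc/stat") as f:
              fields = [int(v) for v in f.readline().split()[1:]]
          # columns: user, nice, system, idle, iowait, irq, softirq, ...
          return fields[3], sum(fields)

      idle1, total1 = snapshot()
      time.sleep(interval)
      idle2, total2 = snapshot()
      return float(idle2 - idle1) / max(total2 - total1, 1)

  # Only start a queued job while the system still has >25% idle time.
  if cpu_idle_fraction() > 0.25:
      print("idle capacity available, ok to start a queued job")
  else:
      print("system busy (CPU or iowait), defer the job")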

> Even though I run my commercial flag processes at the slowest
> setting and they use only a 1-3% of CPU I do sometimes have to
> cancel them for playback to work well. So CPU usage is not a

Are you sure you've enabled the CFQ I/O elevator? It's the default in
Red Hat and Fedora kernels, but I think some others may be defaulting
to no elevator at all or the AS elevator, and those would indeed let
a niced process I/O-starve a non-niced one. As a bonus, according to Red
Hat's benchmarks CFQ is also the highest-performance all-purpose
elevator, but I'm particularly fond of it due to its ability to
consider process nice value in scheduling decisions.

You can check from /sys/block/hda/queue/scheduler, and turn it on by
adding elevator=cfq to the kernel boot parameters.
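
For example, a quick way to check which elevator is active -- the
bracketed-name format and the ability to switch at runtime depend on
the kernel build, and "hda" is just an example device:

  def active_elevator(disk="hda"):
      # On 2.6 kernels this file lists the compiled-in schedulers with
      # the active one in brackets, e.g. "noop anticipatory deadline [cfq]".
      with open("/sys/block/%s/queue/scheduler" % disk) as f:
          line = f.read()
      if "[" in line:
          return line[line.index("[") + 1:line.index("]")]
      return line.strip()

  print(active_elevator())

  # If CFQ is compiled in, it can also be enabled at runtime (as root)
  # by writing "cfq" into the same file, instead of rebooting with
  # elevator=cfq on the kernel command line.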

-- 
Osma Ahvenlampi   <oa at iki.fi>       http://www.fishpool.org

