[mythtv] [mythtv-commits] Ticket #2782: automatic simultaneous jobs scaling

Daniel Kristjansson danielk at cuymedia.net
Fri Dec 8 15:02:23 UTC 2006


On Fri, 2006-12-08 at 16:42 +0200, osma ahvenlampi wrote:
> Are you sure you've enabled the CFQ I/O elevator? It's the default in
> Red Hat and Fedora kernels, but I think some others may be defaulting
> to no elevator at all or the AS elevator, and those would indeed let a
> niced process I/O starve a non-niced one. As a bonus, according to Red
> Hat's benchmarks CFQ is also the highest-performance all-purpose
> elevator, but I'm particularly fond of it due to its ability to
> consider process nice value in scheduling decisions.
> 
> You can check from /sys/block/hda/queue/scheduler, and turn it on by
> adding elevator=cfq to the kernel boot parameters.

$ cat /sys/block/hda/queue/scheduler
noop anticipatory deadline [cfq]

No idea what this means. I guess I'm using cfq, is that correct?
I did try various schedulers the last time I looked at the
problem. They didn't make much of a difference, but I think
I remember the anticipatory one being worse than the others.
It's the seeks to the DB for writing the keyframe DB entries
that seem to throw the most randomness into the elevator.
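
(For reference, switching elevators at runtime, assuming a kernel
with the other schedulers compiled in, is just a sysfs write as
root, e.g.:

$ echo deadline > /sys/block/hda/queue/scheduler
$ cat /sys/block/hda/queue/scheduler
noop anticipatory [deadline] cfq

so it's easy enough to compare them under a real flagging load.)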

If this is disk scheduler dependent, how does this work on Macs,
the BSDs and MS Windows?
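
(On Linux, at least, the jobs' I/O priority could also be set
explicitly rather than relying on CFQ inferring it from nice.
This is only a sketch, assuming a 2.6.13 or later kernel with CFQ
and the ionice utility from schedutils/util-linux, and the
mythcommflag invocation is just illustrative:

$ nice -n 19 ionice -c3 mythcommflag ...

Class 3 is the "idle" class, so the job only gets disk time when
nothing else is asking for it; some kernels require root to set
it. That doesn't answer the question for the other platforms,
though.)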

I would think that networking would still be a problem even if
the disk I/O is taken care of. Maybe that could be addressed
by doing commercial flagging and transcoding only on the file
server instead of on the myth backend and frontend machines.
You could do this as long as the file server isn't one of those
file server appliances, but you would lose access to the CPUs
on the machines that aren't running jobs, and the jobs might
become CPU limited. (Backends with local disks could be
considered file servers for their own local recordings.)

-- Daniel


