[mythtv] [mythtv-commits] Ticket #2782: automatic simultaneous jobs scaling

Daniel Kristjansson danielk at cuymedia.net
Fri Dec 8 22:49:33 UTC 2006


On Fri, 2006-12-08 at 23:57 +0200, osma ahvenlampi wrote:
> On 12/8/06, Daniel Kristjansson <danielk at cuymedia.net> wrote:
> > It's the seeks to the DB for writing the keyframe DB entries
> > which seems to throw the most randomness into the elevator.
> 
> mysql is most likely running at the same nice value (0) as the backend
> writing the recordings. In this setting, it would most likely be the
> best approach to run recordings at (unnice) -5, mysql at 0, playback
> at (nice) +5 and transcodings and commflags at (nice) +10.

For most of the recorders people use, the effective priority is
the higher nice value of the backend and mysql, since writing a
recording means writing to both the filesystem and the DB. I made
this same mistake before the 0.19 release when trying to raise
the number of recorders you could run simultaneously on a single
disk to three. Lowering the recorder's niceness below the mysql
niceness has very little effect for MPEG recorders because we
need to write the keyframe rows to the DB once in a while. (The
NVP handles this much better, but fewer and fewer people still
use framegrabbers, given the low cost of PVR-x50 and digital
cards these days.)
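To illustrate the ordering problem (a hypothetical helper, not
MythTV code): however low you push the recorder's nice value, its
keyframe writes still land at mysqld's priority, because they are
performed by the DB server process, not the child you reniced.

```python
import os
import subprocess

def spawn_at_niceness(cmd, niceness):
    """Launch cmd as a child whose nice value is bumped by `niceness`.

    Hypothetical helper: even with a negative delta (which needs
    root), an MPEG recorder's keyframe rows are written by mysqld
    at mysqld's priority, so this alone does not fix the elevator.
    """
    return subprocess.Popen(cmd, preexec_fn=lambda: os.nice(niceness))
```

An unprivileged backend can only pass a non-negative delta here,
which is another reason the mysql niceness ends up dominating.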

We also don't want playback to run at greater niceness than
recording. Playback has much higher real-time requirements.

> > If this is disk scheduler dependant how does this work on Macs,
> > the BSDs and MS Windows?
> If some OS can not balance its I/O well, that's their problem. Users
> will have to compensate by providing more I/O capacity. Or think of it
> this way -- Linux I/O scheduler can probably let the user get away
> with overcommitting their available I/O.

But the point of this patch is to make this scaling automatic.
If it is in fact just tuned for one person's system, it doesn't
make much sense to apply it. If it can scale the number and
run speed of commercial flagging and transcoding processes for
multiple people, then it becomes a very nice contribution.

> Networking can at least in theory be solved with the same principles
> using QoS settings on per-connection basis.

If you are volunteering to make MythTV configure QoS settings
for the 5-6 OSes MythTV runs on, you are a better man than I. :)

I think there must be a solution which doesn't require mucking
this much with all the different operating systems on which
MythTV runs.

> Although what's more
> likely to be a problem is that "network reads", which in many cases
> with Myth are in fact nfs/smb clients will cause disk access with the
> priority level of the nfs/smb daemon (and see above regarding nice
> levels..) -- to solve that either those daemons would have to be
> reprioritised or myth processes would in fact need to work as file
> servers (http on custom ports might do the trick, sort of like UPnP).

Classic priority inversion? I think the only way around this
without doing the scaling within MythTV would be to write a
MythTV disk elevator algorithm for the various operating systems
which gives transcoding and commercial flagging lower priority.
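Short of writing such an elevator, the closest existing knob is
the Linux-only I/O priority class (CFQ's "idle" class via ionice
from util-linux); a hypothetical wrapper would just prepend it to
the job's command line, with no equivalent on the other operating
systems MythTV supports:

```python
def idle_io_cmd(cmd):
    """Prefix cmd so it runs in the Linux 'idle' I/O class (class 3),
    where it only gets disk time no other process wants.

    Hypothetical sketch: requires util-linux ionice and the CFQ
    scheduler, so it does nothing for the cross-platform problem.
    """
    return ["ionice", "-c", "3"] + list(cmd)
```

Something like `subprocess.Popen(idle_io_cmd(["mythcommflag"]))`
would then keep flagging out of the recorder's way, on Linux only.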

There has to be a better way.

> Or you could solve it the way Internet is usually solved -- brute
> force and more capacity than is going to be needed for the job at hand

But transcoding and commercial flagging are not real-time
processes; we should be able to run them when recording/playback
is not happening, or better yet have them use only the
disk/cpu/network resources that are currently going unused.

I would think that monitoring the buffer fill of the recording
and playback processes would be a good enough metric to control
the throttling of transcode & commercial flagging processes.
Something simple could probably also control the number of
transcode and commercial flagging processes we start. Perhaps we
could keep a count of the processes using each resource, and of
the processes using that resource which caused problems in the
past, and use it to limit the number of processes we start up,
with throttling kicking in when an unexpected frontend connection
arrives or a recording is scheduled which begins before a running
transcode or commercial flagging process ends.
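A minimal sketch of the buffer-fill idea (the watermarks and
names are assumptions, not anything in MythTV): pause the
background jobs with SIGSTOP when the recorder's ring buffer gets
too full, and resume them with SIGCONT once it has drained.

```python
import os
import signal

# Illustrative watermarks -- not values taken from MythTV.
HIGH_WATER = 0.75  # buffer this full: background jobs are hurting us
LOW_WATER = 0.25   # drained back down: safe to resume them

def throttle_jobs(buffer_fill, job_pids, paused):
    """Return the new paused state after reacting to buffer_fill.

    buffer_fill: fraction (0..1) of the recording/playback ring
    buffer in use; job_pids: transcode/commflag child processes.
    """
    if buffer_fill > HIGH_WATER and not paused:
        for pid in job_pids:
            os.kill(pid, signal.SIGSTOP)  # pause, don't kill: work resumes
        return True
    if buffer_fill < LOW_WATER and paused:
        for pid in job_pids:
            os.kill(pid, signal.SIGCONT)
        return False
    return paused
```

The gap between the two watermarks gives hysteresis, so jobs
aren't stopped and restarted on every sample; the same state
could also feed the admission-control count above, refusing to
start a new job on a resource that has caused trouble before.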

OK, these two solutions aren't really simple once you fill in
the blanks and engineer them into a working system. But I think
together they could address all the issues, no matter which of
the big three bottlenecks is the problem.

-- Daniel



More information about the mythtv-dev mailing list