[mythtv-users] Slow MySQL query after delete

f-myth-users at media.mit.edu
Tue Sep 4 06:54:35 UTC 2007


    > Date: Mon, 3 Sep 2007 16:18:43 -0700 (PDT)
    > From: crs23 <pvr at groundhog.pair.com>

    > jack snodgrass-2 wrote:
    > > 
    > > On 9/2/07, crs23 <pvr at groundhog.pair.com> wrote:
    > >>
    > > 
    > > .. I'm not sure I see the issue you're having... you say that the query
    > > takes 35 - 40 seconds, but the one you show says:
    > > 
    > > "10 rows in set (0.02 sec)"
    > > 
    > > I can paste in that long-ass query you posted and it returns 68 rows in set
    > > (0.02 sec) for me too.....
    > > 
    > > where / when / how exactly are you seeing this query take 30+ seconds?

    > The query I gave was an 'EXPLAIN SELECT' not the actual 'SELECT', which is
    > why it was so fast -- it wasn't actually doing the select, just showing how
    > it would.  The actual SELECT occurs shortly after deleting a program or two. 
    > I turned on logging of slow queries in MySQL and pulled the query from
    > there.  I don't know where or why the backend performs the query, though it
    > shouldn't be hard to find.
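[For reference: on the MySQL versions current at the time (4.x/5.0), the
slow query log the poster mentions is enabled with a couple of my.cnf
lines.  The threshold and path below are illustrative, not the poster's
actual settings:]

```ini
[mysqld]
# Log any statement taking longer than 2 seconds (server default is 10).
long_query_time = 2
# Pre-5.1 syntax; 5.1+ uses slow_query_log / slow_query_log_file instead.
log-slow-queries = /var/log/mysql/mysql-slow.log
```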

I see the same behavior under 0.18.1---if I delete a few things in
succession from the FE, I can generally only delete 2-4 before the
backend gets busy and hangs me for most of a minute before I can
delete again.  This is on a database that's optimized very frequently,
with good hardware, and a filesystem that's got lots of free space.
It's not fragmentation per se, but MySQL having performance issues
(and it's not clear to me whether adding lots more RAM will help MySQL
or not, really).

What I'm assuming is going on---and what your enormous query is doing
---is that the scheduler is running on every deletion, trying to
decide if -this- deletion might allow it to record another episode
that it wouldn't otherwise have recorded---say, because you had a
limit on the number of recorded episodes allowed, or because you might
have done "delete and allow rerecord", etc.  In my case, none of these
apply, and I'd think the entire scheduler query could be optimized out
in such cases---but it might be difficult for the scheduler to know
this without simply rerunning the query.  Perhaps the query could be
amended to say "if -none- of these conditions are true, abort and
don't rerun the scheduler", though getting all the right conditions in
there would be the tricky part.
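[One way to picture that early-out---purely a sketch of the idea above,
not anything the scheduler actually does: run a cheap precheck against
the recording rules and skip the full scheduler query when no rule could
be affected by a deletion.  The column names follow MythTV's `record`
table, but the predicate itself is hypothetical and incomplete:]

```sql
-- Hypothetical precheck: does any rule exist that a deletion could affect?
-- If this returns 0, rerunning the scheduler after a delete changes nothing.
SELECT COUNT(*)
  FROM record
 WHERE maxepisodes > 0   -- rules with an episode limit
    OR maxnewest   > 0;  -- rules that expire the oldest episode to make room
-- (A real check would also have to notice "delete and allow rerecord",
--  which this sketch ignores.)
```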

Why does the query take so long?  Presumably because you have a lot
of "record on any channel" rules, and a large number of things in
oldrecorded, for starters.  The enormous amount of time for running
this query was discussed at length around early March of this year in
the context of fixing Myth not to hang the process that was reading
from ivtv while the scheduler was running---this caused massive
corruption and tens of seconds missing in recordings because the
GOP-writer couldn't write (hanging the entire thread) while it was
waiting on a lock held by the scheduler.  Patches produced at that time
for 0.18, 0.19, and 0.20/svn made the behavior substantially better,
but people still discussed why the
scheduler query was taking so long, in part because it has obvious bad
consequences for the UI, as you've seen.  (I haven't gone and dug up
the thread, but this is probably enough for someone else to find it
pretty fast; I was a participant in it if that helps.)

This is also why I tend to try -not- to have lots of "all channels"
rules (even though MT Dean tends to recommend them a lot), because
there was some evidence that they tended to make the scheduler queries
take even -longer-, and typically I know that most programs are
unlikely to surface on other channels---when I'm not sure, I use an
"all channels" rule anyway.

So how can you get around the deletion issue?  If you're running a
recent-enough release (I'm certainly not), you can probably move all
the recordings into a single playback? recording? group and then
delete them simultaneously; I know this has been discussed on the list
before.  That will cause only a single scheduler run after the mass
deletion, instead of forcing it to run after every single deletion.
It's what I'll be doing once I'm on a late-enough release, especially
since I have automation right now [written before Myth could do this]
that deletes in large groups and which has to wait 30 seconds (with
retrying) between deletions to let the scheduler catch up; otherwise
(a) it'll just wait anyway, and (b) the scheduler may crash if several
deletions are issued too fast---I hope that's only 0.18's lack of
thread safety, but I'll see.  Using a later release will allow me to do
this in a minute for -all- deletions instead of a minute -per-
deletion, and should lead to a lower probability of glitching
a recording, too.
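[As a sketch of what that mass move amounts to at the database level---
the frontend does this through its own menus, and editing the `recorded`
table directly while the backend is running is not something I'd
recommend; the group name and title below are made up for illustration:]

```sql
-- Hypothetical: reassign a show's recordings to a throwaway recording
-- group, so they can all be deleted in one pass with only a single
-- scheduler run afterwards.
UPDATE recorded
   SET recgroup = 'MassDelete'   -- invented group name
 WHERE title = 'Some Show';      -- placeholder title
```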

