[mythtv-users] Mythtv 0.24 Acer Revo ALSA WriteAudio buffer underruns
pnetherwood at email.com
Sat Mar 5 17:35:17 UTC 2011
I still get the same problem in that mysqld goes to 100% CPU usage when a recording starts/stops. I think it's a bug in the backend (since 0.24) and I'll try to explain why in this email.
In my slow query log there is a query that takes over 3 seconds, and it occurs at the same time as the frontend pause and the mysqld CPU hogging. I've done a lot of MySQL tuning to make sure everything is cached in memory as much as possible, but that doesn't help.
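For anyone who wants to check their own server the same way, the slow query log can be enabled with something like the following in my.cnf (these are the MySQL 5.1+ option names, and the path and 1-second threshold are just examples, adjust to taste):

[mysqld]
slow_query_log      = 1
slow_query_log_file = /var/log/mysql/mysql-slow.log
long_query_time     = 1

After restarting mysqld, any statement taking longer than long_query_time seconds gets logged there.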
I've modified the query so that it can be run stand-alone (because the original creates temporary tables first). See below:
SELECT c.chanid, c.sourceid, p.starttime, p.endtime, p.title, p.subtitle, p.description, c.channum, c.callsign, c.name, oldrecduplicate, p.category, record.recpriority, record.dupin, recduplicate, findduplicate, record.type, record.recordid, p.starttime - INTERVAL record.startoffset minute AS recstartts, p.endtime + INTERVAL record.endoffset minute AS recendts, p.previouslyshown, record.recgroup, record.dupmethod, c.commmethod, capturecard.cardid, cardinput.cardinputid,p.seriesid,
p.programid, p.category_type, p.airdate, p.stars, p.originalairdate, record.inactive, record.parentid, (CASE record.type WHEN 6 THEN record.findid WHEN 9 THEN to_days(date_sub(p.starttime, interval time_format(record.findtime, '%H:%i') hour_minute)) WHEN 10 THEN floor((to_days(date_sub(p.starttime, interval time_format(record.findtime, '%H:%i') hour_minute)) - record.findday)/7) * 7 + record.findday WHEN 7 THEN record.findid ELSE 0 END) , record.playgroup, oldrecstatus.recstatus, oldrecstatus.reactivate, p.videoprop+0, p.subtitletypes+0, p.audioprop+0, record.storagegroup, capturecard.hostname, recordmatch.oldrecstatus, record.avg_delay, c.recpriority + cardinput.recpriority + (cardinput.cardinputid = record.prefinput) * 2 AS powerpriority
FROM recordmatch
INNER JOIN record ON (recordmatch.recordid = record.recordid)
INNER JOIN program AS p ON ( recordmatch.chanid = p.chanid AND recordmatch.starttime = p.starttime AND recordmatch.manualid = p.manualid )
INNER JOIN channel AS c ON ( c.chanid = p.chanid )
INNER JOIN cardinput ON (c.sourceid = cardinput.sourceid)
INNER JOIN capturecard ON (capturecard.cardid = cardinput.cardid) LEFT JOIN oldrecorded as oldrecstatus ON ( oldrecstatus.station = c.callsign AND oldrecstatus.starttime = p.starttime AND oldrecstatus.title = p.title )
WHERE p.endtime >= NOW() - INTERVAL 1 DAY ORDER BY record.recordid DESC;
I'd be very interested in how long the query takes to run on other people's machines. Mine takes over 3 seconds on average. When I do an explain plan on it you can see it's doing a full table scan on 'cardinput' and 'recordmatch'. The query joins on cardinput using sourceid, which is not part of the primary key and has no index. It also joins 'recordmatch' on 'chanid', 'starttime' and 'manualid', which are not part of the primary key or covered by any index.
+----+-------------+--------------+--------+---------------------------------------------------------+---------+---------+---------------------------------------------------------------------------------------------------+------+---------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+--------------+--------+---------------------------------------------------------+---------+---------+---------------------------------------------------------------------------------------------------+------+---------------------------------+
| 1 | SIMPLE | cardinput | ALL | NULL | NULL | NULL | NULL | 8 | Using temporary; Using filesort |
| 1 | SIMPLE | capturecard | eq_ref | PRIMARY | PRIMARY | 4 | mythconverg.cardinput.cardid | 1 | |
| 1 | SIMPLE | recordmatch | ALL | recordid | NULL | NULL | NULL | 1220 | Using join buffer |
| 1 | SIMPLE | record | eq_ref | PRIMARY | PRIMARY | 4 | mythconverg.recordmatch.recordid | 1 | |
| 1 | SIMPLE | c | eq_ref | PRIMARY,sourceid | PRIMARY | 4 | mythconverg.recordmatch.chanid | 1 | Using where |
| 1 | SIMPLE | p | eq_ref | PRIMARY,endtime,id_start_end,program_manualid,starttime | PRIMARY | 16 | mythconverg.recordmatch.chanid,mythconverg.recordmatch.starttime,mythconverg.recordmatch.manualid | 1 | Using where |
| 1 | SIMPLE | oldrecstatus | eq_ref | PRIMARY,title | PRIMARY | 456 | mythconverg.c.callsign,mythconverg.recordmatch.starttime,mythconverg.p.title | 1 | |
+----+-------------+--------------+--------+---------------------------------------------------------+---------+---------+---------------------------------------------------------------------------------------------------+------+---------------------------------+
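If the missing indexes really are the problem, adding them by hand ought to get rid of the two full table scans. This is an untested sketch; the index names are my own, and the column list for recordmatch is my guess from the join conditions in the query above:

ALTER TABLE cardinput ADD INDEX sourceid (sourceid);
ALTER TABLE recordmatch ADD INDEX chanid_starttime_manualid (chanid, starttime, manualid);

Re-running EXPLAIN afterwards should show 'ref' lookups instead of 'ALL' for those two tables, and would at least confirm whether the query time (rather than locking alone) is the real culprit.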
I do have 8 TV tuner cards (4 with multirec), which will make this query run slower than on a machine with only a few.
It seems likely to me that this query is locking up the main mythbackend thread, causing it to pause delivering data to the frontend, which is why we see the buffer underruns. The frontends are starved of data while the backend is stuck in this query. So there is probably a thread locking issue coupled with a slow query: the long duration of the query may simply be exposing a locking problem that isn't apparent when the query runs quickly.
This behaviour started with 0.24. I've had to rebuild my current machine, so I've seen the same problem on two completely different backend machines (same frontends), and I've had to rebuild the database, so I know it's not corruption.
I think this is a bug in the backend. I'd like to report it as such, but I'm hoping that one of the developers can have a quick look first, see if I'm on the right track, and help me submit a more focused bug report.