[mythtv-users] Very large logs seen in mythconverg database

Michael T. Dean mtdean at thirdcontact.com
Tue Feb 19 22:46:36 UTC 2013


On 02/19/2013 04:15 PM, Dan Bernard wrote:
> On 02/19/2013 01:12 PM, Dan Bernard wrote:
>>> I am running mythtv 0.25.2, and have had it running for maybe 6 months with
>>> up to 15 recordings per week, and not using any schedules (all manual), on
>>> a single tuner (hdhomerun).  I only keep recordings for 10 days, so around
>>> 30 show up in the list normally.
>>>
>>> I noticed that my available hard drive space had dropped somewhat, so I
>>> checked the mysql/mythconverg database to see how much space that was
>>> taking up.  Here are the largest 4 files:
>>> -rw-rw---- 1 mysql mysql 3.1M Feb 18 23:35 recordedseek.MYD
>>> -rw-rw---- 1 mysql mysql 3.1M Feb 18 23:35 recordedseek.MYI
>>> -rw-rw---- 1 mysql mysql 621M Feb 19 11:53 logging.MYI
>>> -rw-rw---- 1 mysql mysql 1.7G Feb 19 11:53 logging.MYD
>>>
>>> Why are those two logging files taking up over 2GB?  I'm not recording a
>>> lot of shows, so I'm surprised at the large size.  I have a very vanilla
>>> install on Linux Mint 13.  Maybe that's normal, but I wanted to ask if
>>> anybody considers that strange behavior.  I can't imagine that keeping
>>> >2GB of logs is necessary.
>>>
>>> Thanks for your help!
>> Please post the output of the following commands (which you should be
>> able to paste into the system shell--not directly into a mysql terminal):
>> mysql -umythtv -p mythconverg -e 'SHOW CREATE TABLE logging\G'
>> mysql -umythtv -p mythconverg -e 'SHOW TABLE STATUS LIKE 'logging'\G'
>> mysql -umythtv -p mythconverg -e 'SELECT COUNT(*) FROM logging\G'
>> mysql -umythtv -p mythconverg -e 'SELECT MAX(id) FROM logging\G'
>> I'm pretty sure I know what's going on, but this will allow me to
>> confirm the theory.
> Thanks for your response.  Here is the output from those commands (must be
> a syntax problem with the 2nd one):

Thanks.

> $ mysql -umythtv -p mythconverg -e 'SHOW CREATE TABLE logging\G'
> Enter password:
> *************************** 1. row ***************************
>         Table: logging
> Create Table: CREATE TABLE `logging` (
>    `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
>    `host` varchar(64) NOT NULL DEFAULT '',
>    `application` varchar(64) NOT NULL DEFAULT '',
>    `pid` int(11) NOT NULL DEFAULT '0',
>    `tid` int(11) NOT NULL DEFAULT '0',
>    `thread` varchar(64) NOT NULL DEFAULT '',
>    `filename` varchar(255) NOT NULL DEFAULT '',
>    `line` int(11) NOT NULL DEFAULT '0',
>    `function` varchar(255) NOT NULL DEFAULT '',
>    `msgtime` datetime NOT NULL,
>    `level` int(11) NOT NULL DEFAULT '0',
>    `message` varchar(2048) NOT NULL,
>    PRIMARY KEY (`id`),
>    KEY `host` (`host`,`application`,`pid`,`msgtime`),
>    KEY `msgtime` (`msgtime`),
>    KEY `level` (`level`)
> ) ENGINE=MyISAM AUTO_INCREMENT=20518438 DEFAULT CHARSET=utf8

OK, starting with this one, what we see here is good.  Your table is 
using the MyISAM storage engine--which, right now, is the only supported 
storage engine for MythTV.  If you used a current version of MythTV 
(0.25 or higher) to create your database schema, MythTV will have used 
the MyISAM engine, even if your database server is configured to use the 
InnoDB engine by default (as current MySQL is).  Therefore, there's no 
problem for you.

However, users who are on the InnoDB storage engine can /only/ reduce 
the size of their logging (or any other) table by more drastic 
means--for example, dropping the database and restoring a backup, or 
doing hacks like altering the storage engine to MyISAM and then (if you 
insist on having a broken/unsupported MythTV schema) back to InnoDB.  
Users on the InnoDB engine should back up their database ( 
http://www.mythtv.org/wiki/Database_Backup_and_Restore ), then do a 
partial restore of the database backup ( 
http://www.mythtv.org/wiki/Database_Backup_and_Restore#Partial_restore_of_a_backup 
) to fix the broken schema (after which they'll need to reconfigure 
everything).
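For InnoDB users, the backup/partial-restore procedure from the wiki 
pages above boils down to something like the following.  The script 
names and options come from the wiki; the install paths are 
assumptions--check where your packager put the scripts:

```shell
# Back up the existing database (mythconverg_backup.pl ships with MythTV;
# see the Database_Backup_and_Restore wiki page for configuration).
/usr/share/mythtv/mythconverg_backup.pl

# Drop and recreate the database, then do a *partial* restore, which
# recreates the schema (with the correct MyISAM engine) and reloads only
# the recording-related data.  See the wiki for the exact invocation.
/usr/share/mythtv/mythconverg_restore.pl --drop_database --create_database
/usr/share/mythtv/mythconverg_restore.pl --partial_restore \
    --filename mythconverg-XXXX-XXXXXXXXXXXXXX.sql.gz
```

(Substitute the actual filename of your backup for the placeholder.)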

> $ mysql -umythtv -p mythconverg -e 'SHOW TABLE STATUS LIKE 'logging'\G'
> Enter password:
> ERROR 1064 (42000) at line 1: You have an error in your SQL syntax; check
> the manual that corresponds to your MySQL server version for the right
> syntax to use near 'logging' at line 1

Oops, that should have been:

mysql -umythtv -p mythconverg -e 'SHOW TABLE STATUS LIKE "logging"\G'

(quote issues).  I was hoping to get this output because it would 
show--in addition to repeating the storage engine information--some size 
information...  Something to the effect of:

$ mysql -umythtv -p mythconverg -e 'SHOW TABLE STATUS LIKE "logging"\G'
*************************** 1. row ***************************
            Name: logging
          Engine: MyISAM
         Version: 10
      Row_format: Fixed
            Rows: 16952465
  Avg_row_length: 25
     Data_length: 423811625
Max_data_length: 7036874417766399
    Index_length: 387863552
       Data_free: 0
  Auto_increment: NULL
     Create_time: 2012-03-29 18:42:37
     Update_time: 2013-02-19 17:20:03
      Check_time: 2013-02-19 04:39:12
       Collation: utf8_general_ci
        Checksum: NULL
  Create_options:
         Comment:
1 row in set (0.00 sec)

where the Data_length shows the size of the data file (the .MYD) on disk 
and the Index_length shows the size of the index file (the .MYI) on 
disk.  Obviously, yours would show different information.
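You can cross-check those numbers against the files themselves.  The 
data directory path is an assumption--it's /var/lib/mysql on most Linux 
distributions:

```shell
# The .MYD file size should match Data_length; the .MYI, Index_length.
ls -l /var/lib/mysql/mythconverg/logging.MY?
```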

Note, also, that my output shows a Check_time of early this morning.  
That corresponds to the time when my system runs a daily 
optimize_mythdb.pl cron job--a time I chose because of the extreme 
unlikelihood of a recording occurring at that time.

> $ mysql -umythtv -p mythconverg -e 'SELECT COUNT(*) FROM logging\G'
> Enter password:
> *************************** 1. row ***************************
> COUNT(*): 28369

OK, this is also good news.  It shows that the housekeeper is 
successfully cleaning up your logging table--so there's very little 
actual data in there.  That means almost all of the space taken up by 
the data file and index is "old" data that has been deleted but whose 
space hasn't yet been reclaimed (so it will be gone once you optimize 
the table).
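If you're curious, you can also see the housekeeper's retention window 
for yourself by checking the oldest and newest remaining log rows (just 
a suggestion--not needed for the fix):

```shell
mysql -umythtv -p mythconverg \
    -e 'SELECT MIN(msgtime), MAX(msgtime) FROM logging\G'
```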

> $ mysql -umythtv -p mythconverg -e 'SELECT MAX(id) FROM logging\G'
> Enter password:
> *************************** 1. row ***************************
> MAX(id): 20518438
>

And this just shows that in the course of running your system, you've 
had 20M rows/log messages written to the table.  This is primarily just 
something I wondered about--i.e. your 1.7GB file divided by 20M rows 
works out to about 90B/row.  It's actually not useful for diagnosing the 
issue, but it lets us predict that your data file should only be around 
2.4MiB (the 28369 remaining rows at ~90B each) after you shrink it.
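The back-of-the-envelope math, for anyone following along:

```shell
# ~90 bytes/row (1.7GB / ~20.5M rows), times the 28369 rows that remain:
awk 'BEGIN { printf "%.1f MiB\n", 28369 * 90 / 1048576 }'
# prints "2.4 MiB"
```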

So, you--and anyone else in the same situation--

1) are using the MyISAM storage engine for logging,
2) have a reasonable number of rows in logging (showing that the 
housekeeper is still cleaning it up), and
3) have an old Check_time (I'm guessing you do).

i.e. your schema is in good shape--you simply need to do a little 
database maintenance.

The recommended way to do so is to run the optimize_mythdb.pl script.  
It's available at:

https://github.com/MythTV/mythtv/raw/master/mythtv/contrib/maintenance/optimize_mythdb.pl

but should have also been installed by your packager.  Once you have 
that script, you can run it when MythTV is extremely idle.

What exactly do I mean by "extremely idle"?  Ideally it means MythTV is 
not using the database at all--none of the MythTV applications are doing 
anything (including mythbackend, mythcommflag, mythtranscode, 
mythpreviewgen, mythfrontend, ...).  In practice, you'll probably be OK 
if something minor is going on, but running optimize_mythdb.pl while a 
recording is in progress could easily cause recording 
problems--including broken recording files.  This is why I chose to run 
a cron job at 4:38am (it takes about a minute to run on my database, 
thus the minute-later Check_time above).

I run a daily cron job, but that's likely overkill--unless, of course, 
you hard crash your system or run out of disk space on the file system 
with the MySQL data files frequently.  Even though I don't, it's always 
nice to have a cron job taking care of things for me in case I miss a 
crash and don't run optimize_mythdb.pl.  However, if you're only running 
optimize_mythdb.pl for the purpose of cleaning up space usage, weekly or 
monthly jobs are probably fine.
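As a concrete sketch, a weekly cron entry might look like this--the 
script path is an assumption, so use wherever your packager installed 
optimize_mythdb.pl:

```shell
# m  h  dom mon dow  command
# Run every Monday at 4:38am, when a recording is very unlikely.
38 4 * * 1  /usr/share/doc/mythtv-backend/contrib/optimize_mythdb.pl
```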

So, why don't we just automatically run optimize_mythdb.pl (or its 
equivalent)?  Well, MythTV has all the code to do so (and it's actually 
available through the backend web server at http://<hostname>:6544/ , 
which you can use as another option instead of optimize_mythdb.pl).  The
reason we don't run it automatically, however, is because running 
optimize_mythdb.pl (or its equivalent) is dangerous--and can result in 
the complete loss of a table or tables or even the database.  In the 
event that your system or the MySQL server crashes when you're running 
optimize_mythdb.pl, it's *critical* that you run optimize_mythdb.pl (or 
its equivalent) before you restart any MythTV applications--i.e. before 
/anything/ accesses the mythconverg database (that's mythbackend (master 
or remote backends), frontends, mythcommflag, mythtranscode, 
mythpreviewgen, ...).  Technically, even running optimize_mythdb.pl in 
this situation is a bad idea--because it accesses the settings table 
before doing optimization (so you could, in theory, lose the settings 
table due to running it).  The best thing to do is to manually run a 
mysqlcheck on the database if the system or mysql server crashes when 
running optimize_mythdb.pl.
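If you do hit a crash mid-optimization, the manual check/repair would 
look something like this (mysqlcheck is the standard MySQL client tool 
for this; run it before starting any MythTV application):

```shell
# Check all tables in mythconverg and repair any that report problems.
# --auto-repair only repairs tables that the check finds to be broken.
mysqlcheck -umythtv -p --check --auto-repair mythconverg
```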

So, if it's so dangerous, why do I run it on a daily cron job?  Well, 
because the chance of a critical failure is small (well, it was for me 
until I just said that--tempting fate), and because I have a database 
backup plan in place (meaning I'll have a relatively recent backup I 
could restore if things go south), I've decided that the convenience is 
worth the risk.  If you don't agree, feel free to manually run 
optimize_mythdb.pl every once in a while--and you may even want to shut 
down all MythTV applications when doing so--so that you can notice and 
fix any issues if you have a crash.

So, after running optimize_mythdb.pl, I have a feeling you'll see a 
*much* smaller data and index file.

Note, too, that you should run optimize_mythdb.pl, since there's an 
additional command--besides the one mentioned earlier in the 
thread--necessary to properly shrink the index file.
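For the curious, my reading of the script is that it issues roughly the 
following per table, REPAIR being the step that rebuilds (and shrinks) 
the index file--but prefer running the script itself rather than typing 
these by hand:

```shell
# Sketch of what optimize_mythdb.pl does for each table (my reading of
# the script; the actual behavior may differ--run the script instead):
mysql -umythtv -p mythconverg -e \
    'REPAIR TABLE logging; OPTIMIZE TABLE logging; ANALYZE TABLE logging;'
```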

Mike


More information about the mythtv-users mailing list