[mythtv-users] Better listings for Schedules Direct users. Free! (was Re: another scheduling strangeness/question)

Michael T. Dean mtdean at thirdcontact.com
Wed Sep 1 20:07:03 UTC 2010


  On 09/01/2010 11:45 AM, Ozzy Lash wrote:
> On Wed, Sep 1, 2010 at 1:16 AM, Michael T. Dean wrote:
>>   On 09/01/2010 01:39 AM, Ozzy Lash wrote:
>>> I just tried it on a newish AMD quad core system with 4 Gig of RAM
>>> thinking "I don't have a resource limited system".  While I was doing
>>> a single recording over firewire and simultaneous commercial flagging,
>>> the run took a little over 45 minutes (only about 2.5 minutes of it
>>> downloading the data).  I feel so humbled!  I have 2 sources, one for
>>> clear QAM which
>>> only has 50 channels or so (maybe less) and one for firewire which has
>>> a few hundred (actually it says 502 in the mythfilldatabase output).
>>> Clearing the data for source 1 for all 14 days took about 3 minutes,
>>> and clearing the data for the firewire source took about 30 minutes.
>>> Does this point to something I need to tune in my database setup?  I
>>> have some tweaks to my mysql settings that were suggested a long time
>>> ago on this list if you had a lot of memory (probably 2 gig back then)
>>>   Here they are:
>> It's possible that the slow DB update was primarily due to DB locking caused
>> by the fact that the database was in use while recording.  It's also
>> possible that the lineup--550 channels--may just be asking a lot of even your
>> system.
>>
>> If the run didn't cause any problems with your recordings, you can continue
>> to use --dd-grab-all, even if it does take a long time to complete.
>>
>> Regardless, if you decide not to use it with 0.23-fixes, you may want to try
>> again when you upgrade to 0.24 (after it's released).  It will have some
>> optimizations that may make it much less resource intensive, even for a
>> 550-channel system.  In truth, I expect with 0.24, your --dd-grab-all run
>> time will be almost the same as a run time without that argument.
>>
>> As far as the DB optimizations go, I'll leave it to others to help.  The one
>> thing I will say, however, is that having the database's binary data files
>> on the same file system that you're using to record could have a huge impact
>> on performance.  In truth, the best setup puts the database on a separate
>> spindle.
> The DB is on a separate disk.  Looking at top during the run,
> mythcommflag was on top, followed by mythbackend, followed by mysqld
> (at least most of the time, occasionally I would see mythfilldatabase
> peek onto the first page, but not very often).  I don't think the
> memory usage was really high or anything, but I'll have to check
> again.  I'll probably wait until 0.24 (I'm running 0.23 fixes from the
> debian multimedia repository on debian unstable) to put it in
> production, but I'll try to give it another shot on an idle system,
> and maybe set up a recording to start during a run to see what
> happens.

This sounds a lot like the MySQL behavior you'd see if your processor 
were scaled to its lowest frequency for the entire run.  Might be worth 
another test or two--run once today (maybe while recording something you 
don't care about) and watch the CPU frequency, and if it stays low, run 
again tomorrow after telling the CPU to go to full speed.  If that makes 
it run better, then just script it to scale the frequency up, call 
mythfilldatabase --dd-grab-all, then scale back down, and set the script 
as your mythfilldatabase program.
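
As a sketch, such a wrapper could look like the following.  The sysfs 
path and the governor names ("performance", "ondemand") are assumptions 
here--check what your kernel's cpufreq driver actually exposes before 
using it:

```shell
#!/bin/sh
# Sketch of a wrapper to set as the mythfilldatabase program.
# Assumption: your kernel exposes cpufreq governors under this sysfs
# path; adjust CPUFREQ_BASE and the governor names for your system.
CPUFREQ_BASE="${CPUFREQ_BASE:-/sys/devices/system/cpu}"

set_governor() {
    # Write the requested governor to every CPU that allows it.
    for gov in "$CPUFREQ_BASE"/cpu*/cpufreq/scaling_governor; do
        [ -w "$gov" ] && echo "$1" > "$gov"
    done
    return 0
}

set_governor performance                  # force full speed
command -v mythfilldatabase >/dev/null 2>&1 &&
    mythfilldatabase --dd-grab-all        # the long listings run
set_governor ondemand                     # back to power saving
```

Run it as root (the governor files are only writable by root), and point 
the "mythfilldatabase program" setting at the script instead of the bare 
binary.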

Also, I highly recommend running optimize_mythdb.pl on a daily basis.  
It may not help /that/ much, but it shouldn't hurt.
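
The easiest way to make it daily is a cron entry.  The install path 
below is only a guess--distros put the contrib scripts in different 
places, so find yours first:

```
# Hypothetical crontab line: optimize the MythTV database nightly at 04:30.
# The script's location varies by distro; locate your copy with e.g.
#   find / -name optimize_mythdb.pl
30 4 * * *  /usr/share/mythtv/contrib/optimize_mythdb.pl
```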

Mike
