[mythtv] [mythtv-commits] Ticket #1404: Invalid file error at

Bob Cottingham bobnvic at gmail.com
Mon Feb 27 19:50:18 UTC 2006


On 2/27/06, f-myth-users at media.mit.edu <f-myth-users at media.mit.edu> wrote:
>    Date: Mon, 27 Feb 2006 02:05:54 -0500 (EST)
>    From: "Chris Pinkham" <cpinkham at bc2va.org>
>    Well, since the cron job that was running was a database backup, odds are
>    it was the recordedmarkup table being locked, so the recorder couldn't
>    write out the rest of the seektable information to the database.
>
>    > happens at any other time I suppose it may just cause some prebuffer
>    > pauses that it can recover from?  I changed the time for the cron job
>    > to 4:40am so that it will not likely occur at the time of a file
>    > change again. I am going to leave the FE watching livetv all night to
>    > see what happens at 4:40am.
>
>    You might get (un)lucky and it won't occur; it could just be the timing
>    between when the program changed and when the cron job was backing up a
>    particular table (probably recordedmarkup, since it's the largest table in
>    the database), so it might not happen at 4:40am.
>
> Seems to me that this would be easy enough to test deterministically.
> Either wait for a program transition and then run a mysqldump across
> it, or embed mysqldump in a loop that calls it repeatedly (either with
> no delay, or with a few seconds in between to let the system breathe)
> and see what glitches.  You could even play around with nice-ing it
> higher or lower than the default.
>
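
For instance, something along these lines would hammer it (untested sketch;
the credentials, output path, and delay are just placeholders):

  while true; do
      # placeholder credentials/path; vary or drop the nice level to compare
      nice -n 19 mysqldump -uroot -pxxxxx mythconverg > /tmp/mythconverg.sql
      sleep 5    # or remove the sleep for no breathing room
  done
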
> (I wonder if the behavior would differ based on whether mysqldump
> wrote directly to disk, or through gzip --best, or to /dev/null?
> The former would thrash the heads hardest; the middle might slow
> it down just enough that the DB isn't locked solid (or might extend
> the duration of a lock instead), and the latter would load the DB but
> involve no disk motion besides the DB itself.  Or maybe bzip2 instead
> of gzip, since it uses a -lot- more CPU to get that extra 5-10%...)
>
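
Concretely, those variants might look like this (output paths are placeholders):

  mysqldump -uroot -pxxxxx mythconverg > /backup/dump.sql         # straight to disk
  mysqldump -uroot -pxxxxx mythconverg | gzip --best > /backup/dump.sql.gz
  mysqldump -uroot -pxxxxx mythconverg | bzip2 > /backup/dump.sql.bz2
  mysqldump -uroot -pxxxxx mythconverg > /dev/null                # DB load only, no output I/O
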
> Another idea might be to just forcibly acquire a write lock on
> recordedmarkup with, e.g., LOCK TABLE and see how long it takes
> for the rest of Myth to explode. :)
>
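
Something like this from an interactive mysql session should do it (hold the
lock for as long as you like while watching the backend logs):

  mysql> USE mythconverg;
  mysql> LOCK TABLES recordedmarkup WRITE;
  (wait a minute or two, then)
  mysql> UNLOCK TABLES;
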
> P.S.  I -will- note that I did quite a bit of testing under 0.18.1
> to see what sorts of loads would break things, and didn't see any
> problems running mysqldump | gzip --best even when I had six SD tuners
> writing to the same disk as the DB and the gzip output file.  Nor have
> I seen problems copying many GB over a 100baseT (not gigabit!) NIC
> while similarly recording 6 SD streams; all on a typical Athlon 2800+.
> I just can't stress the disk hard enough to cause a problem, unless I
> try to delete many GB under ext3fs, which caused recording hiccups
> 'cause the FS was locked too long; after I ran that test, I switched
> to JFS.  Granted, all these tests were under 0.18 and have nothing to do
> directly with the OP's problem or version, but they do show that at
> least under those circumstances DB load didn't seem to be an issue.

Well, I had two frontends watching LiveTV while a scheduled show was
being recorded, and I ran

mysqldump -uroot -pxxxxx mythconverg > mythconverg.sql

This created a 131MB file, which totally hosed all of the LiveTV
recordings (not sure about the scheduled recording yet).  The MBE, NFS
file storage, and MySQL servers are all on the same machine.  That
machine has four drives: one drive for / and three 250GB drives in LVM
for /var (MythTV records to /var/mythtv).  The 131MB file was saved to
the OS drive.  The master backend logs showed:

2006-02-27 12:40:14.232 TFW, Error: Write() -- IOBOUND begin cnt(2048) free(2047)
2006-02-27 12:40:14.284 TFW, Error: Write() -- IOBOUND end

This was repeated five times in a row.  The mythfrontend logs showed
continuous prebuffer pauses from that point on (they never recovered).
Note that the mysqldump output wasn't even being written to the same
drive as the recordings, though the dump was reading from that drive
(the mythconverg database and the recordings both live under /var).
This makes it appear that my system simply can't handle that much I/O,
or at least not the way I have it configured.  I have an nForce2 system
with an AthlonXP 2500+ and 512MB RAM for the MBE server.  Any ideas on
how to prevent these IOBOUND errors when writing large files?  Using
'nice' made no difference.
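
One thing I haven't tried yet: since plain 'nice' only affects CPU
scheduling and not disk I/O, maybe compressing the dump on the fly (to cut
the bytes written) and, if the kernel's I/O scheduler supports per-process
I/O priorities, running it at idle priority would help.  Roughly:

  # gzip pipe cuts the write volume; ionice needs a CFQ-capable kernel
  ionice -c3 nice -n 19 sh -c \
      'mysqldump -uroot -pxxxxx mythconverg | gzip --best > mythconverg.sql.gz'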

Thanks,
Bob

