[mythtv-users] 0.27: Stalls and other niggles

Anthony Giggins seven at seven.dorksville.net
Thu Jan 16 22:20:22 UTC 2014


On 17 January 2014 06:38, Mike Thomas <mt3 at pfw.demon.co.uk> wrote:

> On Thu, 16 Jan 2014 13:27:00 -0500
> "Michael T. Dean" <mtdean at thirdcontact.com> wrote:
> > On 01/16/2014 11:32 AM, Mike Thomas wrote:
> > > What I have noticed is mysqld almost always seems to be the process
> > > doing the most I/O at the moment of the stall, but is rarely doing
> > > any I/O at other times. It is almost as if mysqld is batching
> > > updates to its tables and logs.
> >
> > Sounds like you're storing your MySQL data on a file system with
> > barriers enabled, so mysqld writes data to disk and the file system
> > barriers cause it to block until all data is physically written to
> > disk platters (not just to disk cache)--which takes a long time.
> > During this time, MythTV (and everything else) is unable to access
> > required data in MySQL.
> >
> > The proper fix is /not/ just disabling barriers, but rather making it
> > so that data can be written safely--using barriers--fast enough to
> > avoid impacting system services.  There are many things you can do
> > to fix this, each with its own advantages/disadvantages (and costs),
> > but before going there, I'd recommend just remounting your
> > MySQL-data-containing file system with barriers disabled to see if
> > this is, in fact, the problem you need to fix.  If so, then come back
> > to find the best fix for you.
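> >
> > For the test, something along these lines should do it (the mount
> > point below is just an example; use whichever file system actually
> > holds your MySQL data directory):
> >
> >     mount -o remount,nobarrier /var/lib/mysql
> >
> > On ext4, "nobarrier" and "barrier=0" are equivalent.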
>
> Dear Mike,
>
> Thank you for your suggestion. I remounted the fs containing mysql with
> the options I normally use for big compile jobs:
>
> /dev/mapper/vglocal-home00 on /export/home00 type ext4
> (rw,noatime,nosuid,barrier=0,journal_async_commit,data=writeback)
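>
> (The matching /etc/fstab entry, should I decide to keep this layout,
> would be roughly:
>
>     /dev/mapper/vglocal-home00  /export/home00  ext4  noatime,nosuid,barrier=0,journal_async_commit,data=writeback  0  2
>
> but for now it's just a remount.)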
>
> This setting normally doubles performance on long compile jobs, and it
> made a difference to MythTV too. The stalling went away completely, and
> playback no longer aborted when the recording stopped, although it did
> pause for a split second at that point. I noticed that the backend
> frequently spat out messages like this:
>
> 2014-01-16 19:04:40.367312 W [8290/8435] ProcessRequest
> ringbuffer.cpp:1035 (WaitForReadsAllowed) -
> RingBuf(/export/home50/video/1003_20140116185400.mpg): Taking too long
> to be allowed to read..
>
> Live TV also sprang into life (with the above messages), but changing
> channel with the guide was a bit odd: the picture returned to that of
> the previous channel and sat there for several seconds whilst
> mythbackend spewed:
>
> 2014-01-16 19:17:32.674910 I [8290/8813] ProcessRequest
> ringbuffer.cpp:1098 (WaitForAvail) -
> RingBuf(/export/home50/video/1010_20140116191706.mpg): Waited 4.0
> seconds for data to become available... 816100 < 917504
>
> and then it worked just fine. I suspect my video adapter buffers are
> huge (one of my tweaks for test purposes). Clearly the software wants
> to work, but I've yet to find a satisfactory database and disc layout.
>
> I configured the test database as a carbon copy of my MySQL 5.1
> database for MythTV 0.22. That runs from a file system mounted like so:
>
> /dev/mapper/vglocal-home92 on /export/home92 type ext4
> (rw,barrier=1,commit=5)
>
> and mysqld.cnf contains nothing significant apart from :
>
> default-storage-engine=myisam
> max-allowed-packet=1M
> max-binlog-size=16M
> safe-user-create
> sync-binlog=1
>
> and everything else is left at the system defaults. I use no sysctls or
> anything fancy, and it's worked a treat ever since.
>
> For MythTV 0.27 I've switched from MySQL 5.1 to MySQL 5.6 and I imagine
> the mysqld.cnf needs some changes. I'd like not to sacrifice recovery
> by degrading durability. I wonder if you might care to share your
> mysqld.cnf?
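>
> As a starting point I've pencilled in something like this for 5.6 (only
> guesses on my part, and assuming the MythTV tables end up on InnoDB):
>
>     innodb_flush_log_at_trx_commit=1   # full durability: flush and sync on every commit
>     sync_binlog=1                      # likewise for the binary log
>     innodb_flush_method=O_DIRECT       # avoid double-buffering through the page cache
>     innodb_buffer_pool_size=256M       # sized to hold the mythconverg schema comfortably
>     max_allowed_packet=16M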
>
> Thank you for your help.
>
> Regards,
>
> Mike.
>

Further to this, I recently switched to the deadline I/O scheduler, as I
was finding that, on a remote frontend, playback of recordings and videos
would exit back to the menu while I was running transcode jobs on the
backend.
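In case it's useful, the switch itself is just (sda here is only an
example; substitute whichever disk holds the recordings):

    echo deadline > /sys/block/sda/queue/scheduler

and adding elevator=deadline to the kernel command line makes it stick
across reboots.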

The other frontend, which runs on the backend itself, would normally play
back fine while transcode jobs were running; however, it was prone to
pausing when running back-to-back transcodes (max=2 jobs).

I may now increase the maximum number of jobs to either 3 or 4 to match
the core count on the backend, but 2 seems a rather safe value.
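If memory serves, the current value can also be checked straight from the
database (the usual place to change it being mythtv-setup):

    mysql -u mythtv -p mythconverg -e \
      "SELECT value, data, hostname FROM settings WHERE value = 'JobQueueMaxSimultaneousJobs';"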

Cheers,

Anthony