[mythtv-users] Backend with storage group on raid array

Andy Burns mythtv.lists at burns.me.uk
Sat Oct 15 23:27:10 UTC 2011


On 15 October 2011 18:12, Raymond Wagner <raymond at wagnerrp.com> wrote:

> This problem has been discussed before at length on this mailing list
> and other forums.

Yeah, sorry for not searching before posting; I thought most people
would have dedicated myth backends and therefore use several
individual disks as multiple storage groups, rather than RAID arrays.

>  When MythTV records, it runs a loop that flushes the
> data to the platter roughly once per second per recording.  This
> prevents the buffer from growing so large that, when the OS decides
> to flush on its own, it locks up the system for several seconds,
> causing a loss of new capture data.

Suppose that makes sense.
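If I've understood that right, the pattern is roughly this (my own
Python sketch of the idea, not MythTV's actual code, which is C++;
the chunk size and interval here are just illustrative):

    import os, time

    def record(source, path, flush_interval=1.0):
        """Append captured data, forcing it to the platter every interval."""
        fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_APPEND)
        last_flush = time.monotonic()
        try:
            while True:
                chunk = source.read(65536)  # whatever the capture card hands over
                if not chunk:
                    break
                os.write(fd, chunk)
                if time.monotonic() - last_flush >= flush_interval:
                    os.fsync(fd)  # small, frequent flushes beat one huge OS flush
                    last_flush = time.monotonic()
        finally:
            os.fsync(fd)
            os.close(fd)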

> Each time it flushes, it has to seek to the data location, write the
> data, and then seek to a handful of metadata clusters on the disk to
> record what it just did.  Should your free space be fragmented, and the
> filesystem unable to write the full 2MB or so you have recorded in one
> shot, it will have to seek to a new location and write out the remaining
> data.  If you are recording multiple shows, this process will repeat
> multiple times, resulting in dozens of possible seeks, each eating on
> average 5-10ms.  Reading at playback speed won't be much of an issue,
> unless you have the filesystem mounted to record access times.

I have relatime; I could switch to noatime ...
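For the record, that's a one-word change in fstab, something like
this (device and mount point are hypothetical):

    # /etc/fstab -- recordings filesystem; noatime stops playback
    # from generating an atime write for every read
    /dev/md0  /var/lib/mythtv  xfs  defaults,noatime  0  2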

> Still though, a single 750GB 7200RPM disk should bottom out better than
> 50MB/s on its outer edge, and even if half of its time is consumed with
> seeking, you still have far more than sufficient bandwidth left to
> record the 8MB/s of four HD recordings.  The problem is that you're
> using RAID5.

It could be worse than that: what I actually have is md RAID5 with
LVM, and the LV is then exported to a Xen virtual machine on the same
server, which acts as my backend with the tuners. I mention the Xen
up front in case it clouds the issue, though I don't think it's
adding significant overhead in this case.
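For what it's worth, the arithmetic in Raymond's bandwidth point
above checks out; a quick back-of-envelope using the quoted figures:

    disk_bw = 50.0         # MB/s, conservative outer-edge rate, 750GB 7200RPM
    seek_loss = 0.5        # pessimistic: half the time eaten by seeking
    usable = disk_bw * (1 - seek_loss)
    demand = 4 * 2.0       # four HD recordings at ~2 MB/s apiece
    print(usable, demand)  # 25.0 vs 8.0 -- one plain disk has headroom to spare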

> First, since the disks are striped, they all function in
> lock step, meaning they all function as slow as the slowest drive.

These are all identical; I don't like arrays of mixed disks for that reason.

> Second, because of the use of parity, it suffers what is called the
> 'write hole'.  Any write that is not exactly on stripe boundaries will
> require all the existing data to be read, and new parity to be
> calculated, before it can be written out.
>  Battery backed hardware RAID
> can fake this, telling the system it has written to disk while
> internally re-ordering stuff for better efficiency, because the battery
> backup ensures any writes that never made it onto the platter can be
> replayed when the system is turned back on.

I do have a SmartArray P400 card I could use instead (I'd have to buy
a replacement battery pack), and I'm sure it would work fine, but the
reason I'm not using it at the moment is that H/W RAID is great in a
commercial setup where it can be covered by spares or a maintenance
contract; in a home setup mdraid works regardless of what you
connect the disks to ...
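Thinking that read-modify-write cycle through, the parity update for
a sub-stripe write amounts to this (a toy Python illustration, not
md's actual implementation):

    def rmw_parity(old_data: bytes, new_data: bytes, old_parity: bytes) -> bytes:
        """RAID5 partial-stripe write: new parity = old parity XOR
        old data XOR new data, so old data and parity must be read first."""
        assert len(old_data) == len(new_data) == len(old_parity)
        return bytes(p ^ o ^ n
                     for p, o, n in zip(old_parity, old_data, new_data))

    # every small write costs two reads (old data, old parity) plus two
    # writes (new data, new parity): four I/Os where a plain disk does one
    new_parity = rmw_parity(b"\x00" * 8, b"\xff" * 8, b"\xaa" * 8)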

> ZFS can do the same with a
> non-volatile flash drive used as a separate intent log (ZIL/SLOG).

Sounds like a good way to wear out a flash disk - concentrate all your
write traffic through it!

> In MDADM,
> you're just screwed, resulting in write performance far less than that
> of a single drive.

Writing 4GB in 43 seconds is a sustained 95MB/s, so it must be the
forced syncs that hurt performance.

> You don't want to put recordings on your boot disk, since that will
> merely include MySQL and all its seeking issues into your problems.

I realised that would be out of the frying pan and into the fire.

> You don't want to shrink your existing array due to all the hassle that entails.

I could live with doing that.

>  The quickest, easiest, and likely best solution will be to
> simply pick up one or two extra hard drives

This machine already has 10 drives, which is why possibly removing
2x 750GB from the array, swapping them with the 2x 250GB, and then
using the 250GB ones as a non-RAID stripe just for Live TV seems
worth a thought.
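Something along these lines, I suppose (device names purely
hypothetical):

    # build a 2-disk RAID0 from the freed 250GB drives, then give it
    # to MythTV as a LiveTV storage group
    mdadm --create /dev/md1 --level=0 --raid-devices=2 /dev/sdi /dev/sdj
    mkfs.xfs /dev/md1
    mount -o noatime /dev/md1 /var/lib/mythtv/livetv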

> connect those
> independently, and record to those instead.  If you've got a couple old
> 250GB+ drives laying around, those would work great.  If you want to
> retain the redundancy of your RAID array, add your array as a new
> storage group.  Periodically move the files off your recording drive
> onto the array, and flip the name of the storage group MythTV thinks the
> recordings are in, in the database.

Thanks ...
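If I do go that way, my reading of "flip the name in the database"
is an update to the recorded table's storagegroup column, roughly
like this (group names and credentials are made up; assumes the
stock mythconverg schema):

    import MySQLdb  # the usual Python bindings for MySQL

    db = MySQLdb.connect(host="localhost", user="mythtv",
                         passwd="mythtv", db="mythconverg")
    cur = db.cursor()
    # after the files have been moved onto the array, retarget the group
    cur.execute("UPDATE recorded SET storagegroup = %s "
                "WHERE storagegroup = %s",
                ("RaidGroup", "ScratchGroup"))
    db.commit()
    db.close()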

