[mythtv-users] RAID: full disk or partition (was: Question re: available SATA ports and linux software RAID)

John Drescher drescherjm at gmail.com
Sun Apr 24 01:29:58 UTC 2011

On Sat, Apr 23, 2011 at 7:31 PM, Jean-Yves Avenard <jyavenard at gmail.com> wrote:
> On 13 April 2011 01:13, Robin Hill <myth at robinhill.me.uk> wrote:
>> No, the stripe size must be a multiple of the FS block size - this is 1,
>> 2 or 4k for ext2/3/4 (depending on FS size) and defaults to 4k for XFS
>> (generally this maxes at the memory pagesize, which is 4k for
>> x86/x86-64). So making the array chunk size a multiple of 4k (default is
>> 512k for current mdadm - older versions used 64k I think) will mean the
>> stripe width is irrelevant (so powers of 2 don't come into it at all).
>> It's then up to the md driver to try to write complete stripes
>> (providing enough sequential data is available).
>> In other words, this should be working optimally out of the box, and
>> there's little you can do to help/hinder it (increasing the
>> stripe_cache_size for RAID5/6 can help, but at the cost of increased
>> memory usage).
>> The other issue (that Ian seems to have had previously) is that
>> partitioned arrays need to be arranged so that partitions fall on stripe
>> boundaries. I wouldn't use a partitioned array anyway - partitioning
>> first, then making multiple arrays seems more sensible to me.
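
As an aside, the chunk size is fixed when an array is created, while
the stripe cache Robin mentions can be tuned at runtime through
sysfs. A rough sketch (device names and values here are illustrative,
not from the thread):

  # Chunk size is set at creation time; current mdadm defaults to
  # 512k, but it can be given explicitly:
  mdadm --create /dev/md0 --level=5 --raid-devices=4 --chunk=512 /dev/sd[abcd]1

  # The RAID5/6 stripe cache is counted in pages (default 256).
  # Raising it can help throughput, at a memory cost of roughly
  # size * 4k * number of member disks:
  cat /sys/block/md0/md/stripe_cache_size
  echo 4096 > /sys/block/md0/md/stripe_cache_size

The sysfs setting does not persist across reboots, so it usually ends
up in a boot script.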
> So...
> When adding a disk to a RAID array, would you use the full disk
> (e.g. /dev/sda) or a partition (e.g. /dev/sda1)?
> Every single example of mdadm usage seems to use partitions rather
> than a full disk.

At work, where I have 12 to 15 Linux software RAID arrays, I have
moved to using only partitions as RAID members. I believe this is
better for management, and I usually have more than one RAID array
using the same disks: RAID1 for /boot, RAID5/6 for the OS, and
RAID5/6 for data (usually with LVM on top so I can subdivide the
data).
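
A minimal sketch of that layout, assuming four identically
partitioned disks (device names, levels, and sizes are illustrative):

  # Each disk carries three partitions:
  #   sdX1 (small)  -> RAID1 member for /boot
  #   sdX2 (medium) -> RAID6 member for the OS
  #   sdX3 (rest)   -> RAID6 member for data
  # Metadata 1.0 keeps the superblock at the end of the device, so
  # bootloaders that read /boot directly still see a plain filesystem:
  mdadm --create /dev/md0 --level=1 --raid-devices=4 --metadata=1.0 /dev/sd[abcd]1
  mdadm --create /dev/md1 --level=6 --raid-devices=4 /dev/sd[abcd]2
  mdadm --create /dev/md2 --level=6 --raid-devices=4 /dev/sd[abcd]3

  # LVM on top of the data array so it can be subdivided later:
  pvcreate /dev/md2
  vgcreate vg_data /dev/md2
  lvcreate -L 500G -n recordings vg_data

Keeping /boot on RAID1 also means each member partition is readable
on its own if the array ever fails to assemble.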

John M. Drescher
