[mythtv-users] RAID: full disk or partition (was: Question re: available SATA ports and linux software RAID)

Jean-Yves Avenard jyavenard at gmail.com
Sat Apr 23 23:31:18 UTC 2011

On 13 April 2011 01:13, Robin Hill <myth at robinhill.me.uk> wrote:
> No, the stripe size must be a multiple of the FS block size - this is 1,
> 2 or 4k for ext2/3/4 (depending on FS size) and defaults to 4k for XFS
> (generally this maxes at the memory pagesize, which is 4k for
> x86/x86-64). So making the array chunk size a multiple of 4k (default is
> 512k for current mdadm - older versions used 64k I think) will mean the
> stripe width is irrelevant (so powers of 2 don't come into it at all).
> It's then up to the md driver to try to write complete stripes
> (provided enough sequential data is available).
> In other words, this should be working optimally out of the box, and
> there's little you can do to help/hinder it (increasing the
> stripe_cache_size for RAID5/6 can help, but at the cost of increased
> memory usage).
> The other issue (that Ian seems to have had previously) is that
> partitioned arrays need to be arranged so that partitions fall on stripe
> boundaries. I wouldn't use a partitioned array anyway - partitioning
> first, then making multiple arrays seems more sensible to me.
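The stripe arithmetic above can be sketched quickly. A minimal example, assuming a hypothetical 4-disk RAID5 (3 data disks + 1 parity) with mdadm's default 512 KiB chunk:

```shell
# Hypothetical 4-disk RAID5: 3 data disks, 512 KiB chunk (mdadm default).
CHUNK_KIB=512
DATA_DISKS=3

# Stripe width = chunk size * number of data disks.
STRIPE_KIB=$((CHUNK_KIB * DATA_DISKS))
echo "stripe width: ${STRIPE_KIB} KiB"

# If you do partition the array, each partition should start on a stripe
# boundary; with 512-byte sectors that is STRIPE_KIB * 2 sectors.
echo "align partitions to $((STRIPE_KIB * 2))-sector boundaries"
```

Since the chunk (512 KiB) is a multiple of the 4k FS block size, the filesystem blocks line up with the chunks automatically, which is why powers of 2 in the disk count don't matter.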


When adding a disk to a RAID array, would you use the full disk (e.g.
/dev/sda) or a partition (e.g. /dev/sda1)?
Every example of mdadm usage I've seen uses partitions rather than
a full disk.
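For reference, mdadm accepts both forms; a minimal sketch (the device names /dev/sdb and /dev/sdc are placeholders, and both commands need root):

```shell
# Whole-disk members -- no partition table on the member disks:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

# Partition members -- partition each disk first (type "Linux RAID"),
# then build the array from the partitions:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
```

One common reason the examples use partitions: a partition can be made slightly smaller than the disk, so a replacement drive of nominally equal capacity that is a few sectors short can still join the array, and the RAID partition type stops other tools from treating the disk as blank.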
