<br><br><div class="gmail_quote">On 16 March 2010 19:11, John Drescher <span dir="ltr"><<a href="mailto:drescherjm@gmail.com">drescherjm@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin: 0pt 0pt 0pt 0.8ex; border-left: 1px solid rgb(204, 204, 204); padding-left: 1ex;">
<div class="im">> I believe you can do a RAID5 configuration with 3 drives. However,<br>
> generating the parity slows down writes.<br>
><br>
<br>
</div>That depends on tuning and on how often the writes occur. Forced<br>
flushes across multiple streams hurt performance badly.<br>
Enlarging the stripe cache can reduce the problem, but that risks<br>
corruption in the unlikely event that the system loses power or<br>
crashes.<br>
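For reference, on Linux md the stripe cache mentioned here is a per-array sysfs tunable (the array name /dev/md0 below is just an example; substitute your own):

```shell
# Show the current stripe cache size, in pages per device,
# for a RAID5/6 md array (default is 256).
cat /sys/block/md0/md/stripe_cache_size

# Enlarge it so more writes can be gathered into full stripes.
# Memory cost is roughly pages * page_size * number_of_disks.
echo 4096 > /sys/block/md0/md/stripe_cache_size
```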
<font color="#888888"><br></font></blockquote><div><br>Just out of interest (and slightly OT), how does Linux cope speed-wise when a RAID5 array isn't n+1 drives (where n is a power of 2)? I'm using RAID5 on a NetBSD box, and writing to a 4-drive RAID5 array is terrible. I think that's because of the mismatch between the FS block size (a power of 2) and the data per stripe (which isn't a power of 2), so nearly every write needs extra reads and writes to recalculate parity.<br>
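The alignment argument can be sketched with a little arithmetic (the 64 KiB chunk size and the write sizes below are assumptions for illustration, not figures from this thread):

```python
# A write that covers whole stripes lets parity be computed from the new
# data alone; anything else forces a read-modify-write of the stripe.
def needs_rmw(write_bytes, chunk_kib, data_disks):
    stripe_bytes = chunk_kib * 1024 * data_disks  # data per full stripe
    return write_bytes % stripe_bytes != 0

CHUNK_KIB = 64  # assumed per-disk chunk size

# 4-drive RAID5 -> 3 data disks -> 192 KiB of data per stripe.
# Power-of-2 FS writes (e.g. 128 KiB) can never fill a whole stripe.
print(needs_rmw(128 * 1024, CHUNK_KIB, 3))  # True: read-modify-write
print(needs_rmw(192 * 1024, CHUNK_KIB, 3))  # False: full-stripe write

# 5-drive RAID5 -> 4 data disks -> 256 KiB stripes, which power-of-2
# writes of 256 KiB fill exactly.
print(needs_rmw(256 * 1024, CHUNK_KIB, 4))  # False
```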
<br>Is this an issue on linux?<br><br>Cheers,<br><br>Ian<br><br></div></div>