<br><div class="gmail_quote">2008/6/5 Gerald Brandt <<a href="mailto:gbr@majentis.com">gbr@majentis.com</a>>:<br><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
<div>I've been running a 5x250 software raid5 array for a few years. It's easy to set up and maintain and uses very little CPU. If I had the cash, I'd probably convert to hardware raid5, and buy 2 controllers (1 spare).<br>
<br>Sadly, I don't have the cash, so my biggest decision now is: when I build a new system, do I go software raid 5 or software raid 6?<br></div></blockquote><div><br>If cash is an issue, then you need to look at how likely you are to have more than one drive fail at any given time. How many failures have you had in the last few years? <br>
<br>How many drives are you planning on RAIDing? The more drives, the more likely one is to fail. How important is the data? How regularly do you back up? (E.g., I have a RAID 5 array that I can't afford to back up, so I'm more reliant on the redundancy, as I really don't want to lose data.)<br>
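<br>On Linux the RAID 5 / RAID 6 decision boils down to a single mdadm flag, so it's cheap to change your mind at build time. A sketch only &mdash; the device names (/dev/sd[b-f]) and array name are placeholders for your own discs:<br><br>

```shell
# Sketch only: device names are placeholders, run against your real discs.
# RAID 5: one disc's worth of parity, survives a single drive failure.
mdadm --create /dev/md0 --level=5 --raid-devices=5 /dev/sd[b-f]

# RAID 6: two discs' worth of parity, survives two simultaneous failures,
# at the cost of one more disc of capacity and extra parity computation.
mdadm --create /dev/md0 --level=6 --raid-devices=5 /dev/sd[b-f]
```

<br>With 5 discs, RAID 6 leaves you 3 discs of usable space instead of 4, which is the trade-off to weigh against your failure history.<br>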
<br>If you're building a box just for storage, Sun's ZFS looks really good, and it solves in software some of the problems you would otherwise need a hardware RAID card for (or else suffer much reduced performance). It's also easy to admin and fairly scalable. <br>
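<br>To give a feel for the "easy to admin" claim, here's a hedged sketch &mdash; the pool name "tank" and the Solaris-style device names are placeholder assumptions, not from a real box:<br><br>

```shell
# Sketch only: pool and device names are placeholders.
# raidz is roughly RAID 5 (single parity); raidz2 is roughly RAID 6.
zpool create tank raidz c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0

# Filesystems within the pool are one-command, no-partitioning affairs.
zfs create tank/storage

# Health, errors, and resilver progress in a single view.
zpool status tank
```

<br>There's no separate mkfs, fstab, or volume-manager step, which is a large part of the appeal.<br>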
<br>However, this does mean a move to Solaris (or Nexenta, which is GNU-based), and learning a new (if similar) system isn't everyone's idea of fun. <br><br>Aside: as there seem to be a lot of Linux RAID experts around here, how critical is the number of discs in a Linux RAID 5 array? I've got a 4-disc RAID 5 array on NetBSD, but I suffer from performance issues (as the number of discs is not a power of two + 1), and I'm wondering if Linux is more flexible in this regard. (NetBSD's FFS needs a block size that's a power of two, so tuning stripe size and block size to ensure a full-stripe write whenever possible is not possible with 4 discs.)<br>
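<br>The arithmetic behind that aside can be sketched in a few lines &mdash; the 64 KiB chunk size is an assumed example value, not taken from the array above:<br><br>

```shell
# Sketch: why a 4-disc RAID 5 can't line up with a power-of-two block size.
chunk=64                    # assumed per-disc chunk size, in KiB
discs=4                     # total discs in the array
data=$((discs - 1))         # RAID 5 spends one disc's worth on parity
stripe=$((chunk * data))    # a full stripe spans all the data discs
echo "full stripe = ${stripe} KiB"
# With 3 data discs the full stripe is 192 KiB: not a power of two, so no
# power-of-two filesystem block size can equal one full stripe, and writes
# smaller than a stripe pay a read-modify-write penalty for the parity.
```

<br>With 5 discs (4 data + 1 parity) the full stripe is 256 KiB, which a power-of-two block size can match exactly &mdash; hence the "power of two + 1" rule of thumb.<br>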
<br>Ian<br></div><br></div><br>