[mythtv-users] NAS mobo
Nick Bright
boberz at thewatch.org
Thu Aug 21 14:27:34 UTC 2008
Yan Seiner wrote:
> Brian Wood wrote:
>
>> John Drescher wrote:
>>
>>
>>>> Hadn't thought of PCIe; guess I'm living in the past with the 133MHz,
>>>> 64-bit bus architecture.
>>>>
>>>>
>>>>
>>> PCI-X will work as well, but it will require a more expensive server
>>> board. I guess if you are going to spend $2000 US or more on 20 hard
>>> drives you can afford a $500 US mobo, though I would say these boards
>>> make no effort to conserve power.
>>>
>>>
>> With terabyte drives available, I'm not sure a 20-drive array is
>> reasonable for a MythTV system, but I guess some people watch more TV
>> than I do :-)
>>
>> I'm not sure if the OP was interested in the speed or the capacity of
>> such an array, but the speed is limited by the network and the capacity
>> might be overkill.
>>
>> Sounds like one of those "because I can" projects, but they can be the
>> most fun :-)
>>
>> I once heard such things described as a triumph of engineering over
>> common sense.
>>
>>
> LOL!
>
> Well, sort of... It's because I may need to.
>
> I not only store TV shows, I do commercial backups for companies. Sort
> of like rsync.net, only better. ;-)
>
> So my once-ample 1.5 TB array has been filled to capacity and then some.
>
> The idea is to get a box that I can stick in my rack, and then add 1TB
> drives as needed and grow the array. The box would only be a NAS; no
> processing at all. Since the streams are limited to MythTV + internet, a
> gigabit connection should be ample for the foreseeable future.
>
> If I can figure out a path to a 20TB array that I can implement today
> with, say, three 1TB drives, and add drives as needed, I may be ahead of the
> game for a few years.
>
> What I don't want to do is to implement a solution today and then have
> to rip it out a year later. If nothing else, do you have any idea how
> long rsync takes to dump and verify a TB over an ethernet connection? 2+
> days, and it's not quite done..... I hate to think what I'd have to do
> with a 5 or 10 TB array.....
>
PCI-X is still very valid in the server space, and lots of good cards
are available. PCI-E is catching up with it in terms of penetration
though. In a 20-disk configuration like this, PCI-X will be just as good
as PCI-E.
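To put rough numbers on that comparison (a back-of-the-envelope sketch
using theoretical peak rates; sustained throughput is lower, and PCI-X
bandwidth is shared by every card on the bus, while PCIe lanes are
dedicated per slot):

```python
# Rough bus-bandwidth comparison (theoretical peaks, not sustained rates).

# PCI-X: 64-bit bus at 133 MHz -> bytes per second, shared across the bus
pcix_bw = 64 / 8 * 133e6          # ~1.06 GB/s

# PCIe 1.x: ~250 MB/s usable per lane per direction, dedicated per slot
pcie_x4_bw = 4 * 250e6            # ~1.0 GB/s for a x4 slot
pcie_x8_bw = 8 * 250e6            # ~2.0 GB/s for a x8 slot

print(f"PCI-X 133/64: {pcix_bw / 1e9:.2f} GB/s (shared)")
print(f"PCIe x4:      {pcie_x4_bw / 1e9:.2f} GB/s (dedicated)")
print(f"PCIe x8:      {pcie_x8_bw / 1e9:.2f} GB/s (dedicated)")
```

So a single PCI-X slot and a PCIe x4 slot are in the same ballpark; the
difference only starts to matter if several controllers share one PCI-X bus.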
Personally, I like LSI MegaRAID cards, which support multi-adapter
ganging. For example, if you need to RAID up 20 drives, you could use
three 8-drive controllers (24 ports), or two 8-drive controllers and a
4-drive controller (20 ports). I have used a number of LSI cards and
have always been satisfied with their reliability, performance, and
features. LSI MegaRAID
cards are also very well supported in Linux, with the driver included in
basically every install disk out there.
Areca cards are also very solid performers, considered by many to be the
best of the best. These also support multi-adapter ganging. I have not
personally used one.
Another option is to use SATA-II port multipliers, which let you
attach four drives to each SATA port. I don't know how much port
multipliers cost, or where to get them, but most high-end cards will
support them.
A quick Google search turned up this link:
http://www.supereasybuy.com/ssproduct.asp?pf_id=1011036473 but I don't
know if it's a good price, and I don't know anything about that website.
Just an example of the type of product I'm talking about.
As for what to stay away from: I have consistently heard negative
things about 3ware cards, and the ones I have personally used have
always been very disappointing in terms of performance.
Considering that you are doing this for business, I suggest staying away
from the cheap cards like Highpoint and 3ware. Definitely go with LSI or
Areca. Remember to get BBU modules so you can use write-back caching!
You'll get better performance with one drive per port, but you can save
money and still get decent performance using port multipliers. Remember
that even though the interface is 3 Gbps, the drives can't put that out
except in short buffer-to-host bursts - but you will want to disable the
disk buffers anyway (for safety in case of a power outage, the same
reason you need a BBU on the RAID card).
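Back-of-the-envelope arithmetic on that point (the ~75 MB/s sustained
rate is my assumption for a 2008-era 1TB drive, not a measured figure):

```python
# Usable SATA-II link bandwidth vs. the sustained output of four drives
# sharing one port behind a port multiplier.

link_gbps = 3.0
# SATA uses 8b/10b encoding, so payload is 8/10 of the 3 Gbps line rate
link_mb_s = link_gbps * 1e9 * (8 / 10) / 8 / 1e6   # ~300 MB/s usable

drives = 4
per_drive_mb_s = 75                  # assumed sustained sequential rate
aggregate = drives * per_drive_mb_s  # 300 MB/s - right at the link limit

print(f"usable link: {link_mb_s:.0f} MB/s, "
      f"{drives}-drive aggregate: {aggregate} MB/s")
```

In other words, four drives only hit the shared link's limit when all
of them are streaming sequentially at once; for mixed workloads the
multiplier costs you very little.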
A very nice feature of the LSI and Areca cards is that they support
on-line hot-adding of drives to the array - which is exactly what you
want to do. You insert a drive, fire up the on-line manager program, add
the drive to the array, let it rebuild onto the new disk, and then
expand your filesystem - all without having to go offline.
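The growth path you described can be sketched numerically. Assuming a
single RAID-5 array of 1TB disks (one disk's worth of capacity goes to
parity; real arrays lose a bit more to metadata and TB/TiB rounding):

```python
# Usable capacity of a RAID-5 array as 1TB drives are hot-added over time.

def raid5_usable_tb(n_drives, drive_tb=1):
    """Usable space: total capacity minus one drive's worth of parity."""
    return (n_drives - 1) * drive_tb

for n in (3, 8, 14, 20):
    print(f"{n:2d} drives -> {raid5_usable_tb(n)} TB usable")
```

So starting at three drives gives you 2 TB usable, and the same array
grown to 20 drives lands at 19 TB - close enough to the 20 TB target.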
- Nick
> --Yan
>
>