On Mon, Nov 24, 2008 at 3:24 AM, Jake Anderson <span dir="ltr"><<a href="mailto:yahoo@vapourforge.com">yahoo@vapourforge.com</a>></span> wrote:<br><div class="gmail_quote"><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
<div bgcolor="#ffffff" text="#000000">
vamythguy wrote:
<blockquote type="cite"><div><div></div><div class="Wj3C7c">On Sun, Nov 23, 2008 at 9:29 PM, Brad DerManouelian <span dir="ltr"><<a href="mailto:myth@dermanouelian.com" target="_blank">myth@dermanouelian.com</a>></span>
wrote:<br>
<div class="gmail_quote">
<blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
<div>On Nov 23, 2008, at 9:14 PM, vamythguy wrote:<br>
<br>
> Ok. Let me start with - I don't get LVM + RAID. The idea of being<br>
> able to throw differently sized disks in one side and having a<br>
> failure-resistant, dynamically extendable disk solution come out the<br>
> other is great, but I don't get LVM + RAID. Specifically, how it<br>
> works. Why both?<br>
<br>
</div>
Read more for your answer.<br>
<div><br>
> Also, I love the idea of not being constrained by the number of<br>
> slots in a box, so the extent to which that can be abstracted across a<br>
> protocol like iSCSI or AoE or eSATA would be great - especially<br>
> since performance is only marginally important to me. So, how<br>
> would/does something like this work/get built?<br>
<br>
</div>
There's your answer to the question above and the explanation above<br>
answers the question here. That was easy.<br>
<div>
<div><br>
_______________________________________________<br>
mythtv-users mailing list<br>
<a href="mailto:mythtv-users@mythtv.org" target="_blank">mythtv-users@mythtv.org</a><br>
<a href="http://mythtv.org/cgi-bin/mailman/listinfo/mythtv-users" target="_blank">http://mythtv.org/cgi-bin/mailman/listinfo/mythtv-users</a><br>
</div>
</div>
</blockquote>
</div>
<br>
Great! All the pieces are in the box - so, how do you put them
together? I get how LVM lets me keep throwing disks at it, but how
do I get fault tolerance? RAID, right? But I don't think RAID likes
new disks, does it? So, maybe start with how to get that to work.<br>
</div></div><pre><hr size="4" width="90%">
</pre>
</blockquote>
Don't RAID your MythTV stuff.<br>
Storage groups work much better in terms of disk I/O and seeking when
under load, so your disks will probably last longer.<br>
HDD failures are rare enough these days that it isn't worth it; worst
case you lose some TV, and you should have better things to do anyway ;-P<br>
<br>
If you have some other stuff you want to do on the machine, then split
some disks into partitions: use some partitions for software RAID
(mdadm) and some for storage groups. That is what I have now (3x 320GB
drives with a 60GB RAID partition on each and the rest as XFS partitions
for storage groups; boot is off a 20GB partition on a 420GB disk, with the
rest set as a storage group). If you start with 3 disks in your RAID you get
RAID 5; you can add disks to RAID 5 arrays with mdadm now, then grow the
filesystem to use the added space. If you pick a filesystem that
supports online growth you can do the whole thing with zero downtime,
though there's a pretty heavy performance hit while it reshapes.<br>
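The grow sequence described above would look roughly like this - a sketch only, with placeholder device names (/dev/md0 for the array, /dev/sde1 for the new disk's partition, /videos for the mount point):<br>

```shell
# Add the new disk as a member of the existing 3-disk RAID 5 array
mdadm --add /dev/md0 /dev/sde1

# Reshape the array to use 4 active devices
# (long-running, but the array stays online throughout)
mdadm --grow /dev/md0 --raid-devices=4

# Watch the reshape progress
cat /proc/mdstat

# Once the reshape finishes, grow the XFS filesystem online
# (note: xfs_growfs takes the mount point, not the device)
xfs_growfs /videos
```

With XFS the filesystem growth happens while mounted, which is what makes the zero-downtime claim work; the performance hit is during the mdadm reshape, not the growfs step.<br>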
<br>
</div>
<br></blockquote></div>So I guess part of my issue with the RAID stuff is that I've got 3x250 and 1x300 in a RAID 5 array, so I'm missing 50GB. This is because I had a 250 go bad and replaced it with a 300. Ideally, that extra 50GB would just have been brought in and made usable - even if not as RAID 5. Maybe I'm looking for too much. I've had problems before with making a decision one way and then not being able to change it (because of the size of the filesystem), so I'm trying to be smarter about it.<br>
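For what it's worth, this is the situation where the LVM-over-RAID combination from earlier in the thread earns its keep: partition the 300 into a 250 RAID member plus the ~50GB leftover, then pool the array and the leftover into one volume group. A sketch, with hypothetical device names (/dev/sdd being the 300GB disk):<br>

```shell
# Build the RAID 5 from four equal 250GB partitions
# (the 300GB disk contributes sdd1; sdd2 is the ~50GB leftover)
mdadm --create /dev/md0 --level=5 --raid-devices=4 \
    /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

# Make both the array and the leftover partition LVM physical volumes
pvcreate /dev/md0 /dev/sdd2

# Pool them into one volume group and carve a logical volume out of it
vgcreate vg_myth /dev/md0 /dev/sdd2
lvcreate -l 100%FREE -n recordings vg_myth
mkfs.xfs /dev/vg_myth/recordings
```

The catch is that the 50GB slice on /dev/sdd2 has no redundancy - if that drive dies, extents living on it are gone - which is exactly the "usable but not RAID 5" trade-off described above.<br>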
<br>All of which skips my other issue: reclaiming that orphaned array and figuring out how to get back at it (given I've had issues in the past with moving the disks of an array to a new mobo, a new instance of mdadm, whatever, and having it not be recognized).<br>
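The usual recovery path for an array moved to a new machine is to let mdadm read the superblocks off the member disks rather than relying on any old config - roughly (member names hypothetical):<br>

```shell
# Scan all disks for md superblocks and report which arrays they belong to
mdadm --examine --scan

# Inspect one member's superblock in detail (array UUID, role, state)
mdadm --examine /dev/sdb1

# Try to assemble everything mdadm can find, even on a fresh install
mdadm --assemble --scan

# If auto-assembly balks, name the members explicitly
mdadm --assemble /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd1

# Persist the result so the array comes back on boot
# (config path is /etc/mdadm.conf or /etc/mdadm/mdadm.conf, distro-dependent)
mdadm --detail --scan >> /etc/mdadm.conf
```

Since the array's identity lives in the superblocks on the disks themselves, a new motherboard or OS install shouldn't lose it - the common failure is just a stale or missing mdadm.conf on the new system.<br>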
<br>Finally, is a multi-port eSATA solution a good way to externally house these drives? Is AoE a real option?<br><br>I know, I'm all over the place here...<br>
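On the AoE question: yes, it's a real (if niche) option, and the moving parts are small - vblade on the box holding the disks and the aoetools/aoe driver on the frontend. A sketch, with placeholder device and interface names:<br>

```shell
# --- On the storage box: export a block device over raw Ethernet
# (shelf 0, slot 0, via eth0; /dev/sdb is whatever disk you're exporting)
vbladed 0 0 eth0 /dev/sdb

# --- On the MythTV box: load the AoE driver and find the export
modprobe aoe
aoe-discover
aoe-stat            # lists e.g. e0.0 with its size

# The export appears as a local block device, usable like any other
mkfs.xfs /dev/etherd/e0.0
mount /dev/etherd/e0.0 /var/lib/mythtv/aoe
```

Since AoE runs on raw Ethernet frames (no IP), it only works on the local segment, which fits the "performance is only marginally important, same room" use case fine.<br>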