[mythtv-users] Slightly OT - How many People have Video libraries over 8TB?
anothersname at googlemail.com
Sat Jul 7 17:05:38 UTC 2012
>> I'm using 2TB Seagate ES drives and just about to move to 3TB drives
>> (which is why the jump to 20TB).
> Is this as simple as replacing one disk at a time and then doing a
> mdadm --grow (+ filesystem grow) at the end?
> How long will the whole process take?
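For reference, the replace-and-grow procedure the question describes usually looks something like the sketch below. Device names (`/dev/md0`, `/dev/sdb1`) are placeholders, and the filesystem-grow command depends on the filesystem in use:

```shell
# Repeat for each member disk, waiting for the rebuild to finish each time:
mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
# ...physically swap in the larger disk and partition it...
mdadm /dev/md0 --add /dev/sdb1
cat /proc/mdstat              # watch the resync progress

# Once every member has been replaced with a larger disk,
# grow the array to use the new capacity, then the filesystem:
mdadm --grow /dev/md0 --size=max
resize2fs /dev/md0            # ext3/ext4; use xfs_growfs for XFS
```

Each per-disk rebuild takes hours on multi-TB drives, so the whole process is typically measured in days.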
>> If you can't trust that a kernel/mdadm bug isn't going to kill your
>> library you shouldn't even think about doing this.
> Shit happens, as they say. I tried converting a RAID1 into a RAID5 the
> other day and for some reason the stripe size ended up at a measly
> value. It would have taken a whole extra restripe operation to fix, so
> I was happier to rebuild afresh and restore from backup (which I did a
> test read on first).
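The level change described above can be done in place with mdadm; a sketch, with placeholder device names:

```shell
# Convert a 2-disk RAID1 into a 3-disk RAID5 by adding a third disk:
mdadm --grow /dev/md0 --level=5 --raid-devices=3 --add /dev/sdc1

# The chunk size is carried over or defaulted during the conversion;
# changing it afterwards is itself a full restripe of the array:
mdadm --grow /dev/md0 --chunk=512    # chunk in KiB
```

That second command is the "whole extra restripe operation" the poster mentions: every stripe has to be rewritten, so it takes about as long as the original reshape.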
> I've also had some odd behaviour from one of my non-enterprise Seagate
> drives. Its firmware locked up the other day, and the drive didn't
> come back without a power cycle.
> I don't want to find out the hard way what happens if it does this
> again during a restripe. Perhaps it'll be recoverable, but perhaps
> not.
> Are you getting the Seagate ES drives at a good price second hand or
> something? New, they're at least twice as expensive as consumer
> drives. They almost certainly behave better in RAID arrays than the
> cheaper drives, but with the money saved buying cheaper disks, you
> could afford a complete extra set for that backup server you're
> currently living without.
> Kind Regards,
I pick up Seagate ES drives opportunistically when I see them on eBay
or similar. I also do work for some corporate clients who occasionally
buy servers with the drives already fitted but not needed, as they're
using external arrays, so I cut a deal with them. Selling the
second-hand 2TB ES drives will pay for about 80% of what the 3TB drives
have cost me (and I did the same going all the way back to 750GB about
5 years ago), so I guess the drives have cost me about 30-40% of the
market price, allowing for the upgrade savings each time.
I've only had one Seagate drive fail on me over the last 5-6 years, and
Seagate happily did a 'cross in transit' swap, so I was only without a
drive for 3 days.
As I'm running RAID6 with a hot spare I'm willing to take the technical
risk of a cataclysmic failure; if I lost everything I'd just recreate
it from the original sources, so the extra drive spend for a backup
doesn't make sense for me. Also, if the array went degraded (i.e.
effectively RAID5 with a dead drive) I'd just take it offline until I
had a data move plan and replacement drives readily available.
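Spotting the degraded state early is the important part; a minimal sketch, assuming an md array at the placeholder `/dev/md0` and a hypothetical alert address:

```shell
# Check the array state and any failed/spare device counts:
mdadm --detail /dev/md0 | grep -E 'State|Failed Devices|Spare Devices'

# mdadm can also run as a daemon and mail on failure events
# (address is a placeholder):
mdadm --monitor --scan --mail=admin@example.com --daemonise
```

With monitoring in place, a RAID6 losing one disk still has a parity disk in hand plus the hot spare rebuilding, which is what makes the "take it offline and plan" approach viable.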
I run monitoring software on the drives anyway to 'spot ahead' any
likely failures, and the drives sit in a cooled Supermicro array
chassis, so their temperature never gets above 40-ish degrees
(admittedly it got to nearly 50 when we had those REALLY hot days a few
summers back, but that was with the 1.5TB drives at the time). I'm sure
you know it's heat that kills components, so temperature management is
key.
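One common way to do this kind of 'spot ahead' monitoring is smartmontools; a sketch, with the drive path as a placeholder:

```shell
# Overall SMART health verdict for a drive:
smartctl -H /dev/sda

# Attributes worth watching: temperature and reallocated/pending sectors
# are the usual early-warning signs of a failing drive:
smartctl -A /dev/sda | grep -i -E 'Temperature|Reallocated|Pending'

# smartd can run continuously; an example /etc/smartd.conf line that
# monitors all attributes and warns when the temperature changes by 4C
# or crosses 45C/50C (thresholds are illustrative):
#   /dev/sda -a -W 4,45,50
```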
I used to use Areca RAID controllers (I had a 24-port one, can't
remember the model number), which I sold for about £400, and picked up
some IBM M1015 8-port cards (before people knew about the cheap
flash-to-JBOD fix) for £40-ish each, and I've never had a performance
problem.
Hope someone finds this helpful.