[mythtv-users] The Bigger... Disk contest, Fall 2007 edition

f-myth-users at media.mit.edu
Thu Oct 18 21:00:10 UTC 2007


    > Date: Thu, 18 Oct 2007 13:35:31 -0600
    > From: Brian Wood <beww at beww.org>

    > I can only speak from my experience over many years. Whenever I have a
    > drive fail I put the dead one on my window sill. I have two piles, ATA
    > and SCSI. The ATA pile is about a foot high, the SCSI pile has one drive.

[You never return them under warranty?]

Three massive problems with this approach, all related to sample bias
(otherwise known as "the plural of anecdote is NOT data"):

(a) How many of each drive are in service?  (That's "drives whose
    failures would cause them to wind up on your windowsill".)  After
    all, if you've got 50 ATA drives and 3 SCSI drives, well...
(b) Assuming that the answer to (a) is "half of each", then when's
    the last time you added to the ATA pile?  [If it's "a long time
    ago", then maybe ATA reliability has increased (alternatively,
    perhaps SCSI reliability is decreasing... :)]
(c) Are both types of drives subject to -exactly- the same sorts
    of service?  Or are the SCSI drives in temperature-controlled
    machine-room racks, never powered off, whereas the desktop
    drives get powered off every day and are in desktop machines that
    get moved around, dropped, kicked, or knocked over, and in which
    inquisitive little non-properly-ESD-protected hands occasionally
    reach in to reconfigure things?  (Remember that a -lot- of what
    kills drives is powercycling and thermal cycling; remember also
    that desktops occasionally get tossed in a trunk, driven
    somewhere, and then plugged right back in even though they were
    brought to freezing and then not allowed to have all the
    condensation evaporate off and the disk platters rewarm back to
    nominal dimensions.  Sure, that's bad, but I've seen it.  Also, the
    senior technical engineer of a company I know that makes massive
    disk-based stores for, e.g., banks told me that they routinely
    lose at least one disk in each huge array every time the array
    must be powercycled---despite using the most reliable disks they
    can get their hands on.)
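Point (a) is really just base-rate normalization: a raw pile of dead drives
tells you nothing until you divide by how many of each type were in service.
A minimal sketch of that arithmetic (all counts invented for illustration):

```python
# Normalize raw failure counts by fleet size: a taller "dead pile"
# doesn't mean a less reliable drive type if far more of that type
# are deployed.  Every number below is made up for illustration.

def failure_rate(failures, drives_in_service):
    """Fraction of the deployed fleet that has failed."""
    return failures / drives_in_service

ata = failure_rate(failures=12, drives_in_service=50)   # foot-high pile, big fleet
scsi = failure_rate(failures=1, drives_in_service=3)    # one-drive pile, tiny fleet

print(f"ATA:  {ata:.1%} of fleet failed")
print(f"SCSI: {scsi:.1%} of fleet failed")
```

With these made-up numbers the SCSI fleet actually fares *worse* per drive,
despite its much shorter windowsill pile.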

And yes, consult the CMU study; that's the one I was talking about
when this came up here a few weeks/months ago.

P.S.  Given a 5 to 1 price differential (as someone else here quoted),
just HOW many redundant ATA drives can you put in service for each
SCSI drive?  And that's before we even talk about how much more robust
such a solution can be, given some geographic separation (be it a room
or a city), against fire, flood, theft, accidentally being kicked over,
power supplies run amok, and the occasional "rm -rf /" or mistake
with dd or cfdisk...
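The P.S. arithmetic can be made concrete: if failures are independent (a
generous assumption, and precisely why the geographic separation above
matters), N mirrored copies lose data only when all N fail together. A
sketch with purely hypothetical annual failure probabilities:

```python
# At a 5:1 price ratio, one SCSI drive buys five redundant ATA drives.
# Assuming independent failures -- optimistic for drives sharing a box,
# which is the argument for geographic separation -- data is lost only
# if every mirror fails.  Both probabilities below are invented.

def p_all_fail(p_single, n_copies):
    """Probability that all n independent copies fail in the same period."""
    return p_single ** n_copies

p_ata, p_scsi = 0.10, 0.03   # hypothetical annual failure probabilities

print(f"1 SCSI drive:   {p_scsi:.4%} chance of data loss per year")
print(f"5 mirrored ATA: {p_all_fail(p_ata, 5):.4%} chance of data loss per year")
```

Even with the cheap drives assumed three times as failure-prone, five
mirrors drive the loss probability orders of magnitude below the single
expensive drive.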
