[mythtv-users] mdadm help, please
John Drescher
drescherjm at gmail.com
Fri Apr 24 03:59:23 UTC 2009
On Thu, Apr 23, 2009 at 11:46 PM, Joel Means <means.joel at gmail.com> wrote:
> I seem to be having major issues with my primary storage for all of my
> recordings and rips. I am running Debian Lenny with a 2.6.29 kernel,
> compiled myself. I have a seven disk RAID5. My problem is that when I
> assemble my array, the array size shown is too small. Here is what I get
> from mdadm -D /dev/md0:
>
> /dev/md0:
> Version : 00.90
> Creation Time : Thu Apr 23 21:25:14 2009
> Raid Level : raid5
> Array Size : 782819968 (746.56 GiB 801.61 GB)
> Used Dev Size : 488383936 (465.76 GiB 500.11 GB)
> Raid Devices : 7
> Total Devices : 7
> Preferred Minor : 0
> Persistence : Superblock is persistent
>
> Update Time : Thu Apr 23 21:25:14 2009
> State : clean, degraded
> Active Devices : 6
> Working Devices : 7
> Failed Devices : 0
> Spare Devices : 1
>
> Layout : left-symmetric
> Chunk Size : 64K
>
> UUID : 16b9b60d:ab7eb6b3:4bb6d167:00581514 (local to host
> meansnet.homelinux.org)
> Events : 0.1
>
> Number Major Minor RaidDevice State
> 0 8 33 0 active sync /dev/sdc1
> 1 8 97 1 active sync /dev/sdg1
> 2 8 65 2 active sync /dev/sde1
> 3 8 49 3 active sync /dev/sdd1
> 4 8 1 4 active sync /dev/sda1
> 5 8 81 5 active sync /dev/sdf1
> 6 0 0 6 removed
>
> 7 8 17 - spare /dev/sdb1
>
>
> Using mdadm -E on each of the drives gives the correct info:
>
> /dev/sda1:
> Magic : a92b4efc
> Version : 00.90.00
> UUID : 16b9b60d:ab7eb6b3:4bb6d167:00581514 (local to host
> meansnet.homelinux.org)
> Creation Time : Thu Apr 23 21:25:14 2009
> Raid Level : raid5
> Used Dev Size : 488383936 (465.76 GiB 500.11 GB)
> Array Size : 2930303616 (2794.56 GiB 3000.63 GB)
> Raid Devices : 7
> Total Devices : 8
> Preferred Minor : 0
>
> Update Time : Thu Apr 23 21:25:14 2009
> State : clean
> Active Devices : 6
> Working Devices : 7
> Failed Devices : 1
> Spare Devices : 1
> Checksum : 68722cb1 - correct
> Events : 1
>
> Layout : left-symmetric
> Chunk Size : 64K
>
> Number Major Minor RaidDevice State
> this 4 8 1 4 active sync /dev/sda1
>
> 0 0 8 33 0 active sync /dev/sdc1
> 1 1 8 97 1 active sync /dev/sdg1
> 2 2 8 65 2 active sync /dev/sde1
> 3 3 8 49 3 active sync /dev/sdd1
> 4 4 8 1 4 active sync /dev/sda1
> 5 5 8 81 5 active sync /dev/sdf1
> 6 6 0 0 6 faulty
> 7 7 8 17 7 spare /dev/sdb1
>
>
> Note the difference in Array Size. I can remove and re-add /dev/sdb1, and
> the array will start rebuilding, but the info from 'mdadm -E' doesn't
> change. Running 'fsck.jfs' on /dev/md0 gives me an error about corrupt
> superblocks, so I don't know if my data is hosed or not. This was working
> fine for over a year. It could be the upgrade to the new kernel that did
> this, but trying to revert to the older kernel gave me several other
> issues. Does anyone have any thoughts on what might be done to fix this?
> Or can you point me to where I might find more mdadm experts? Thanks.
>
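For reference, the Array Size that mdadm -E reports is exactly what a healthy seven-disk RAID5 should show, since RAID5 capacity is (n - 1) times the per-device size (one disk's worth goes to parity). A quick sanity check, using the numbers from the -E output above:

```shell
# RAID5 usable capacity = (raid_devices - 1) * per-device size.
# Values taken from the mdadm -E output (sizes are in 1 KiB blocks).
raid_devices=7
used_dev_size_kib=488383936
echo $(( (raid_devices - 1) * used_dev_size_kib ))   # prints 2930303616
```

That matches the 2930303616 in the superblocks, so it is the degraded -D figure that is wrong, not the on-disk metadata.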
Can you post the output of cat /proc/mdstat?
How did you assemble the array? With the force option? And no, your
array probably is not hosed, although I am worried about why the spare
is not being added automatically.
Is the output of
sfdisk -d /dev/sda
the same as for all the other disks?
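One way to run that comparison across every member disk (device names assumed from the mdadm -D output above; needs root) is to diff each partition-table dump against /dev/sda's. sfdisk -d embeds the device name in its output, so normalize it out first:

```shell
# Dump a disk's partition table with the device name masked,
# so that identical layouts diff cleanly.
norm() { sfdisk -d "$1" | sed 's|/dev/sd[a-z]|/dev/sdX|g'; }

# Any diff output below indicates a partition layout mismatch.
for d in /dev/sd[b-g]; do
    echo "== $d =="
    diff <(norm /dev/sda) <(norm "$d")
done
```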
Do you have mdadm 2.6.4 or greater installed?
John