I seem to be having major issues with my primary storage for all of my recordings and rips. I am running Debian Lenny with a 2.6.29 kernel, which I compiled myself. I have a seven-disk RAID5. My problem is that when I assemble the array, the array size shown is too small. Here is what I get from mdadm -D /dev/md0:
/dev/md0:
        Version : 00.90
  Creation Time : Thu Apr 23 21:25:14 2009
     Raid Level : raid5
     Array Size : 782819968 (746.56 GiB 801.61 GB)
  Used Dev Size : 488383936 (465.76 GiB 500.11 GB)
   Raid Devices : 7
  Total Devices : 7
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Thu Apr 23 21:25:14 2009
          State : clean, degraded
 Active Devices : 6
Working Devices : 7
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 64K

           UUID : 16b9b60d:ab7eb6b3:4bb6d167:00581514 (local to host meansnet.homelinux.org)
         Events : 0.1

    Number   Major   Minor   RaidDevice State
       0       8       33        0      active sync   /dev/sdc1
       1       8       97        1      active sync   /dev/sdg1
       2       8       65        2      active sync   /dev/sde1
       3       8       49        3      active sync   /dev/sdd1
       4       8        1        4      active sync   /dev/sda1
       5       8       81        5      active sync   /dev/sdf1
       6       0        0        6      removed

       7       8       17        -      spare   /dev/sdb1

Using mdadm -E on each of the drives gives the correct info:

/dev/sda1:
          Magic : a92b4efc
        Version : 00.90.00
           UUID : 16b9b60d:ab7eb6b3:4bb6d167:00581514 (local to host meansnet.homelinux.org)
  Creation Time : Thu Apr 23 21:25:14 2009
     Raid Level : raid5
  Used Dev Size : 488383936 (465.76 GiB 500.11 GB)
     Array Size : 2930303616 (2794.56 GiB 3000.63 GB)
   Raid Devices : 7
  Total Devices : 8
Preferred Minor : 0

    Update Time : Thu Apr 23 21:25:14 2009
          State : clean
 Active Devices : 6
Working Devices : 7
 Failed Devices : 1
  Spare Devices : 1
       Checksum : 68722cb1 - correct
         Events : 1

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     4       8        1        4      active sync   /dev/sda1

   0     0       8       33        0      active sync   /dev/sdc1
   1     1       8       97        1      active sync   /dev/sdg1
   2     2       8       65        2      active sync   /dev/sde1
   3     3       8       49        3      active sync   /dev/sdd1
   4     4       8        1        4      active sync   /dev/sda1
   5     5       8       81        5      active sync   /dev/sdf1
   6     6       0        0        6      faulty
   7     7       8       17        7      spare   /dev/sdb1

Note the difference in Array Size. I can remove and re-add /dev/sdb1, and the array will start rebuilding, but the info from 'mdadm -E' doesn't change. Running 'fsck.jfs' on /dev/md0 gives me an error about corrupt superblocks, so I don't know whether my data is hosed or not. This had been working fine for over a year. It could be the upgrade to the new kernel that did this, but trying to revert to the older kernel gave me several other issues. Does anyone have any thoughts on what might be done to fix this? Or can you point me to where I might find more mdadm experts? Thanks.
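In case it helps, the remove/re-add cycle I mentioned is roughly the standard mdadm hot-remove/hot-add sequence (run as root, against the same device names shown above):

```shell
# Drop the spare from the array, then add it back; this kicks off a rebuild.
mdadm /dev/md0 --remove /dev/sdb1
mdadm /dev/md0 --add /dev/sdb1

# Watch the rebuild progress.
cat /proc/mdstat

# Re-examine a member superblock afterwards -- in my case the
# Array Size reported here still doesn't change.
mdadm -E /dev/sda1
```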
Joel