[mythtv-users] LVM Problem -- Please Help

gLaNDix (Jesse Kaufman) glandix at lloydnet.org
Tue Aug 15 16:23:00 UTC 2006


Tom+Dale wrote:
> I have made arrangements to RMA this drive (NewEgg really has excellent 
> customer service).  So my other questions weigh heavily on my mind 
> (i.e., will I lose ~200GB of data from the still functioning drives that 
> were part of this LVM? Or will I be able to put in an identical, 
> functioning replacement and give it a UUID that the LVM likes in order 
> to access my original drives?

[side note: make sure you read this whole e-mail before you start typing 
commands, as i'm not 100% sure about all of it since my situation was a 
little different :) ... and if i'm wrong on anything, please anyone speak 
up so this guy doesn't lose any data because of me!]

hopefully not! ;) ... i had a (slightly) similar issue earlier this week 
when i upgraded my mobo/cpu and reinstalled FC5 for my mythbackend OS 
... my issue was that i had the LVM volume spanned across 3 disks, one 
being /dev/hda, which i had a feeling was starting to crap out ... what 
you need to do is type:

pvdisplay

you should get information about each physical volume (PV) in your LVM 
volume groups ... for the SATA drive that's gone bad, it will show 
something like "unknown device" for the PV Name (instead of /dev/sda1) 
... the "Allocatable" line should say "yes" (hopefully not "yes (but 
full)") ... better still, compare "Total PE" and "Free PE" -- if they're 
equal, LVM never allocated anything on that drive ... i assume since you 
noticed the sounds fairly quickly, your system hadn't had time to write 
anything to that drive yet ... if that's the case, you're definitely safe! :)
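for reference, the output looks roughly like this (every name and size 
below is made up, not from your system -- i'm just sketching what to 
look for):

```shell
pvdisplay
#   --- Physical volume ---
#   PV Name               /dev/hdb1
#   VG Name               vg_myth
#   PV Size               232.88 GiB
#   Allocatable           yes
#   Total PE              59618
#   Free PE               1024
#
#   --- Physical volume ---
#   PV Name               unknown device     <-- the dead SATA drive
#   VG Name               vg_myth
#   Allocatable           yes
#   Total PE              59618
#   Free PE               59618              <-- equals Total PE: nothing on it
```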

so, we'll assume that no data has been written to that drive yet ... to 
remove the physical volume (pv) from the volume group (vg), you have to 
use vgreduce:

vgreduce [volumeGroupName] /dev/sda[0-9]

where [volumeGroupName] is *gasp* your volume group name ;) and [0-9] is 
whatever partition number you were using on your SATA drive ... i'm 
guessing it's probably 1 ...
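so for example, with a hypothetical volume group called vg_myth and the 
first partition on the SATA drive (substitute your own names):

```shell
# remove the PV /dev/sda1 from the volume group vg_myth
# (vg_myth is a made-up name -- use your actual VG name from pvdisplay)
vgreduce vg_myth /dev/sda1
```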

that's the normal way of removing a physical volume from a volume group, 
BUT, if that doesn't work (i can't recall exactly, but it might give you 
an error saying it can't find the drive with UUID XXXX.....), you'll have 
to run:

vgreduce --removemissing [volumeGroupName]

that will just remove any missing physical volumes from the volume group 
and after that, you should be able to mount your vg again ...
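again with the hypothetical vg_myth name, the fallback would look 
something like:

```shell
# strip any PVs that LVM can no longer find from the volume group
vgreduce --removemissing vg_myth

# sanity-check the result before trying to mount anything
vgdisplay vg_myth
```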

now, one thing with my situation that you don't have in yours is that my 
drive was working well enough that i could resize my ReiserFS filesystem 
down by 63GB (the size of the partition on /dev/hda that was part of my 
volume group) before doing any of this ... then i used pvmove to move 
any data off that drive to the others (since mine definitely was in use) 
... after that i used pvremove prematurely to remove the LVM information 
from the physical volume ... at that point i was where you are now: a 
volume group that didn't work because it was looking for a drive that 
(as far as LVM was concerned) was missing ... from there, i followed the 
above instructions ...

i can't recall for sure, but you may have to do a vgchange -a n 
[volumeGroupName] to tell your system the volume group is not available 
before you do any of the above ... and then vgchange -a y 
[volumeGroupName] to tell your system it's back online afterwards ... 
you should be able to do all of this without any reboots, which (for me) 
helps, because you get to the end point faster! :)

also, if you do have data on your SATA drive and you can get the drive 
to work long enough to resize your filesystem and pvmove the data off, 
remember that pvmove and resize_reiserfs (or resize2fs) can take a 
LOOOOOOOOOONG time ... my pvmove of 63GB took all night to finish :S
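to pull all of that together, here's a rough sketch of the two paths ... 
everything here uses hypothetical names (vg_myth, lv_store, /dev/sda1), 
and note one step i glossed over: you also have to shrink the logical 
volume with lvreduce after shrinking the filesystem, and the filesystem 
shrink must come FIRST or you'll lose data:

```shell
# --- path 1: drive already dead, nothing was ever written to it ---
vgchange -a n vg_myth                # deactivate the VG (may be required)
vgreduce --removemissing vg_myth     # drop the missing PV
vgchange -a y vg_myth                # reactivate the VG
mount /dev/vg_myth/lv_store /mnt/store

# --- path 2: drive still limps along and has data on it ---
# shrink the filesystem FIRST, then the LV; shrinking the FS a bit more
# than the LV is the safe direction if you're unsure about rounding
resize_reiserfs -s -63G /dev/vg_myth/lv_store
lvreduce -L -63G vg_myth/lv_store
pvmove /dev/sda1                     # migrate extents off the dying PV (slow!)
vgreduce vg_myth /dev/sda1           # remove it from the VG
pvremove /dev/sda1                   # wipe the LVM label from the disk
```

(these need root and real LVM devices, so triple-check the names against 
your own pvdisplay/vgdisplay output before running anything)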

i hope this helps you out!  i know how terrifying it can be thinking you 
might lose 100's of gigs worth of data!!!  sorry this isn't very 
organized ... just kinda a quick brain dump, since i just did this on 
Sunday of this week!

-g-

