[mythtv-users] FC4 software raid (LVM)

John Johnson johnatl at mac.com
Sun Feb 26 20:06:03 UTC 2006


Glen,

I have a PowerEdge running FC3 with 4 USB2 external drives attached to 
an add-in card.
I use LVM2 without using md. LVM2 lets you expand and contract volumes, 
remove drives, etc. without breaking things (assuming you run the 
requisite commands first, of course).

PhysicalVolumes are made up of PhysicalExtents (typically thousands), 
much as Partitions are made up of Sectors.
VolumeGroups are made up of 1 or more PhysicalVolumes.
LogicalVolumes are created in VolumeGroups. There can be more than one 
LogicalVolume in a VolumeGroup.
Then you create a File System on the LogicalVolume just like it was 
another hard drive.
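
Each layer has a matching display command, which is handy for checking 
your work at any point along the way:

pvdisplay   # show PhysicalVolumes and their extents
vgdisplay   # show VolumeGroups
lvdisplay   # show LogicalVolumes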

(Note that the commands below assume you don't mind erasing your 
drives.)
The first thing to do is create your PhysicalVolumes:
pvcreate /dev/sdb
pvcreate /dev/sdc
pvcreate /dev/sdd
pvcreate /dev/sde

These commands create a PhysicalVolume using the whole drive. You can 
also use partitions on the drives, but I didn't bother. If you did use 
a partition, you would 'pvcreate /dev/sdb1' and so forth.
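
If you do go the partition route, set the partition type to 8e (Linux 
LVM) before running pvcreate. A quick sketch with fdisk:

fdisk /dev/sdb     # 'n' to create the partition, 't' then '8e' to set its type, 'w' to write
pvcreate /dev/sdb1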

Then create a VolumeGroup from them (I named mine "striped"):
vgcreate -s 16M striped /dev/sdb /dev/sdc /dev/sdd /dev/sde

The -s 16M allows your LogicalVolume (coming next) to be up to 1 
terabyte; the math is 65,536 extents per LogicalVolume times the extent 
size, so -s 8M gets you up to 512 GB and the default -s 4M up to 256 GB. 
You can't change the extent size after the VolumeGroup is created.

Now see how many PhysicalExtents you have:
vgdisplay
Look for the line Total PE, like this:
...
   Act PV                4
   VG Size               596.17 GB
   PE Size               4.00 MB
   Total PE              152620
   Alloc PE / Size       152600 / 596.09 GB
   Free  PE / Size       20 / 80.00 MB
  ...

I allocate slightly less than the total number of extents to leave a 
little room in case it helps fix something later. That may be pure 
superstition on my part, but it only costs 80 MB.
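
If you'd rather not eyeball the number, you can pull it straight out of 
vgdisplay:

vgdisplay striped | grep "Total PE"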

Now create a LogicalVolume in the VolumeGroup:
lvcreate -i 4 -l 152600 striped

The -l 152600 sets the size of the volume, measured in PhysicalExtents.
The -i 4 means create a stripe set across all four drives. I like to 
use a stripe set because it divides work across all the drives evenly. 
So if you're writing a 1 gigabyte episode of Friends, each drive gets 
250 megabytes, with little pieces written to each drive in turn. If you 
don't use a stripe set, then drive sdb fills first, then drive sdc, 
etc. This can lead to sdb sitting there with your archives of Alias, 
while sde is busily recording and deleting whatever you're watching 
later. One drawback is that if a drive in a stripe set goes bad, 
everything is lost. The alternative is a RAID array, usually RAID5, but 
you lose one drive's worth of space to the parity that provides the 
redundancy. In the case of 4 drives, that's 25%.
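
For the curious, a minimal sketch of that RAID5 alternative with mdadm 
(this is not what I run; the device names are just the ones from above):

mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
pvcreate /dev/md0    # then build the VolumeGroup on md0 instead of the raw drives

You give up a drive's worth of space, but the array survives the loss 
of any single drive.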

Your LogicalVolumes will be named lvol0 for the first, lvol1 for the 
second, and so on (or use -n with lvcreate to pick a name yourself).
Have a look in /dev:
ls -lh /dev/striped/*
And you should see your new "drive:"
[root@dell ~]# ls -lh /dev/striped/*
lrwxrwxrwx  1 root root 25 Feb 21 21:58 /dev/striped/lvol0 -> 
/dev/mapper/striped-lvol0

Now create a File System in this LogicalVolume. You'll want one you can 
resize; the ones I can think of right off hand are JFS, XFS, EXT3, and 
ReiserFS (note that XFS can grow but not shrink). I had problems with 
both JFS and Reiser. That could have been my own doing, but the fact 
that I couldn't recover says something about the file systems. I'm 
using XFS without problems.

mkfs.xfs /dev/striped/lvol0

Later if you increase the size of the LogicalVolume, you can use 
xfs_growfs to increase the size of the file system.
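
A sketch of that grow, assuming a hypothetical fifth drive /dev/sdf and 
a made-up extent count; depending on your LVM version, extending a 
4-way striped volume may want free extents spread across as many drives 
as the stripe count:

pvcreate /dev/sdf                       # hypothetical new drive
vgextend striped /dev/sdf               # add it to the VolumeGroup
lvextend -l +12000 /dev/striped/lvol0   # grow the LogicalVolume by that many extents
xfs_growfs /video                       # XFS grows while mounted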

Now put an entry in your /etc/fstab so it will mount automatically when 
you boot:
/dev/striped/lvol0 /video xfs defaults 0 0

Finally, see if you can mount it:
mkdir /mnt/tmp
mount /dev/striped/lvol0 /mnt/tmp

If all is successful, have a look at the free space:
[root@dell ~]# df -hT
Filesystem    Type    Size  Used Avail Use% Mounted on
/dev/sda2     ext3     15G  6.8G  7.0G  50% /
/dev/sda6     ext3     48G   23G   23G  51% /home
/dev/mapper/striped-lvol0 xfs  596G  596G     0 100% /video

Curses, full again :-)

Regards,
   JJ

==== Miscellaneous Notes ====
I made the following changes to /etc/lvm/lvm.conf. This filter keeps 
LVM from scanning every device on the system for PhysicalVolumes.

/etc/lvm/lvm.conf
devices {
...
	# jtj Accept devices named sd*
	filter = [ "a/sd.*/" ]
...
}
...
global {
	...
	# jtj very lenient umask
	umask = 000
	...
	# jtj default to LVM2
	format = "lvm2"
}
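
After editing the filter, you can run vgscan to confirm LVM still finds 
the VolumeGroup through the narrowed device list; it should report 
something like: Found volume group "striped".

vgscan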

While you're in /etc, have a look at 
/etc/udev/permissions.d/50-udev.permissions and check for this:
...
# disk devices
hd*:root:disk:0660
sd*:root:disk:0660
...
# v4l devices
video*:mythtv:root:0660
radio*:mythtv:root:0660
winradio*:mythtv:root:0660
vtx*:mythtv:root:0660
vbi*:mythtv:root:0660
video/*:mythtv:root:0660
vttuner:mythtv:root:0660
v4l/*:mythtv:root:0660
...
I had a problem with root:root owning the video devices after 
rebooting, and made the changes above to change ownership to 
mythtv:root.

I had a problem with the drive assignments (sdb, sdc, etc.) changing if 
I moved a (USB) drive connection, so I set up udev rules that pin the 
assignments by drive serial number. This probably only applies to USB 
drives (maybe FireWire too).

/etc/udev/rules.d/05-usb-drives.rules:

BUS="scsi", PROGRAM="/sbin/usb_scsi_serial %k", RESULT="DEF10A76238E", 
NAME="sdb", SYMLINK="lvm-a"
BUS="scsi", RESULT="DEF10A763B78", NAME="sdc", SYMLINK="lvm-b"
BUS="scsi", RESULT="DEF10A74848A", NAME="sdd", SYMLINK="lvm-c"
BUS="scsi", RESULT="DEF10A762F16", NAME="sde", SYMLINK="lvm-d"

/sbin/usb_scsi_serial:

#!/bin/sh
# Print the serial number for the given SCSI block device (e.g. sdb),
# by finding its sysfs device directory and reading its serial file.

serial_file=`udevinfo -a -p /block/$1 | grep "/sys/devices/pci" | cut "-d/" -f2-7`
serial=`cat /$serial_file/serial`

echo $serial



To get the serial numbers for your drives you can do this:
for i in b c d e; do
	echo sd$i;
	udevinfo -a -p /block/sd$i | \
		egrep "serial[^:]+$";
done

PS: For future reference, pvmove is /slow/. Probably 2 GB an hour.
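
The sort of thing you'd use it for, sketched with the drive names from 
above (pvmove needs enough free extents elsewhere in the VolumeGroup to 
migrate onto):

pvmove /dev/sdb             # migrate all of sdb's extents to other drives
vgreduce striped /dev/sdb   # then drop sdb from the VolumeGroup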

On 25-Feb-2006, at 22:46, Glen Johnson wrote:

> I'm attempting to set up a myth .19 backend on a Dell Poweredge 2400
> PIII/866 box using Fedora Core 4.
...
> Now I want to move my 4 drive array storage from a windows box to my
> myth server.
...
> When I issue the mdadm --create command, it looks like it's gonna work.
> I get a message that the array is being built, it runs for less than a
> minute, then I start getting all kinds of errors.  Basically the system
> loses communication with all of the drives.  Even the SCSI drive.

---
Help everyone. If you can't do that, then at least be nice.


