Re: [Xen-users] lomount + lvm
John,

Worked like a charm, you rock! How on God's green Earth did you figure this out? I would have been stuck if it weren't for you, but now I am stoked.

- Brian

On Feb 6, 2009, at 7:56 AM, John Haxby wrote:

Brian Krusic wrote:
> Forgot, my conf file's disk line of interest looks like this:
>
>     disk = [ "tap:aio:/var/lib/xen/images/foo.img,xvda,w" ]

First make sure your guest isn't running, unless you want to trash its file systems.

    losetup -f /var/lib/xen/images/foo.img
    losetup -a
    # Make a note of which device corresponds to /var/lib/xen/images/foo.img;
    # I'll call it /dev/loopN, but it's probably /dev/loop0
    kpartx -va /dev/loopN

You'll now get two new entries in /dev/mapper: /dev/mapper/loopNp1 and /dev/mapper/loopNp2. loopNp1 is /boot (assuming you have a standard layout); loopNp2 is a volume group. You can just mount /dev/mapper/loopNp1 to poke around the /boot file system. Now:

    vgscan

This is where you might come unstuck. The default volume group name for Red Hat and similar is "VolGroup00". If your dom0 is using LVM and so is the guest, then you'll have two VolGroup00's, and that's bad. The best thing to do in that case is to boot a rescue image in a different domU and rename the guest's volume group. You'll need to undo the kpartx and losetup first (see below), and when you've finished you'll need to either fix up the guest's /boot/initrd*.img, /etc/fstab and /boot/grub/grub.conf to use the new name, or rename the volume group back again in the rescue guest.

Anyway, assuming you don't get a clash:

    vgchange -ay VolGroup00

The guest's file systems are now in /dev/VolGroup00 and you can mount them as normal.

To undo everything:

    1. umount any file systems you mounted
    2. vgchange -an VolGroup00
    3. kpartx -d /dev/loopN
    4. losetup -d /dev/loopN

And next time you build a system, change the name of its volume group so you don't wind up with two systems with the same volume group name!
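For convenience, the steps above can be collected into a pair of shell functions, a sketch rather than a polished tool. The image path and the VolGroup00 name come from the thread; the mount point, the LogVol00 root LV name, and the function names are my own assumptions, so check `lvs` output on your guest before trusting them. Run as root, and only while the guest is stopped.

```shell
#!/bin/sh
# Sketch of the attach/detach procedure from the thread above.
# Assumes a standard Red Hat-style layout: partition 1 is /boot,
# partition 2 holds the VolGroup00 volume group.
IMG=/var/lib/xen/images/foo.img   # from the domU config in the thread
MNT=/mnt/guest-root               # hypothetical mount point
VG=VolGroup00                     # default Red Hat volume group name

attach_guest_image() {
    LOOP=$(losetup -f)            # next free loop device, e.g. /dev/loop0
    losetup "$LOOP" "$IMG"        # attach the image to it
    kpartx -va "$LOOP"            # create /dev/mapper/loopNp1, loopNp2
    vgscan                        # discover the guest's volume group
    vgchange -ay "$VG"            # activate it (will misbehave on a name clash!)
    mkdir -p "$MNT"
    mount "/dev/$VG/LogVol00" "$MNT"   # root LV name is a guess; verify with lvs
}

detach_guest_image() {
    umount "$MNT"
    vgchange -an "$VG"            # deactivate the guest's volume group
    kpartx -d "$LOOP"             # remove the partition mappings
    losetup -d "$LOOP"            # detach the loop device
}
```

Always run `detach_guest_image` before starting the guest again; leaving the volume group active in dom0 while the domU boots is another way to corrupt its file systems.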
And I wish Red Hat had listened to me years ago when I said that "VolGroup00" was a really poor idea.

jch

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users