Re: [Xen-users] Using lvm for domUs
Nico Kadel-Garcia <nkadel@xxxxxxxxx> writes:

> Goswin von Brederlow wrote:
>> If you have enough disks/partitions you can make one VG per domU and
>> share it between dom0 and domU, using cluster LVM (clvm) to keep them
>> in sync. Instead of real disks/partitions you can run LVM in dom0,
>> create an LV per domU and run LVM on it again. But then you have to
>> specifically include the first LVM's LVs in lvm.conf and run vgscan
>> twice. I think that should work.
>>
> Have you tried this? For various reasons, I'm looking at using iSCSI
> or clvm for having common storage and permitting live migrations, and
> for snapshotting the LVs for backup purposes. (LVM snapshots and
> rsnapshot, a combination made in heaven!)

I was thinking about DRBD 0.8 active-active with clvm, but DRBD 0.8
caused crashes, and DRBD 0.7 can't do active-active. With iSCSI and
clvm there should be no problem. clvm is just a little daemon that
basically runs vgscan on all hosts whenever you change something.

>> We use LVM inside LVs here on an HA cluster, but the inner LVM is
>> exclusively for the domU. No sharing with dom0. The dom0 LVM is there
>> so we can resize and create/destroy LVs on the fly; the inner LVM is
>> there because all our servers are set up the same.
>>
> Unfortunately for me, my Dom0 hardware does not have the CPU support
> for full virtualization, so access to backup media has to go through
> Dom0 or be pushed over the network. And the network or Xen usage seems
> to double my backup time, so I *need* Dom0 to be able to access the
> hard drive contents of DomU.

You can always create LVs in dom0 and export them as sda1/2/3/... to
the domU. You have to reboot the domU to resize anyway. That way you can
snapshot them in dom0, mount the snapshot and run your backup from
there.

MfG
        Goswin
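
A rough sketch of the nested-LVM setup described above, i.e. letting
dom0's lvm.conf also scan the LVs of the outer VG and then running
vgscan twice; the VG name vg0 and the device filter are only examples,
not taken from the original mail:

    # /etc/lvm/lvm.conf in dom0: accept the real disks *and* the LVs of
    # the outer VG so the inner PVs become visible, reject everything else
    # filter = [ "a|^/dev/sd.*|", "a|^/dev/vg0/.*|", "r|.*|" ]

    vgscan                 # first pass: finds the outer VG (vg0) on the disks
    vgchange -ay vg0       # activating vg0 creates the /dev/vg0/* LV nodes
    vgscan                 # second pass: finds the inner VGs on those LVs
    vgchange -ay           # activate the inner VGs as well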
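For the clvm part, the usual ingredients are cluster-wide LVM locking
plus marking the shared VG as clustered; a sketch under the assumption
of an already running cluster stack and a VG called shared_vg (both the
init-script path and the VG name are illustrative):

    # on every node: set locking_type = 3 in /etc/lvm/lvm.conf so LVM
    # goes through clvmd for locking, then start the daemon
    /etc/init.d/clvmd start

    # on one node: flag the shared VG as clustered so metadata changes
    # (lvcreate, lvresize, ...) are propagated to all nodes
    vgchange -c y shared_vg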
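The dom0-side snapshot backup suggested at the end could look roughly
like this; the LV name domu1-root, the snapshot size and the mount point
are made-up examples:

    # in the domU config file, the dom0 LV is exported to the guest as sda1:
    #   disk = [ 'phy:/dev/vg0/domu1-root,sda1,w' ]

    # in dom0, at backup time:
    lvcreate -s -L 2G -n domu1-root-snap /dev/vg0/domu1-root
    mount -o ro /dev/vg0/domu1-root-snap /mnt/backup
    rsync -a /mnt/backup/ /backups/domu1/   # or point rsnapshot at /mnt/backup
    umount /mnt/backup
    lvremove -f /dev/vg0/domu1-root-snap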