[Xen-users] Snapshotting LVM backed guests from dom0
Just looking for some feedback from other people who do this. I know it's not a good "backup" method, but "crash consistent" images have been very useful for me in disaster situations, just to get the OS running quickly and then restore data from a proper data backup.

My typical setup is to take an LVM snapshot of the guest's LV while the guest is running, then dd the snapshot to a backup file on an NFS mount point (a rough sketch of the commands is at the end of this message). The problem is that VM performance gets pretty poor while the copy is happening. My guesses at why this was happening were:

1. dom0 having equal weight to the other 4 guests on the box and somehow hogging CPU time
2. Lack of QoS on the IO side / dom0 hogging IO
3. Process priorities in dom0
4. NFS overhead

For each of these items I tried adjusting things to see if it improved (also sketched at the end):

1. Tried increasing dom0's weight to 4x that of the other VMs.
2. Saw Pasi mentioning dm-ioband a few times and think it might address the IO scheduling side, but I haven't tried it yet.
3. Tried nice-ing the dd to the lowest priority and qemu-dm to the highest.
4. Changed the destination to a local disk instead of NFS.

Changing the things above didn't really seem to help, either alone or in combination. My setup is Xen 3.2 and Xen 4.0 on dual Nehalem processors, 24GB RAM, and a RAID 5+0 of WD RE3 1TB disks. The hardware in the boxes is quite good, and there seems to be no noticeable difference between the Xen versions.

What I'd ideally like is to take the backups with the least possible impact on the running VMs. I honestly don't care how long the backups take, but I'd rather not just throttle them to a fixed speed, because that seems inefficient/hacky.

Can anyone share their experiences, both good and bad?

Thanks,
- chris
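For reference, the snapshot-and-dd workflow described above looks roughly like this; the volume group and LV names, the snapshot size, and the NFS mount point are placeholders, not my actual config:

    # Create a copy-on-write snapshot of the running guest's disk LV
    # (vg0/guest1-disk, the 10G snapshot size and /mnt/nfs-backup are placeholders).
    lvcreate --snapshot --size 10G --name guest1-snap /dev/vg0/guest1-disk

    # Copy the crash-consistent snapshot to a file on the NFS mount.
    dd if=/dev/vg0/guest1-snap of=/mnt/nfs-backup/guest1.img bs=1M

    # Drop the snapshot so it stops accumulating copy-on-write overhead.
    lvremove -f /dev/vg0/guest1-snap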
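The tuning attempts in the list above were along these lines; the weight value and paths are illustrative, and the ionice class only takes effect if dom0's disks are using the CFQ scheduler:

    # 1. Give dom0 4x the credit-scheduler weight of the guests
    #    (the default weight is 256).
    xm sched-credit -d Domain-0 -w 1024

    # 3. Run the copy at the lowest CPU priority, and at idle IO priority
    #    if dom0 uses CFQ.
    ionice -c3 nice -n 19 dd if=/dev/vg0/guest1-snap of=/mnt/nfs-backup/guest1.img bs=1M

    #    ...and bump qemu-dm to the highest CPU priority
    #    (note: pidof matches the qemu-dm processes of all HVM guests).
    renice -20 -p $(pidof qemu-dm)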