Re: [Xen-users] snapshots on xen
Hi Pasi,

On Monday, 12.04.2010, at 10:39 +0300, Pasi Kärkkäinen wrote:
> On Mon, Apr 12, 2010 at 09:31:59AM +0200, Thomas Halinka wrote:
> > Hi Pasi,
> >
> > On Monday, 12.04.2010, at 09:20 +0300, Pasi Kärkkäinen wrote:
> > > On Sat, Apr 10, 2010 at 08:58:59AM -0400, James Pifer wrote:
> > > > On Sat, 2010-04-10 at 13:55 +0300, Pasi Kärkkäinen wrote:
> > > > > On Fri, Apr 09, 2010 at 08:57:16PM -0400, James Pifer wrote:
> > > > > > Hi all. I'm using sles11 for my xen servers. I would really like
> > > > > > to be able to do hot snapshots of disk AND memory, and then have
> > > > > > these snapshots backed up to tape or off site for disaster
> > > > > > recovery. I'm thinking this would be done weekly or monthly, not
> > > > > > nightly.
> > > > > >
> > > > > > Two questions:
> > > > > >
> > > > > > 1) Is this even possible?
> > > > > >
> > > > >
> > > > > 1) xm save <guest> <guest>.save
> > > > > 2) save a copy of the guest disks
> > > > > 3) save a copy of the <guest>.save file
> > > > > 4) xm restore <guest>.save
> > > > >
> ..........
> > > When the guest is saved/stopped, you can take a backup of the disks,
> > > and store the backup with the state/save-file.
> >
> > How does this behave with blktap2? I read that blktap2 passes all disk
> > I/O requests from VMs to the userspace daemon through a character
> > device.
> >
> > If I read the release notes correctly, we should get a consistent FS
> > without "xm save", just through using blktap2/vhd?
> >
>
> Depends what you mean with 'consistent'.

Consistent means that all buffer caches and I/O queues have been written/flushed to disk.

> If you want to do a disk snapshot online then you always need to coordinate
> the snapshot with the guest OS/kernel/apps - the guest needs to have
> the apps in a consistent state and all the buffers flushed when you take the
> disk snapshot.

Yeah, but I understood that Xen 4.0 implemented "some magic" around this topic.

> Windows provides the VSS framework for this, but there's nothing general in
> Linux for this.
> And also you need to coordinate that stuff with the snapshot, have the
> timing correct.
>
> So, even if you used blktap2/vhd, you'd have to trigger and coordinate the
> 'prepare apps and flush caches' in the guest to happen at the correct time
> for the disk snapshot to be consistent.

The Xen 4.0 datasheet (http://www.xen.org/files/Xen_4_0_Datasheet.pdf) says:

    Blktap2 - A new virtual hard disk (VHD) implementation delivers high
    performance VM snapshots and cloning features as well as the ability
    to do live virtual disk snapshots without stopping a VM process.

So I thought it just works, e.g. through some kernel hacking in pvops, or whatever.

> XenServer/XCP has a method for this, through the Citrix Windows PV drivers.
> So yeah.. blktap2 is just a part of the solution. You need more to actually
> do it properly.

Hmm, ok - just found it in the docs, too:

    Snapshots:

    Pausing a guest will also plug the corresponding IO queue for blktap2
    devices and stop blktap2 drivers. This can be used to implement a
    safe live snapshot of qcow and vhd disks. An example script "xmsnap"
    is shown in the tools/blktap2/drivers directory. This script will
    perform a live snapshot of a qcow disk. VHD files can use the
    "vhd-util snapshot" tool discussed above. If this snapshot command is
    applied to a raw file mounted with tap:tapdisk:AIO, include the -m
    flag and the driver will be reloaded as VHD. If applied to an already
    mounted VHD file, omit the -m flag.
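Following that README, a live VHD snapshot might look roughly like this - a sketch only, with a placeholder guest name and paths; the shipped xmsnap script is the worked example and also handles re-attaching the paused tapdisk so the guest resumes on the new writable child:

    # pause the guest: blktap2 plugs the I/O queue and quiesces the tapdisk
    xm pause guest1
    # keep the current image as the read-only parent and create a new
    # child VHD that will receive all writes from now on
    vhd-util snapshot -n /xen/guest1/disk-child.vhd -p /xen/guest1/disk.vhd
    # (re-point the guest's tap device at disk-child.vhd here - see xmsnap)
    # resume the guest
    xm unpause guest1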
So my next question is: blktap2/vhd seems great for snapshots and clones, and will have future support for thin provisioning (like pre-allocation), but are there any advantages over LVM at the moment?

> So "xm save" method is easier..
>
> -- Pasi

thanks,
thomas
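P.S. For reference, the offline "xm save" method from the top of the thread, scripted end to end - a sketch only, with a placeholder guest name and file-backed disk path:

    # save the guest's memory state (this also stops the guest)
    xm save guest1 /backup/guest1.save
    # with the guest stopped, its disks are consistent - copy them
    cp /var/lib/xen/images/guest1-disk0.img /backup/
    # send /backup to tape or off-site, then bring the guest back
    xm restore /backup/guest1.save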