RE: [Xen-users] Yet another backup proposal (file based vbd's & lvm)
> From what I've gathered in the list archives and google, LVM snapshots
> are frowned upon, rdiff-backup seems to get decent remarks, and
> dd-like backups are too slow and space consuming.

?? LVM frowned upon ??

Can you clarify this? We use LVM here with Xen-3.0.2 and it works well.
There are a lot of notes and discussions on the list about using LVM to
provide block devices for the DomUs. AFAIK, Xen+LVM is pretty widely used.

> What I'd like to attempt to set up (based on the feedback I receive)
> is the following:
>
> 1. Pause the domU using "xm pause"
> 2. Sync the domU using "xm sysrq"
> 3. Use rdiff-backup to make a local backup of the file VBD
> 4. Resume the domU using "xm unpause"

If you don't exclude LVM, and instead use it to provide LVs as VBDs for
the DomUs, an alternative sequence of events would let you resume the
DomU a lot faster (a rough shell sketch follows the list):

1. Pause the domU using "xm pause"
2. Sync the domU using "xm sysrq"
3. Use LVM to snapshot the LV providing the VBD to the DomU
4. Resume the domU using "xm unpause"
5. Use rdiff-backup to make a local backup of the snapshot LV
6. Remove the snapshot
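For concreteness, here is an untested sketch of that sequence as a shell
script. The domain name (domu1), volume group (vg0), LV name, snapshot
size and backup destination are all made-up placeholders, and since
rdiff-backup backs up files rather than raw device contents, the sketch
mounts the snapshot read-only before backing it up:

  #!/bin/sh
  DOMAIN=domu1          # assumed domain name
  VG=vg0                # assumed volume group
  LV=$DOMAIN-disk       # assumed LV backing the DomU's VBD
  SNAP=$DOMAIN-snap
  MNT=/mnt/$SNAP
  DEST=/backup/$DOMAIN

  xm pause $DOMAIN                         # 1. freeze the DomU
  xm sysrq $DOMAIN s                       # 2. emergency sync (needs sysrq
                                           #    support in the DomU kernel)
  lvcreate -s -L 1G -n $SNAP /dev/$VG/$LV  # 3. snapshot the backing LV
  xm unpause $DOMAIN                       # 4. resume; downtime ends here

  mkdir -p $MNT                            # 5. back up the snapshot at leisure
  mount -o ro /dev/$VG/$SNAP $MNT
  rdiff-backup $MNT $DEST
  umount $MNT

  lvremove -f /dev/$VG/$SNAP               # 6. drop the snapshot

The 1G snapshot size only has to hold the blocks the DomU changes while
the backup runs, so size it to your write load.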
Moving the rdiff-backup from step 3 to step 5 cuts the time between the
pause and unpause actions. Running rdiff-backup over (for example) a 10GB
block device image will still take some time even on a fast system: with
200MB/sec disk access, just scanning the block device for data changes
takes more than 100 seconds (50 secs to read the old backup image and 50
secs to read the new file VBD), so you probably wouldn't want to keep a
running server frozen for that long. In comparison, taking a snapshot of
an LV should take less than a second, after which you can resume the
server straight away.
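A quick back-of-the-envelope check of those figures, using the numbers
from the example above:

  # 10GB image, 200MB/sec sustained reads; rdiff-backup has to read both
  # the old backup and the new image, hence the factor of two.
  IMAGE_MB=10000
  DISK_MBPS=200
  echo $(( 2 * IMAGE_MB / DISK_MBPS ))   # => 100 seconds of scanning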
BR,
Roger