Re: [Xen-devel] Domain Save Image Format proposal (draft B)
On Tue, Feb 11, 2014 at 5:58 AM, David Vrabel <david.vrabel@xxxxxxxxxx> wrote:
Totally agree. But as someone else put it (and you did as well), my point was that it's sufficient to specify it once, somewhere in the image header, while making sure (as you put it below) that the current use cases don't have to go through needless endian conversion. However, I do think it can be specified in such a way that all the current use cases remain conversion-free.

Yep. I was mistaken.
My suggestion is that, when saving the image to disk, why not have a single image-wide checksum, to ensure that the image being restored from disk is still valid?
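Something along these lines (a minimal sketch, not part of the proposal; it assumes the checksum is appended as a trailer after the image body, and uses zlib's crc32() purely as an example algorithm):

/* Minimal sketch: accumulate a single image-wide CRC over every byte
 * written to disk, and append it as a trailer.  On restore, recompute
 * over the same bytes and compare before trusting the image. */
#include <stdint.h>
#include <stdio.h>
#include <zlib.h>

static uLong image_crc;  /* crc32(0, Z_NULL, 0) == 0, so 0 is a valid start */

static int write_image_bytes(FILE *f, const void *buf, size_t len)
{
    image_crc = crc32(image_crc, buf, (uInt)len);
    return fwrite(buf, 1, len, f) == len ? 0 : -1;
}

static int write_image_trailer(FILE *f)
{
    uint32_t crc = (uint32_t)image_crc;
    return fwrite(&crc, sizeof(crc), 1, f) == 1 ? 0 : -1;
}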
> PAGE_DATA

What record flipping? For page compression, Remus basically has a simple XOR+RLE encoded sequence of bytes, preceded by a 4-byte length field. Instead of sending the usual 4K of per-page page_data, this compressed chunk is sent. The additional code on the remote side is an extra "if" block that uses xc_uncompress instead of memcpy to recover the uncompressed page.
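In other words, roughly this on the receive side (a sketch; handle_page_data and is_compressed are illustrative names, and uncompress_page is a hypothetical stand-in for xc_uncompress):

#include <stdint.h>
#include <string.h>

#define PAGE_SIZE 4096

/* Hypothetical stand-in for xc_uncompress: decode 'len' bytes of
 * XOR+RLE data into 'page', XORing against the page's previous
 * contents. */
int uncompress_page(uint8_t *page, const uint8_t *chunk, uint32_t len);

static int handle_page_data(uint8_t *page, const uint8_t *payload,
                            uint32_t len, int is_compressed)
{
    if (is_compressed)
        /* Compressed chunk: the 4-byte length field gave us 'len'. */
        return uncompress_page(page, payload, len);

    /* The usual case: 4K of raw page data. */
    if (len != PAGE_SIZE)
        return -1;
    memcpy(page, payload, PAGE_SIZE);
    return 0;
}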
It would not change the way the PAGE_DATA record is transmitted. One potentially cooler addition, though, would be to use the option field of the record header to indicate whether the data is compressed or not. Given that we have 64 bits, we could even go as far as specifying the type of compression module used (e.g., none, remus, gzip, etc.). This might be really helpful when one wants to save/restore large images (an 8 GB VM, for example) to/from disk. Is this better or worse than simply gzipping the entire saved image? I don't know yet.
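For illustration, assuming a 64-bit options field in the record header (the layout and names below are made up for the sketch, not quoted from draft B):

#include <stdint.h>

/* Low bits of the 64-bit options field select the compression module. */
#define COMPRESS_NONE   0x0ULL  /* raw 4K pages                      */
#define COMPRESS_REMUS  0x1ULL  /* XOR+RLE against the previous copy */
#define COMPRESS_GZIP   0x2ULL  /* self-contained, no history needed */
#define COMPRESS_MASK   0xffULL

struct record_header {
    uint32_t type;     /* e.g. PAGE_DATA        */
    uint32_t length;   /* body length in bytes  */
    uint64_t options;  /* compression bits etc. */
};

static inline uint64_t record_compression(const struct record_header *h)
{
    return h->options & COMPRESS_MASK;
}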
However, for live migration this would be pretty helpful, especially when migrating over long-latency networks. Remus' compression technique cannot be used for live migration, as it requires a previous version of the pages for its XOR+RLE compression; gzip and other such compression algorithms, though, would be pretty handy in the live migration case, over a WAN or even a clogged LAN where there are tons of VMs being moved back and forth. Feel free to shoot down this idea if it seems unfeasible.
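As a sketch of that case (illustrative only; it batches pages through zlib's compress2(), which, unlike XOR+RLE, needs no previous copy of the pages):

#include <stdint.h>
#include <zlib.h>

#define PAGE_SIZE 4096

/* Compress a batch of 'count' pages into 'out'; returns the compressed
 * size, or 0 on failure (in which case the sender can fall back to
 * sending the batch uncompressed). */
static uLongf compress_batch(uint8_t *out, uLongf out_len,
                             const uint8_t *pages, unsigned count)
{
    uLongf dest_len = out_len;

    if (compress2(out, &dest_len, pages,
                  (uLong)count * PAGE_SIZE, Z_BEST_SPEED) != Z_OK)
        return 0;
    return dest_len;
}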
Thanks
shriram

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel