
Re: [Xen-devel] [PATCH v4 0/6] save/restore on Xen

On 01/23/2012 04:47 AM, Stefano Stabellini wrote:
On Fri, 20 Jan 2012, Jan Kiszka wrote:
On 2012-01-20 18:20, Stefano Stabellini wrote:
Hi all,
this is the fourth version of the Xen save/restore patch series.
We have been discussing this issue for quite a while on #qemu.

A few different approaches were proposed to achieve the goal of a
working save/restore with upstream Qemu on Xen; after prototyping some
of them, I came up with yet another solution, one that I think leads to
the best results with the least amount of code duplication and ugliness.
I am far from saying that this patch series is an example of elegance
and simplicity, but it is closer to acceptable than anything else I have
seen so far.

What's new is that Qemu is going to keep track of its own physmap on
xenstore, so that Xen can be fully aware of the changes Qemu makes to
the guest's memory map at any time.
This is all handled by Xen or Xen support in Qemu internally and can be
used to solve our save/restore framebuffer problem.
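To make the physmap-tracking idea concrete, here is a minimal sketch of how Qemu could record one physmap entry under a per-domain xenstore path. The exact path layout and the function names (`physmap_key`, `physmap_value`) are assumptions for illustration; real code would hand these strings to `xs_write()` from libxenstore rather than just formatting them.

```c
#include <stdint.h>
#include <stdio.h>

/* Build the xenstore key for one physmap entry of a given domain.
 * The path layout is a hypothetical example of a per-device-model
 * namespace that the toolstack could read back on restore.
 * Returns the number of characters written (excluding the NUL). */
int physmap_key(char *buf, size_t len, int domid, uint64_t start_addr)
{
    return snprintf(buf, len,
                    "/local/domain/0/device-model/%d/physmap/%llx/start_addr",
                    domid, (unsigned long long)start_addr);
}

/* Format the value: the guest physical address, in hex, that this
 * mapping was moved to. */
int physmap_value(char *buf, size_t len, uint64_t phys_offset)
{
    return snprintf(buf, len, "%llx", (unsigned long long)phys_offset);
}
```

Since the entries live in xenstore, they survive the device model and can be replayed by the Xen support code in Qemu when the domain is restored.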

From the Qemu common code POV, we still need to avoid saving the guest's
ram when running on Xen, and we need to avoid resetting the videoram on
restore (that is a benefit in the generic Qemu case too, because it
saves a few cpu cycles).
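The videoram point can be sketched as a guard around the reset-time memset. The flag name `restoring` and the function name are illustrative stand-ins, not Qemu's actual identifiers:

```c
#include <string.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical flag set on the incoming-restore path before the
 * initial device reset runs. */
bool restoring;

void vga_reset_vram(unsigned char *vram, size_t size)
{
    /* On a fresh boot the vram must start out zeroed, but on restore
     * its contents were already loaded (by Xen, in this series), so
     * the memset would destroy them - and it wastes cycles even in
     * the generic non-Xen case. */
    if (!restoring) {
        memset(vram, 0, size);
    }
}
```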

For my understanding: is refraining from the memset required because the
already restored vram would otherwise be overwritten? Or what is the
ordering of init, RAM restore, and initial device reset now?

1. RAM restore (done by Xen)
2. physmap rebuild (done by xen_hvm_init in qemu)

That's your problem. You don't want to do load_vmstate(). You just want to load the device model, not RAM.

You should have a separate load_device_state() function and mark anything that is RAM as RAM when doing savevm registration. Better yet, mark devices as devices since that's what you really care about.
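The suggestion can be sketched with a simplified section registry. The `SaveSection` struct and `load_device_state()` below are stand-ins for Qemu's real savevm machinery, shown only to illustrate the idea of tagging sections at registration time and skipping RAM on load:

```c
#include <stdbool.h>
#include <stddef.h>

/* Simplified stand-in for a savevm registration entry. */
typedef struct SaveSection {
    const char *name;
    bool is_ram;                 /* marked at savevm registration time */
    void (*load)(void *opaque);  /* restore this section's state */
    void *opaque;
} SaveSection;

/* Restore only the non-RAM sections: on Xen the RAM has already been
 * restored by the hypervisor/toolstack before Qemu starts. */
void load_device_state(SaveSection *sections, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        if (!sections[i].is_ram) {
            sections[i].load(sections[i].opaque);
        }
    }
}

/* Tiny demo callback: records that a section was loaded. */
void mark_loaded(void *opaque)
{
    *(int *)opaque = 1;
}
```

With this split, the Xen restore path calls `load_device_state()` and never touches the RAM sections, instead of filtering them out of a full `load_vmstate()`.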


Anthony Liguori

Xen-devel mailing list
