
Re: [Xen-devel] [PATCH V2 5/5] vga-cirrus: Workaround during restore when using Xen.

On Thu, 5 Jan 2012, Jan Kiszka wrote:
> On 2012-01-05 15:50, Avi Kivity wrote:
> >> Let me summarize what we have come up with so far:
> >>
> >> - we move the call to xen_register_framebuffer before
> >> memory_region_init_ram in vga_common_init;
> >>
> >> - we prevent xen_ram_alloc from allocating a second framebuffer on
> >> restore, checking for mr == framebuffer;
> >>
> >> - we avoid memsetting the framebuffer on restore (useless anyway, the
> >> videoram is going to be loaded from the savefile in the general case);
> >>
> >> - we introduce a xen_address field to cirrus_vmstate and we use it
> >> to map the videoram into qemu's address space and update vram_ptr in
> >> cirrus_post_load.
> >>
> >>
> >> Does it make sense? Do you agree with the approach?
> > 
> > Yes and yes.
> To me this still sounds like a cirrus-only xen workaround that
> nevertheless spreads widely.

The first two points are still necessary even if we go with the
early_savevm option;
the third point is a good change regardless of the qemu accelerator;
the only cirrus-only workaround is the last point, and it certainly
doesn't spread widely. It is true, though, that other framebuffer-based
devices would need a similar change.
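To make the second point concrete, here is a minimal standalone sketch of the guard being discussed: skip the allocation in xen_ram_alloc when the region is the registered framebuffer and we are restoring. The types and signatures are stand-ins, not QEMU's real MemoryRegion API, and `restoring` is an illustrative parameter (real code would consult the runstate); only the comparison logic is the point.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Minimal stand-in for QEMU's MemoryRegion; illustrative only. */
typedef struct MemoryRegion {
    const char *name;
} MemoryRegion;

/* Set by xen_register_framebuffer(), which per point one is now called
 * from vga_common_init() before memory_region_init_ram(). */
static MemoryRegion *framebuffer;

void xen_register_framebuffer(MemoryRegion *mr)
{
    framebuffer = mr;
}

/* Sketch of the guard from point two: on restore, do not allocate a
 * second framebuffer -- the videoram already exists on the Xen side.
 * Returns true if an allocation would actually be performed. */
bool xen_ram_alloc(MemoryRegion *mr, size_t size, bool restoring)
{
    (void)size;
    if (restoring && mr == framebuffer) {
        /* Videoram already present; skip the duplicate allocation. */
        return false;
    }
    /* ... real code would populate the physmap here ... */
    return true;
}
```

The comparison against the registered region is what makes point one a prerequisite: the framebuffer pointer must be known before the first allocation request arrives.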

> Again, what speaks against migrating the information Xen needs before
> creating the machine or a single device? That would only introduce a
> generic concept of an (optional) "early", let's call it
> "accelerator-related" vmstate and would allow Xen to deal with all the
> specifics behind the curtain.

I am OK with both approaches.
The only practical difference between the two is where we pass the xen
framebuffer address across save/restore: either in the device-specific
savevm record or in a xen-specific early_savevm record.
Which is it going to be?
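Either way, the restore side ends up doing the same thing, which point four describes for cirrus: after the savevm record is loaded, remap the videoram at the recorded address and refresh vram_ptr. A minimal sketch, with hypothetical field and helper names (the real code would call into the Xen mapping interface, e.g. xc_map_foreign_range, rather than the fake mapper below):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical slice of CirrusVGAState; only the two fields relevant
 * to point four are shown, and the names are illustrative. */
typedef struct CirrusVGAState {
    uint64_t xen_address;  /* guest address of the videoram, from vmstate */
    uint8_t *vram_ptr;     /* qemu-side mapping, rebuilt after load */
} CirrusVGAState;

/* Stand-in for the Xen foreign-mapping call; returns a local buffer
 * so the sketch is self-contained. */
static uint8_t fake_vram[64 * 1024];
static uint8_t *xen_map_videoram(uint64_t addr)
{
    return addr ? fake_vram : NULL;
}

/* Sketch of cirrus_post_load as described in point four: remap the
 * videoram at xen_address and update vram_ptr to the new mapping. */
int cirrus_post_load(CirrusVGAState *s)
{
    if (s->xen_address) {
        s->vram_ptr = xen_map_videoram(s->xen_address);
        if (!s->vram_ptr) {
            return -1;  /* mapping failed; abort the restore */
        }
    }
    return 0;
}
```

Whether xen_address travels in the cirrus vmstate or in an early_savevm record only changes who owns the field, not this remapping step.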

Xen-devel mailing list
