
Re: [Xen-devel] gross qemu behavior

On 04/04/2014 18:54, Stefano Stabellini wrote:
On Fri, 4 Apr 2014, Jan Beulich wrote:
On 04.04.14 at 17:32, <stefano.stabellini@xxxxxxxxxxxxx> wrote:
On Fri, 4 Apr 2014, Jan Beulich wrote:
On 04.04.14 at 15:53, <stefano.stabellini@xxxxxxxxxxxxx> wrote:
On Fri, 4 Apr 2014, Jan Beulich wrote:
No, it should just never appear at the wrong address. As I said above,
I'd consider it halfway acceptable if it remained mapped despite an
unmap, but I do think that a proper solution (properly unmapping
without de-allocating) can and should be found.
There is no way to do it today AFAICT.
Would you care to propose a hypercall that would support this?
Hypercall? Everything you need is there afaict.
Maybe I am missing something.

Moving the PCI ROM to the right place in the guest physmap is easy.
However how do you think we could unmap the memory (remove it from the
guest physmap) without deallocating it?
The only hypercall we have is xc_domain_add_to_physmap at the moment.
XENMEM_remove_from_physmap should be quite suitable here, but
that is not your problem. The problem is that XENMEM_add_to_physmap
requires a GFN to be passed in, i.e. assumes that a page to be mapped
is already mapped somewhere in the guest. (The term "add" in this
context is rather confusing, as in the GMFN map space case the page
isn't being added, but moved.) So indeed there is functionality missing
in the hypervisor.

After we call XENMEM_remove_from_physmap, there is no way of adding back
the pages to the physmap without knowing the corresponding mfns.  Even
if we knew the mfns of the original allocation, relying on them is
probably not a good idea because the pages could theoretically be
offlined or shared.

This is the reason why I was asking for the hypercall you had in mind to
solve this problem.

Any news about this discussion?

Thanks for any reply.

Xen-devel mailing list


