
Re: [Xen-devel] [PATCH v5 4/9] xen: introduce XEN_DOMCTL_devour



On Tue, 2015-01-13 at 17:17 +0100, Vitaly Kuznetsov wrote:
> Ian Campbell <Ian.Campbell@xxxxxxxxxx> writes:
> 
> > On Thu, 2014-12-11 at 14:45 +0100, Vitaly Kuznetsov wrote:
> >> +            gmfn = mfn_to_gmfn(d, mfn);
> >
> > (I haven't thought about it super hard, but I'm taking it as given that
> > this approach to kexec is going to be needed for ARM too, since that
> > seems likely)
> >
> > mfn_to_gmfn is going to be a bit pricey on ARM, since we don't have an
> > m2p to refer to. I'm not sure what we would do instead; walking the p2m
> > looking for each mfn surely won't be a good idea!
> 
> Can we form a 'temporary m2p' table by walking p2m once? Our domain is
> dying and mappings don't change.

That could work.
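
Something like this, as a very rough sketch (GUEST_MAX_GFN and
lookup_p2m() are made-up names purely to illustrate the shape; only
xzalloc_array(), max_page and the mfn helpers are real interfaces):

/*
 * Sketch only: build a throwaway m2p with a single walk of the dying
 * domain's p2m, so later mfn -> gfn lookups become O(1).
 */
static unsigned long *build_temp_m2p(struct domain *d)
{
    unsigned long gfn;
    unsigned long *m2p = xzalloc_array(unsigned long, max_page);

    if ( !m2p )
        return NULL;

    /*
     * A real version would first fill the array with INVALID_GFN so
     * that unmapped mfns are distinguishable from a mapping at gfn 0.
     */
    for ( gfn = 0; gfn < GUEST_MAX_GFN; gfn++ )
    {
        mfn_t mfn = lookup_p2m(d, gfn);   /* hypothetical lookup */

        if ( !mfn_eq(mfn, INVALID_MFN) )
            m2p[mfn_x(mfn)] = gfn;
    }

    return m2p;
}

And as you say, the table only stays valid because the domain is dying
and the p2m can no longer change under our feet.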

> > An alternative approach to this might be to walk the guest p2m (with
> > appropriate continuations) and move each domheap page (this would also
> > help us preserve super page mappings). It would also have the advantage
> > of not needing additional stages in the destroy path and state in struct
> > domain etc, since all the action would be constrained to the one
> > hypercall.
> 
> Something like that (but not exactly) was in my RFC/WIPv2 series:
> http://lists.xen.org/archives/html/xen-devel/2014-09/msg03624.html
> 
> The drawback of such an approach is the need to copy every page that
> is mapped more than once (granted pages, qemu-mapped pages, ...), or
> at least to provide blank pages in their place.

Hrm, I'd have thought that the sequencing requirement, i.e. this call
having to happen between the domain killing itself and the final destroy,
would avoid that sort of thing; or else the problem is, as Jan says,
independent of the mechanism for doing the m to p translation.
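
For concreteness, the shape of loop I have in mind is roughly the below
(again only a sketch: lookup_p2m(), reassign_page() and GUEST_MAX_GFN
are invented for illustration; hypercall_preempt_check() and -ERESTART
are the usual continuation machinery):

/*
 * Sketch: walk the source domain's p2m, moving each domheap page to
 * the destination, preempting and restarting via a continuation
 * cursor kept by the caller.
 */
static int relocate_p2m(struct domain *src, struct domain *dst,
                        unsigned long *gfn /* continuation cursor */)
{
    for ( ; *gfn < GUEST_MAX_GFN; (*gfn)++ )
    {
        mfn_t mfn = lookup_p2m(src, *gfn);   /* hypothetical lookup */

        if ( mfn_eq(mfn, INVALID_MFN) )
            continue;

        /*
         * Pages with extra references (granted, qemu-mapped) would
         * need copying or a blank substitute, per your point above.
         */
        reassign_page(src, dst, *gfn, mfn);  /* hypothetical move */

        if ( hypercall_preempt_check() )
            return -ERESTART;                /* resume at *gfn later */
    }

    return 0;
}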

Ian.

