
Re: [Xen-devel] [PATCH v5 4/9] xen: introduce XEN_DOMCTL_devour



>>> On 13.01.15 at 17:45, <vkuznets@xxxxxxxxxx> wrote:
> "Jan Beulich" <JBeulich@xxxxxxxx> writes:
> 
>>>>> On 13.01.15 at 17:17, <vkuznets@xxxxxxxxxx> wrote:
>>> Ian Campbell <Ian.Campbell@xxxxxxxxxx> writes:
>>>> An alternative approach to this might be to walk the guest p2m (with
>>>> appropriate continuations) and move each domheap page (this would also
>>>> help us preserve super page mappings). It would also have the advantage
>>>> of not needing additional stages in the destroy path and state in struct
>>>> domain etc, since all the action would be constrained to the one
>>>> hypercall.
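>>>>
>>>> [For concreteness, a minimal sketch of what such a p2m walk with
>>>> continuations might look like; max_gfn(), lookup_page() and
>>>> reassign_domheap_page() are placeholder names rather than the
>>>> series' actual helpers, and real code would also have to take the
>>>> p2m lock and handle superpage orders:
>>>>
>>>>   /* Sketch only: walk the dying domain's p2m in chunks, handing each
>>>>    * backing domheap page to the recipient.  The helpers named here
>>>>    * are placeholders for whatever a real implementation would use. */
>>>>   static int devour_walk_p2m(struct domain *d, struct domain *recipient,
>>>>                              unsigned long *gfn /* continuation cursor */)
>>>>   {
>>>>       int rc = 0;
>>>>
>>>>       for ( ; *gfn <= max_gfn(d); (*gfn)++ )
>>>>       {
>>>>           struct page_info *pg = lookup_page(d, *gfn); /* placeholder */
>>>>
>>>>           if ( pg && (rc = reassign_domheap_page(pg, d, recipient)) )
>>>>               break;
>>>>
>>>>           /* Yield regularly so a large p2m does not hog the CPU; the
>>>>            * caller turns -ERESTART into a hypercall continuation. */
>>>>           if ( !(*gfn & 0xfffUL) && hypercall_preempt_check() )
>>>>           {
>>>>               rc = -ERESTART;
>>>>               break;
>>>>           }
>>>>       }
>>>>
>>>>       return rc;
>>>>   }
>>>> ]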
>>> 
>>> Something like that (but not exactly) was in my RFC/WIPv2 series:
>>> http://lists.xen.org/archives/html/xen-devel/2014-09/msg03624.html 
>>> 
>>> The drawback of such an approach is that every page mapped more than
>>> once (granted pages, qemu-mapped pages, ...) has to be copied, or at
>>> least replaced with a blank page.
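>>>
>>> [The awkward case can be spotted from the page's reference count; a
>>> hedged illustration, assuming Xen's usual struct page_info accounting
>>> and a made-up helper name:
>>>
>>>   /* Sketch only: a page holding more than the single "allocated to
>>>    * its owner" reference is still mapped by someone else (grant
>>>    * mapping, qemu foreign mapping, ...), so it cannot simply be
>>>    * reassigned; it has to be copied, or the recipient given a blank
>>>    * page instead. */
>>>   static bool page_safe_to_reassign(const struct page_info *pg)
>>>   {
>>>       return (pg->count_info & PGC_count_mask) <= 1;
>>>   }
>>> ]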
>>
>> Why would that be necessary only in that alternative model? What
>> gets done with pages used by other than _just_ the dying domain
>> shouldn't depend on how the MFN/GFN relationship gets determined.
> 
> The difference comes from the fact that in the current model (with a
> hook in free_domheap_pages()) we don't care who frees a particular
> page first - our dying domain or e.g. a backend in dom0 - since
> eventually every page gets freed through that hook. In the alternative
> model (one hypercall reassigning all pages) we would need to go
> through all domheap pages before domain cleanup starts (unless we
> modify the domain cleanup path), or some or all pages may get lost.
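>
> [To make the contrast concrete: in the current model the hook sits on
> the normal free path, so whichever party drops the last reference
> delivers the page. A hedged sketch, where d->recipient and
> devour_reassign_page() are stand-ins for the series' actual names:
>
>   void free_domheap_pages(struct page_info *pg, unsigned int order)
>   {
>       struct domain *d = page_get_owner(pg);
>
>       /* Sketch only: if the owner is being devoured, hand the pages
>        * to the recipient instead of returning them to the heap.  It
>        * does not matter whether the dying domain or a dom0 backend
>        * freed them last; every page eventually passes through here. */
>       if ( d && d->recipient )
>       {
>           devour_reassign_page(pg, order, d, d->recipient);
>           return;
>       }
>
>       /* ... normal freeing path ... */
>   }
> ]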

Which then looks like an argument against that alternative model.

Jan




 

