
Re: [Xen-devel] [PATCH v7 04/10] xen: Introduce XEN_DOMCTL_soft_reset



Tim Deegan <tim@xxxxxxx> writes:

> At 15:11 +0200 on 28 May (1432825919), Vitaly Kuznetsov wrote:
>> Tim Deegan <tim@xxxxxxx> writes:
>> > At 13:56 +0200 on 28 May (1432821360), Vitaly Kuznetsov wrote:
>> >> Tim Deegan <tim@xxxxxxx> writes:
>> >> >> +        while ( next_page && !is_xen_heap_page(next_page) &&
>> >> >> +                page_to_mfn(next_page) == mfn + count )
>> >> >
>> >> > What's the purpose of this second loop?  It doesn't seem to be doing
>> >> > anything that the outer loop couldn't.
>> >> 
>> True. This second loop searches for a contiguous region to preserve
>> the order of mappings (when possible).
>> >
>> > Ah; I think this, like the PoD case, should use the more detailed p2m
>> > lookup to get the actual order of the current mapping.  That should be
>> > shorter and more readable, and probably more correct.
>> 
>> If we bring the get_gfn_type_access() call to the top level it becomes
>> possible (and easy), but (if I'm not mistaken) we still need to walk
>> through all pages of the mapping, checking that each of them has the
>> required count_info (i.e. that it is not mapped by another domain or by
>> Xen itself). If we meet a 'bad' page we'll have to split the mapping
>> (and copy the page itself).
>
> Hmmm.  Yes, we can only reassign one page at a time.  I think that
> will look cleaner if you split out the reassign-or-copy into a
> separate function that takes a start + order and DTRT, and then have
> the loop in this function handle one p2m entry (of whatever order) per
> iteration.
>
> BTW having looked at how messy this is ending up, and how it's still
> incomplete, I'd agree with Jan that resetting the domain state might
> be a better approach.

Even with the 'reset-everything' approach the function from this patch
will still be required in some form, as we'll still have to walk the p2m
and check each individual page's count_info, replacing the page where
needed. At the same time we'll have a number of other hypervisor-side
implications (tearing everything down), so I seriously doubt it's going
to end up any less messy (though the toolstack-related changes might go
away).
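For clarity, a rough sketch of the shape being discussed (hypothetical
function and parameter names, not the code from this patch;
reassign_or_copy_page() would be the place for the count_info checks and
the copy fallback):

#include <xen/sched.h>
#include <xen/mm.h>
#include <asm/p2m.h>

/*
 * Hypothetical helper, not part of the patch as posted: if 'page' has the
 * expected count_info (only referenced by source_d, not by Xen or another
 * domain) it is reassigned to dest_d at the same gfn; otherwise a fresh
 * page is allocated for dest_d and the contents are copied.
 */
int reassign_or_copy_page(struct domain *source_d, struct domain *dest_d,
                          unsigned long gfn, struct page_info *page);

/*
 * Sketch of the walk Tim suggests: one p2m entry (of whatever order) per
 * iteration, using the order reported by the lookup instead of scanning
 * for contiguous MFNs by hand.  'max_gfn' is assumed to be supplied by
 * the caller.
 */
int transfer_memory_sketch(struct domain *source_d, struct domain *dest_d,
                           unsigned long max_gfn)
{
    unsigned long gfn = 0;
    int rc = 0;

    while ( gfn <= max_gfn && !rc )
    {
        p2m_type_t p2mt;
        p2m_access_t p2ma;
        unsigned int page_order;
        unsigned long i;
        mfn_t mfn;

        /* Locked lookup; tells us the order of the current mapping. */
        mfn = get_gfn_type_access(p2m_get_hostp2m(source_d), gfn, &p2mt,
                                  &p2ma, 0, &page_order);

        if ( p2m_is_ram(p2mt) )
        {
            /* Pages can only be reassigned (or copied) one at a time. */
            for ( i = 0; i < (1UL << page_order) && !rc; i++ )
                rc = reassign_or_copy_page(source_d, dest_d, gfn + i,
                                           mfn_to_page(mfn_x(mfn) + i));
        }

        put_gfn(source_d, gfn);
        gfn += 1UL << page_order;
    }

    return rc;
}

Whether we reassign the pages or rebuild the whole domain state, that
per-page check is the part that doesn't go away.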

-- 
  Vitaly

