
Re: [Xen-devel] [V7 PATCH 3/7] pvh dom0: implement XENMEM_add_to_physmap_range for x86



>>> On 17.12.13 at 16:11, "Jan Beulich" <JBeulich@xxxxxxxx> wrote:
>>>> On 17.12.13 at 15:40, Ian Campbell <Ian.Campbell@xxxxxxxxxx> wrote:
>> On Tue, 2013-12-17 at 14:36 +0000, Jan Beulich wrote:
>>> >>> On 17.12.13 at 14:59, Ian Campbell <Ian.Campbell@xxxxxxxxxx> wrote:
>>> > We could change the code but we could also tighten the interface
>>> > requirements, either by explicit specifying that the range is handled in
>>> > reverse order or by mandating that index/gpfn must not be repeated
>>> > (whether or not we actively try and detect such cases).
>>> 
>>> Specifying that this gets processed backwards would be, well,
>>> backwards. Requiring no duplicates (or else getting undefined
>>> behavior) would be possible. But processing the operation in the
>>> conventional order doesn't seem all that hard.
>> 
>> The reason I thought it would be tricky was finding somewhere to stash
>> the progress over the continuation. Do you have a cunning plan?
> 
> Just like we do in other cases - in the struct that was passed to
> us by the caller (incrementing the handles and decrementing the
> count as needed).

And I was wrong about this - there's no proper precedent for us
modifying hypercall interface structures except where certain
fields are specified to be outputs.

For XENMEM_add_to_physmap_range none of the fields is specified
as an output, so even copying back the size field (as currently
done on ARM, and as also done for XENMEM_add_to_physmap's
XENMAPSPACE_gmfn_range sub-case) isn't really correct.
Instead the general method for encoding the continuation
point in mem-ops is to put the resume index in the high bits of
the first hypercall argument.

Whether we want to change the specification here (clarifying
that all of the structure may be modified by the hypervisor in
the course of executing the hypercall) instead of fixing the
implementation is open for discussion.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

