
Re: [Xen-devel] [PATCH 1/4] move XENMEM_add_to_physmap handling framework to common code



On Fri, 2013-12-20 at 07:48 +0000, Jan Beulich wrote:
> >>> On 18.12.13 at 18:52, Ian Campbell <Ian.Campbell@xxxxxxxxxx> wrote:
> > On Wed, 2013-12-18 at 14:34 +0000, Jan Beulich wrote:
> >> There's really nothing really architecture specific here; the
> >> architecture specific handling is limited to
> >> xenmem_add_to_physmap_one().
> >> 
> >> Note that ARM's restriction of XENMAPSPACE_gmfn_foreign only being
> >> supported through XENMEM_add_to_physmap_range is being dropped as being
> >> pointless.
> > 
> > It's not pointless, dropping this restriction turns a comprehensible
> > EINVAL result into an incorrect ESRCH result. Which is incorrect because
> > no domain was specified so how can it not exist.
> 
> How can there have been no domain specified for a
> XENMAPSPACE_gmfn_foreign?

The arguments to XENMEM_add_to_physmap have no field specifying a
foreign domain. (This is one of the reasons we added the _range version,
with that field, in the first place.)

The domid parameter to xenmem_add_to_physmap_one is therefore hardcoded
to DOMID_INVALID in all the add_to_physmap (!range) cases.

>  If the field was left uninitialized,
> -ESRCH would be as valid as -EINVAL, and I don't think we
> specify for any hypercall that among multiple applicable
> error codes a specific one would be preferred over others.

I think, given the above, -ESRCH is invalid, since there is no field
that could have been left uninitialised.

> > I'd also rather avoid introducing unnecessary/legacy hypercalls to ARM
> > if possible.
> 
> I can understand that, but please also take the perspective that
> moving the architecture independent bits out should have been
> done much earlier, and (see below) adding a conditional here
> doesn't seem too attractive to me.

I don't think this should be a per-arch conditional, if that's what you
mean, just a blanket if ( ) return -ENOSYS.

> >> +static int xenmem_add_to_physmap(struct domain *d,
> >> +                                 struct xen_add_to_physmap *xatp)
> >> +{
> >> +    struct xen_add_to_physmap start_xatp;
> >> +    int rc = 0;
> >> +
> >> +    if ( xatp->space != XENMAPSPACE_gmfn_range )
> >> +        return xenmem_add_to_physmap_one(d, xatp->space, DOMID_INVALID,
> >> +                                         xatp->idx, xatp->gpfn);
> >> +
> >> +#ifdef HAS_PASSTHROUGH
> >> +    if ( need_iommu(d) )
> >> +        this_cpu(iommu_dont_flush_iotlb) = 1;
> >> +#endif
> >> +
> >> +    start_xatp = *xatp;
> >> +    while ( xatp->size > 0 )
> >> +    {
> >> +        rc = xenmem_add_to_physmap_one(d, xatp->space, DOMID_INVALID,
> >> +                                       xatp->idx, xatp->gpfn);
> > 
> > ARM's xenmem_add_to_physmap_one does not support XENMAPSPACE_gmfn_range,
> > so it will return ENOSYS and fail out. I suppose that is ok. But it
> > makes the bulk of this function pointless. 
> > 
> > How about pulling the != XENMAPSPACE_gmfn_range check into do_memory_op
> > either calling xenmem_add_to_phymap_one or a per arch gmfn_range
> > function?
> 
> No, I think having just one arch specific function dealing with
> everything, and having the common code really take care of
> as much as is common is the right thing (and considering the
> review by Tim and the ack from Keir, I think others agree).
> 
> Which would leave the option of adding a #ifdef CONFIG_ARM
> (or #ifndef CONFIG_X86) in the common code - not really nice.

(Note that this is a different case from the above: gmfn_range here vs.
gmfn_foreign there.)

The #ifndef would be the better of those two, since I'd expect no other
port to be added with this legacy interface. Another option would be
HAVE_FOO.

But in any case leaving this as-is is not too harmful, since the end
result is the correct -ENOSYS anyway.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
