
Re: [Xen-devel] [PATCH v6 4/5] xen: introduce XENMEM_exchange_and_pin and XENMEM_unpin



>>> On 27.09.13 at 18:15, Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx> wrote:
> Changes in v6:
> - do not change error paths;
> - crash the guest on pinning failure;

Isn't that a little harsh?

> @@ -615,6 +622,21 @@ static long memory_exchange(XEN_GUEST_HANDLE_PARAM(xen_memory_exchange_t) arg)
>              {
>                  for ( k = 0; k < (1UL << exch.out.extent_order); k++ )
>                      set_gpfn_from_mfn(mfn + k, gpfn + k);
> +            }
> +            if ( op == XENMEM_exchange_and_pin )
> +            {
> +                rc = guest_physmap_pin_range(d, gpfn, exch.out.extent_order);
> +                if ( rc )
> +                {
> +                    gdprintk(XENLOG_WARNING, "couldn't pin gpfn=%"PRIpaddr" order=%u\n",

gpfn isn't paddr_t, but xen_pfn_t, and hence PRIpaddr is wrong.

> +                            gpfn, exch.out.extent_order);
> +                    rcu_unlock_domain(d);
> +                    domain_crash(d);

Shouldn't these two be in inverse order?

> +static int unpin(XEN_GUEST_HANDLE_PARAM(xen_unpin_t) arg)
> +{
> +    int rc;
> +    unsigned long i = 0;
> +    struct xen_unpin unpin;
> +    xen_pfn_t gpfn;
> +    struct domain *d = NULL;
> +
> +    if ( copy_from_guest(&unpin, arg, 1) )
> +        return -EFAULT;
> +
> +    /* Various sanity checks. */
> +    if ( /* Extent orders are sensible? */
> +         (unpin.in.extent_order > MAX_ORDER) ||
> +         /* Sizes of input list do not overflow a long? */

Too little modification to the original comments: There's just one
order and one input list here.

> +         ((~0UL >> unpin.in.extent_order) < unpin.in.nr_extents) )
> +    {
> +        rc = -EFAULT;
> +        goto fail;
> +    }
> +
> +    if ( !guest_handle_okay(unpin.in.extent_start, unpin.in.nr_extents) )
> +    {
> +        rc = -EFAULT;
> +        goto fail;
> +    }
> +
> +    d = rcu_lock_domain_by_any_id(unpin.in.domid);
> +    if ( d == NULL )
> +    {
> +        rc = -ESRCH;
> +        goto fail;
> +    }

All error paths up to here need to avoid rcu_unlock_domain().

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

