
Re: [Xen-devel] [PATCH v2 11/16] libxc: get and set soft and hard affinity



Dario Faggioli writes ("[PATCH v2 11/16] libxc: get and set soft and hard 
affinity"):
> by using the new flag introduced in the parameters of
> DOMCTL_{get,set}_vcpuaffinity.
...
> -int xc_vcpu_setaffinity(xc_interface *xch,
> -                        uint32_t domid,
> -                        int vcpu,
> -                        xc_cpumap_t cpumap)
> +static int _vcpu_setaffinity(xc_interface *xch,
> +                                uint32_t domid,
> +                                int vcpu,
> +                                xc_cpumap_t cpumap,
> +                                uint32_t flags,
> +                                xc_cpumap_t ecpumap)

It would be clearer if the ecpumap parameter were explicitly marked as
an out parameter.

Even better if the in parameter were const-correct.  However, defining
xc_cpumap_t as a uint8_t* has made that difficult.  We don't want to
introduce a new xc_cpumap_const_t.  Perhaps we should have
typedef uint8_t xc_cpumap_element.

(NB that identifiers ending *_t are reserved for the C
implementation.  libxc gets this wrong all the time :-/.)
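To make the suggestion concrete, here is a minimal sketch of the
element-typedef approach (names and the memcpy stand-in body are
hypothetical, not the real libxc hypercall path):

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical sketch: an element typedef lets the in-map be
 * const-qualified without introducing a parallel xc_cpumap_const_t,
 * and the new name avoids the reserved *_t suffix.  (xc_cpumap_t
 * itself is kept only for compatibility.) */
typedef uint8_t xc_cpumap_element;
typedef xc_cpumap_element *xc_cpumap_t;

/* The in-map is read-only and the out parameter is named as such.
 * The body just copies, standing in for the real hypercall, which
 * would intersect the map with hard/soft affinity and the cpupool. */
static int demo_setaffinity(const xc_cpumap_element *cpumap,
                            xc_cpumap_element *ecpumap_out,
                            size_t cpusize)
{
    memcpy(ecpumap_out, cpumap, cpusize);
    return 0;
}
```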

> @@ -206,39 +209,119 @@ int xc_vcpu_setaffinity(xc_interface *xch,
>          goto out;
>      }
>  
> -    local = xc_hypercall_buffer_alloc(xch, local, cpusize);
> -    if ( local == NULL )
> +    cpumap_local = xc_hypercall_buffer_alloc(xch, cpumap_local, cpusize);

Is xc_hypercall_buffer_free idempotent, and is there a way to
initialise a hypercall buffer to an unallocated state?  If so, this
function could be a lot simpler, and in particular could more clearly
avoid leaking anything, by using the "goto out" cleanup style.
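For illustration, here is a generic sketch of that "goto out" style,
assuming the free operation tolerates never-allocated buffers (malloc
and free stand in for xc_hypercall_buffer_alloc/free; the names and
the fail_early switch are invented for the example):

```c
#include <stdlib.h>

/* Sketch of the "goto out" cleanup style: initialise every resource
 * to an unallocated state up front, then a single exit path frees
 * whatever was actually allocated.  free(NULL) is a no-op, so the
 * cleanup is idempotent and early errors cannot leak. */
static int setaffinity_sketch(size_t cpusize, int fail_early)
{
    int rc = -1;
    unsigned char *cpumap_local = NULL;   /* unallocated state */
    unsigned char *ecpumap_local = NULL;

    cpumap_local = malloc(cpusize);
    if ( cpumap_local == NULL )
        goto out;

    if ( fail_early )                     /* simulate a mid-function error */
        goto out;

    ecpumap_local = malloc(cpusize);
    if ( ecpumap_local == NULL )
        goto out;

    rc = 0;                               /* success */
 out:
    free(ecpumap_local);                  /* safe even if never allocated */
    free(cpumap_local);
    return rc;
}
```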

> +/**
> + * This function specifies the scheduling affinity for a vcpu. Soft
> + * affinity is the set of pcpus on which a vcpu prefers to run. Hard
> + * affinity is the set of pcpus on which a vcpu is allowed to run.
> + * When set independently (by the respective _soft and _hard calls)
> + * the effective affinity is also returned. The effective affinity is
> + * the intersection of the soft affinity, the hard affinity and the
> + * set of cpus of the cpupool the domain belongs to: it is basically
> + * what the Xen scheduler will actually use. Returning it allows the
> + * caller to check whether it matches, or is at least good enough
> + * for, their purposes.
> + *
> + * A xc_vcpu_setaffinity() call is provided, mainly for backward
> + * compatibility; it sets both hard and soft affinity for the vcpu.
> + *
> + * @param xch a handle to an open hypervisor interface.
> + * @param domid the id of the domain to which the vcpu belongs
> + * @param vcpu the vcpu id within the domain
> + * @param cpumap the (hard, soft, both) new affinity map one wants to set
> + * @param ecpumap the effective affinity for the vcpu

Either the doc comment, or the parameter name, should make it clear
that ecpumap is an out parameter.
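For instance, either renaming the parameter or annotating its
direction would do (a hypothetical fragment of the doc comment):

```c
/**
 * @param cpumap       the (hard, soft, both) new affinity map to set
 * @param[out] ecpumap filled with the resulting effective affinity
 */
```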

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

