
Re: [Xen-devel] [PATCH v3 10/14] libxc: get and set soft and hard affinity



On Mon, 2013-11-18 at 19:18 +0100, Dario Faggioli wrote:

> Signed-off-by: Dario Faggioli <dario.faggioli@xxxxxxxxxx>

Acked-by: Ian Campbell <ian.campbell@xxxxxxxxxx>

There are a few preexisting issues with the setaffinity function, but
this just duplicates them into the new cpumap, so I don't see any point
in holding up the series for them. Perhaps you could put them on your
todo list?

> diff --git a/tools/libxc/xc_domain.c b/tools/libxc/xc_domain.c
> index f9ae4bf..bddf4e0 100644
> --- a/tools/libxc/xc_domain.c
> +++ b/tools/libxc/xc_domain.c
> @@ -192,44 +192,52 @@ int xc_domain_node_getaffinity(xc_interface *xch,
>  int xc_vcpu_setaffinity(xc_interface *xch,
>                          uint32_t domid,
>                          int vcpu,
> -                        xc_cpumap_t cpumap)
> +                        xc_cpumap_t cpumap,
> +                        uint32_t flags,
> +                        xc_cpumap_t ecpumap_out)
>  {
>      DECLARE_DOMCTL;
> -    DECLARE_HYPERCALL_BUFFER(uint8_t, local);
> +    DECLARE_HYPERCALL_BUFFER(uint8_t, cpumap_local);
> +    DECLARE_HYPERCALL_BUFFER(uint8_t, ecpumap_local);
>      int ret = -1;
>      int cpusize;
>  
>      cpusize = xc_get_cpumap_size(xch);
> -    if (!cpusize)
> +    if ( !cpusize )
>      {
>          PERROR("Could not get number of cpus");
> -        goto out;
> +        return -1;;

Double ";;"?

>      }
>  
> -    local = xc_hypercall_buffer_alloc(xch, local, cpusize);
> -    if ( local == NULL )
> +    cpumap_local = xc_hypercall_buffer_alloc(xch, cpumap_local, cpusize);
> +    ecpumap_local = xc_hypercall_buffer_alloc(xch, ecpumap_local, cpusize);
> +    if ( cpumap_local == NULL || cpumap_local == NULL)
>      {
> -        PERROR("Could not allocate memory for setvcpuaffinity domctl 
> hypercall");
> +        PERROR("Could not allocate hcall buffers for 
> DOMCTL_setvcpuaffinity");
>          goto out;
>      }
>  
>      domctl.cmd = XEN_DOMCTL_setvcpuaffinity;
>      domctl.domain = (domid_t)domid;
>      domctl.u.vcpuaffinity.vcpu = vcpu;
> -    /* Soft affinity is there, but not used anywhere for now, so... */
> -    domctl.u.vcpuaffinity.flags = XEN_VCPUAFFINITY_HARD;
> -
> -    memcpy(local, cpumap, cpusize);
> -
> -    set_xen_guest_handle(domctl.u.vcpuaffinity.cpumap.bitmap, local);
> +    domctl.u.vcpuaffinity.flags = flags;
>  
> +    memcpy(cpumap_local, cpumap, cpusize);

This risks running off the end of the supplied cpumap, if it is smaller
than cpusize.

But more importantly, why is this not using the hypercall buffer bounce
mechanism?
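
For reference, something along the lines of the below is what I had in
mind (just a sketch, not even compile tested, but IIRC
DECLARE_HYPERCALL_BOUNCE, HYPERCALL_BOUNCE_SET_SIZE and
xc_hypercall_bounce_pre/post are all already there in xc_private.h):

int xc_vcpu_setaffinity(xc_interface *xch,
                        uint32_t domid,
                        int vcpu,
                        xc_cpumap_t cpumap,
                        uint32_t flags,
                        xc_cpumap_t ecpumap_out)
{
    DECLARE_DOMCTL;
    /* Size only known at runtime, so declare with 0 and set it below. */
    DECLARE_HYPERCALL_BOUNCE(cpumap, 0, XC_HYPERCALL_BUFFER_BOUNCE_IN);
    int ret = -1;
    int cpusize;

    cpusize = xc_get_cpumap_size(xch);
    if ( !cpusize )
    {
        PERROR("Could not get number of cpus");
        return -1;
    }

    HYPERCALL_BOUNCE_SET_SIZE(cpumap, cpusize);
    if ( xc_hypercall_bounce_pre(xch, cpumap) )
    {
        PERROR("Could not bounce cpumap for DOMCTL_setvcpuaffinity");
        goto out;
    }

    domctl.cmd = XEN_DOMCTL_setvcpuaffinity;
    domctl.domain = (domid_t)domid;
    domctl.u.vcpuaffinity.vcpu = vcpu;
    domctl.u.vcpuaffinity.flags = flags;

    set_xen_guest_handle(domctl.u.vcpuaffinity.cpumap.bitmap, cpumap);
    domctl.u.vcpuaffinity.cpumap.nr_bits = cpusize * 8;

    /* eff_cpumap / ecpumap_out handling left out of this sketch. */

    ret = do_domctl(xch, &domctl);

 out:
    xc_hypercall_bounce_post(xch, cpumap);
    return ret;
}

That way the bounce machinery does the allocation, copy and free for
you. Note it still reads cpusize bytes from the caller's map, so the
size question above applies either way.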

> +    set_xen_guest_handle(domctl.u.vcpuaffinity.cpumap.bitmap, cpumap_local);
>      domctl.u.vcpuaffinity.cpumap.nr_bits = cpusize * 8;
>  
> +    set_xen_guest_handle(domctl.u.vcpuaffinity.eff_cpumap.bitmap,
> +                         ecpumap_local);
> +    domctl.u.vcpuaffinity.eff_cpumap.nr_bits = cpusize * 8;
> +
>      ret = do_domctl(xch, &domctl);
>  
> -    xc_hypercall_buffer_free(xch, local);
> +    if ( ecpumap_out != NULL )
> +        memcpy(ecpumap_out, ecpumap_local, cpusize);

Likewise, this risks overrunning ecpumap_out, doesn't it?
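
Short of documenting that callers must always pass a map of
xc_get_cpumap_size() bytes (i.e. one from xc_cpumap_alloc()), I think
the copy would need to be bounded by a size the caller supplies.
Purely to illustrate, with a hypothetical extra ecpumap_size parameter:

    if ( ecpumap_out != NULL )
        memcpy(ecpumap_out, ecpumap_local,
               ecpumap_size < cpusize ? ecpumap_size : cpusize);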

>   out:
> +    xc_hypercall_buffer_free(xch, cpumap_local);
> +    xc_hypercall_buffer_free(xch, ecpumap_local);
>      return ret;
>  }
>  



