Re: [Xen-devel] [PATCH v2 3/3] libxc: do some retries in xc_cpupool_removecpu() for EBUSY case



On Tue, Mar 01, 2016 at 10:02:13AM +0100, Juergen Gross wrote:
> The hypervisor might return EBUSY when trying to remove a cpu from a
> cpupool while a domain running in this cpupool has temporarily pinned
> a vcpu. Retry a few times in this case, as the situation may resolve
> itself.
> 
> Signed-off-by: Juergen Gross <jgross@xxxxxxxx>
> ---
>  tools/libxc/xc_cpupool.c | 13 ++++++++++++-
>  1 file changed, 12 insertions(+), 1 deletion(-)
> 
> diff --git a/tools/libxc/xc_cpupool.c b/tools/libxc/xc_cpupool.c
> index c42273e..9f2f95c 100644
> --- a/tools/libxc/xc_cpupool.c
> +++ b/tools/libxc/xc_cpupool.c
> @@ -20,8 +20,11 @@
>   */
>  
>  #include <stdarg.h>
> +#include <unistd.h>
>  #include "xc_private.h"
>  
> +#define LIBXC_BUSY_RETRIES 5
> +
>  static int do_sysctl_save(xc_interface *xch, struct xen_sysctl *sysctl)
>  {
>      int ret;
> @@ -141,13 +144,21 @@ int xc_cpupool_removecpu(xc_interface *xch,
>                           uint32_t poolid,
>                           int cpu)
>  {
> +    unsigned retries;
> +    int err;
>      DECLARE_SYSCTL;
>  
>      sysctl.cmd = XEN_SYSCTL_cpupool_op;
>      sysctl.u.cpupool_op.op = XEN_SYSCTL_CPUPOOL_OP_RMCPU;
>      sysctl.u.cpupool_op.cpupool_id = poolid;
>      sysctl.u.cpupool_op.cpu = (cpu < 0) ? XEN_SYSCTL_CPUPOOL_PAR_ANY : cpu;
> -    return do_sysctl_save(xch, &sysctl);
> +    for (retries = 0; retries < LIBXC_BUSY_RETRIES; retries++) {

Libxc coding style requires spaces inside the parentheses of control
statements.
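
I.e. something like:

    for ( retries = 0; retries < LIBXC_BUSY_RETRIES; retries++ )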

> +        err = do_sysctl_save(xch, &sysctl);
> +        if (err >= 0 || errno != EBUSY)

Ditto.
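
I.e.:

    if ( err >= 0 || errno != EBUSY )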

> +            break;
> +        sleep(1);
> +    }
> +    return err;
>  }
>  
>  int xc_cpupool_movedomain(xc_interface *xch,
> -- 
> 2.6.2
> 
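
For completeness, with both style fixes applied the retry loop would
read something like this (untested sketch, same logic as the hunk
above):

    for ( retries = 0; retries < LIBXC_BUSY_RETRIES; retries++ )
    {
        /* Retry while the hypervisor reports the cpu as busy. */
        err = do_sysctl_save(xch, &sysctl);
        if ( err >= 0 || errno != EBUSY )
            break;
        sleep(1);
    }
    return err;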

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

 

