
Re: [Xen-devel] [PATCH v6 01/10] xen: vnuma topology and subop hypercalls



On Fri, Jul 18, 2014 at 01:50:00AM -0400, Elena Ufimtseva wrote:
[...]
> +/*
> + * Allocate memory and construct one vNUMA node,
> + * set default parameters, assign all memory and
> + * vcpus to this node, set distance to 10.
> + */
> +static long vnuma_fallback(const struct domain *d,
> +                          struct vnuma_info **vnuma)
> +{
> +    struct vnuma_info *v;
> +    long ret;
> +
> +    /* Will not destroy vNUMA here; the caller must destroy it first. */
> +    if ( !vnuma || *vnuma )
> +        return -EINVAL;
> +
> +    v = *vnuma;
> +    ret = vnuma_alloc(&v, 1, d->max_vcpus, 1);
> +    if ( ret )
> +        return ret;
> +
> +    v->vmemrange[0].start = 0;
> +    v->vmemrange[0].end = d->max_pages << PAGE_SHIFT;
> +    v->vdistance[0] = 10;
> +    v->vnode_to_pnode[0] = NUMA_NO_NODE;
> +    memset(v->vcpu_to_vnode, 0, d->max_vcpus * sizeof(*v->vcpu_to_vnode));
> +    v->nr_vnodes = 1;
> +
> +    *vnuma = v;
> +
> +    return 0;
> +}
> +

I have a question about this strategy. Is there any reason to fall
back to this single node? In that case the toolstack will have a
different view of the guest than the hypervisor: the toolstack still
thinks the guest has several nodes while the guest actually has only
one. This can cause problems when migrating the guest. Consider this:
the toolstack on the remote end still builds two nodes, because that's
all it knows, and then the guest, which originally ended up with one
node, notices the change in the underlying memory topology and crashes.

IMHO we should just fail in this case. It's not that common for a
small array allocation to fail anyway. That approach would also save
you from writing this function. :-)
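
Something along these lines, just a sketch reusing the names from your
patch (compile-untested; -EFAULT is my pick, use whatever error code
fits):

    /* On any copy failure, tear everything down and fail the call. */
    if ( copy_from_guest(v->vdistance, u_vnuma->vdistance, dist_size) )
        goto fail;
    /* ... likewise for the other copy_from_guest() calls ... */

    v->nr_vnodes = nr_vnodes;
    *dst = v;

    return 0;

 fail:
    vnuma_destroy(v);
    return -EFAULT;

That way the hypervisor and the toolstack can never disagree about the
guest's topology.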

> +/*
> + * Construct vNUMA topology from the u_vnuma struct and return
> + * it in dst.
> + */
[...]
> +
> +    /* On any copy failure, fall back to one vNUMA node and report success. */
> +    ret = 0;
> +
> +    if ( copy_from_guest(v->vdistance, u_vnuma->vdistance, dist_size) )
> +        goto vnuma_onenode;
> +    if ( copy_from_guest(v->vmemrange, u_vnuma->vmemrange, nr_vnodes) )
> +        goto vnuma_onenode;
> +    if ( copy_from_guest(v->vcpu_to_vnode, u_vnuma->vcpu_to_vnode,
> +        d->max_vcpus) )
> +        goto vnuma_onenode;
> +    if ( copy_from_guest(v->vnode_to_pnode, u_vnuma->vnode_to_pnode,
> +        nr_vnodes) )
> +        goto vnuma_onenode;
> +
> +    v->nr_vnodes = nr_vnodes;
> +    *dst = v;
> +
> +    return ret;
> +
> +vnuma_onenode:
> +    vnuma_destroy(v);
> +    return vnuma_fallback(d, dst);
> +}
> +
>  long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
>  {
>      long ret = 0;
> @@ -967,6 +1105,35 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
>      }
>      break;
>  
[...]
> +/*
> + * vNUMA topology specifies the vNUMA node count, the distance table,
> + * the memory ranges and the vcpu-to-vnode mapping provided for guests.
> + * The XENMEM_get_vnumainfo hypercall expects the guest to set nr_vnodes
> + * and nr_vcpus to indicate the sizes of the buffers it provides. After
> + * the guest structures have been filled in, nr_vnodes and nr_vcpus are
> + * copied back to the guest.
> + */
> +struct vnuma_topology_info {
> +    /* IN */
> +    domid_t domid;
> +    /* IN/OUT */
> +    unsigned int nr_vnodes;
> +    unsigned int nr_vcpus;
> +    /* OUT */
> +    union {
> +        XEN_GUEST_HANDLE(uint) h;
> +        uint64_t pad;
> +    } vdistance;
> +    union {
> +        XEN_GUEST_HANDLE(uint) h;
> +        uint64_t pad;
> +    } vcpu_to_vnode;
> +    union {
> +        XEN_GUEST_HANDLE(vmemrange_t) h;
> +        uint64_t pad;
> +    } vmemrange;
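
For what it's worth, given the IN/OUT protocol described in the comment
above, I read the intended guest-side usage as something like this
(just a sketch, untested, buffer names made up, error handling
omitted):

    struct vnuma_topology_info info = { .domid = DOMID_SELF };
    int rc;

    /* Tell Xen how big the buffers we allocated are. */
    info.nr_vnodes = nr_vnodes_buf;
    info.nr_vcpus  = nr_vcpus_buf;
    set_xen_guest_handle(info.vdistance.h, dist_buf);
    set_xen_guest_handle(info.vcpu_to_vnode.h, vcpu_buf);
    set_xen_guest_handle(info.vmemrange.h, memrange_buf);

    rc = HYPERVISOR_memory_op(XENMEM_get_vnumainfo, &info);
    /* On success, nr_vnodes and nr_vcpus hold the actual counts. */

If that's right, it might be worth spelling out in the comment that the
guest must size the buffers from nr_vnodes and nr_vcpus.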

Why do you need to use a union in struct vnuma_topology_info? The
other interface you introduce in this patch doesn't use one.
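
If it is the usual trick in the public headers to keep each handle
field 64 bits wide for both 32-bit and 64-bit guests, so that the
structure layout is identical everywhere and no compat translation is
needed, i.e.:

    union {
        XEN_GUEST_HANDLE(uint) h; /* 4 or 8 bytes depending on guest */
        uint64_t pad;             /* keeps the field at 8 bytes */
    } vdistance;

then please spell that out in a comment next to the structure.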

Wei.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
