Re: [Xen-devel] [PATCH v2 1/7] xen: vNUMA support for guests.



On 11/14/2013 10:51 PM, Elena Ufimtseva wrote:
On Thu, Nov 14, 2013 at 4:59 PM, George Dunlap
<george.dunlap@xxxxxxxxxxxxx> wrote:
On 11/14/2013 03:26 AM, Elena Ufimtseva wrote:

Defines the interface, structures and hypercalls for guests that wish
to retrieve vNUMA topology from Xen.
Two subop hypercalls are introduced by this patch:
XEN_DOMCTL_setvnumainfo to define the vNUMA topology per domain,
and XENMEM_get_vnuma_info for the guest to retrieve that topology.

Signed-off-by: Elena Ufimtseva <ufimtseva@xxxxxxxxx>
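
(For orientation, the toolstack side of XEN_DOMCTL_setvnumainfo would be driven roughly as in the sketch below. This is illustrative only: the helper name is invented, hypercall buffer bouncing/locking is elided, and do_domctl() stands in for the real libxc hypercall wrapper.)

/* Hypothetical helper: hand a domain's vNUMA topology to Xen. */
static int set_vnuma_topology(domid_t domid, uint32_t nr_vnodes,
                              unsigned int *vdistance,     /* nr_vnodes * nr_vnodes */
                              unsigned int *vcpu_to_vnode, /* one entry per vcpu */
                              unsigned int *vnode_numamap, /* vnode -> pnode */
                              vmemrange_t *vmemrange)      /* one range per vnode */
{
    struct xen_domctl domctl;

    memset(&domctl, 0, sizeof(domctl));
    domctl.cmd = XEN_DOMCTL_setvnumainfo;
    domctl.interface_version = XEN_DOMCTL_INTERFACE_VERSION;
    domctl.domain = domid;
    domctl.u.vnuma.nr_vnodes = nr_vnodes;
    set_xen_guest_handle(domctl.u.vnuma.vdistance, vdistance);
    set_xen_guest_handle(domctl.u.vnuma.vcpu_to_vnode, vcpu_to_vnode);
    set_xen_guest_handle(domctl.u.vnuma.vnode_numamap, vnode_numamap);
    set_xen_guest_handle(domctl.u.vnuma.vmemrange, vmemrange);

    return do_domctl(&domctl); /* stand-in; the real wrapper takes an xc_interface */
}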


Thanks, Elena -- this looks like it's much closer to being ready to be
checked in.  A few comments:

Hi George, good to hear )

@@ -871,6 +872,87 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
       }
       break;

+    case XEN_DOMCTL_setvnumainfo:
+    {
+        unsigned int i = 0, dist_size;
+        uint nr_vnodes;
+        ret = -EFAULT;
+
+        /* Already set? */
+        if ( d->vnuma.nr_vnodes > 0 )
+            return 0;
+
+        nr_vnodes = op->u.vnuma.nr_vnodes;
+
+        if ( nr_vnodes == 0 )
+            return ret;
+        if ( nr_vnodes * nr_vnodes > UINT_MAX )
+            return ret;
+
+        /*
+         * vnode_numamap may be NULL: in that case a default is used
+         * and memory is not allocated per physical node, e.g. when
+         * the host has no physical NUMA. The handles checked below
+         * are mandatory.
+         */
+        if ( guest_handle_is_null(op->u.vnuma.vdistance) ||
+             guest_handle_is_null(op->u.vnuma.vmemrange) ||
+             guest_handle_is_null(op->u.vnuma.vcpu_to_vnode) )
+            goto err_dom;
+
+        dist_size = nr_vnodes * nr_vnodes;
+
+        d->vnuma.vdistance = xmalloc_array(unsigned int, dist_size);
+        d->vnuma.vmemrange = xmalloc_array(vmemrange_t, nr_vnodes);
+        d->vnuma.vcpu_to_vnode = xmalloc_array(unsigned int, d->max_vcpus);
+        d->vnuma.vnode_numamap = xmalloc_array(unsigned int, nr_vnodes);


Is there a risk here that you'll leak memory if a buggy toolstack calls
setvnuma_info multiple times?

In order to run through xmalloc again, d->vnuma.nr_vnodes would have
to be zero. The only ways it can become zero are to exit from this
call with an error (which resets it to zero and releases the memory)
or to call vnuma_destroy for the domain. Zero is not accepted as a new
value for nr_vnodes, so it can't be set by issuing do_domctl.
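
For reference, the teardown path is essentially the following (a rough
sketch of what the series does; the exact shape in the patch may differ
slightly):

/* Free the per-domain vNUMA data and mark the topology as unset. */
void vnuma_destroy(struct vnuma_info *vnuma)
{
    vnuma->nr_vnodes = 0;
    xfree(vnuma->vmemrange);
    xfree(vnuma->vcpu_to_vnode);
    xfree(vnuma->vdistance);
    xfree(vnuma->vnode_numamap);
    vnuma->vmemrange = NULL;
    vnuma->vcpu_to_vnode = NULL;
    vnuma->vdistance = NULL;
    vnuma->vnode_numamap = NULL;
}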

Oh, right -- sorry, I missed that. In that case, can I suggest changing that comment to, "Don't initialize again if we've already been initialized once" or something like that. You should probably add "ASSERT(d->vnuma.$FOO==NULL);" as well, as a precaution (and as a hint to people who are reading the code too quickly).

Also, that should return an error (-EBUSY? -EINVAL?), rather than returning success.
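
Something along these lines is what I have in mind (just a sketch;
whether -EBUSY or -EINVAL is the better errno is your call):

/* Don't initialize again if we've already been initialized once. */
if ( d->vnuma.nr_vnodes > 0 )
    return -EBUSY;

ASSERT(d->vnuma.vdistance == NULL);
ASSERT(d->vnuma.vmemrange == NULL);
ASSERT(d->vnuma.vcpu_to_vnode == NULL);
ASSERT(d->vnuma.vnode_numamap == NULL);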




diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index d4e479f..da458d3 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -35,6 +35,7 @@
   #include "xen.h"
   #include "grant_table.h"
   #include "hvm/save.h"
+#include "vnuma.h"

   #define XEN_DOMCTL_INTERFACE_VERSION 0x00000009

@@ -863,6 +864,27 @@ struct xen_domctl_set_max_evtchn {
   typedef struct xen_domctl_set_max_evtchn xen_domctl_set_max_evtchn_t;
   DEFINE_XEN_GUEST_HANDLE(xen_domctl_set_max_evtchn_t);

+/*
+ * XEN_DOMCTL_setvnumainfo: sets the vNUMA topology
+ * parameters a guest may request.
+ */
+struct xen_domctl_vnuma {
+    uint32_t nr_vnodes;
+    uint32_t __pad;
+    XEN_GUEST_HANDLE_64(uint) vdistance;
+    XEN_GUEST_HANDLE_64(uint) vcpu_to_vnode;
+    /* mapping of domain vnodes to physical NUMA nodes */
+    XEN_GUEST_HANDLE_64(uint) vnode_numamap;


What is this for?  We don't pass this to the guest or seem to use it in any
way.

This is used to set the domain's vnode-to-pnode mapping, to do
allocations accordingly later, and to keep the mapping on a per-domain
basis for later use. For example, vNUMA-aware ballooning will request
this information.

Oh, right -- we wanted the HV to be able to translate the vnuma ballooning requests from vnode to pnode without the guest knowing about it. Sorry, forgot about that.

It's probably worth noting that, at least in the changelog, and probably in a comment somewhere.
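
For illustration, the sort of thing I'd expect the ballooning path to
do eventually (hypothetical helper; nothing in this series adds it yet):

/* Hypothetical: find the physical node backing one of d's vnodes. */
static unsigned int vnode_to_pnode(const struct domain *d,
                                   unsigned int vnode)
{
    ASSERT(vnode < d->vnuma.nr_vnodes);
    return d->vnuma.vnode_numamap[vnode];
}

i.e. the hypervisor translates the vnode in a balloon request to a
pnode (and can then pass MEMF_node(pnode) to the page allocator)
without the guest ever seeing the physical layout.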

 -George
