
Re: [Xen-devel] [v8][PATCH 02/17] introduce XEN_DOMCTL_set_rdm



On 2014/12/2 16:33, Tian, Kevin wrote:
From: Chen, Tiejun
Sent: Monday, December 01, 2014 5:24 PM

This should be controlled by a new global parameter, 'pci_rdmforce'.

pci_rdmforce = 1 => Of course this should be 0 by default.

'1' means we force a check to reserve all ranges. If that fails, the
VM won't be created successfully. This also gives the user a chance
to work well with later hotplug, even if no device is assigned
while creating the VM.

But we can override that by one specific pci device:

pci = ['AA:BB.CC,rdmforce=0/1']

But this 'rdmforce' should be 1 by default since obviously any
passthrough device always needs to do this. Actually no one really
wants to set it to '0', so it may be unnecessary, but I'd like to
leave this as a potential approach.
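As a sketch of how the two knobs above might look together in a guest
config file (the exact syntax here is illustrative of the proposal, not
a final interface):

```
# Global default: force the RMRR reservation check for the whole domain.
pci_rdmforce = 1

# Per-device override: opt this one device out of the forced check.
pci = [ '00:02.0', '00:1f.2,rdmforce=0' ]
```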

since no one requires it, why bother adding it? better to just
keep global option.

This originates from my preliminary thinking. Here I hoped we could extend this approach to enable the feature for a specific device in the hotplug case.

But yes, this definitely isn't mandatory now since we can take it up in a next step, so I can remove it right now; of course I assume there's no objection from the other guys.



So this domctl provides an approach for tools to control how
reserved device memory is populated.

Note we always post a message to the user about this once we own
an RMRR.

Signed-off-by: Tiejun Chen <tiejun.chen@xxxxxxxxx>
---

[snip]

--- a/tools/libxl/libxl_dm.c
+++ b/tools/libxl/libxl_dm.c
@@ -90,6 +90,53 @@ const char *libxl__domain_device_model(libxl__gc *gc,
      return dm;
  }

+int libxl__domain_device_setrdm(libxl__gc *gc,
+                                libxl_domain_config *d_config,
+                                uint32_t dm_domid)
+{
+    int i, ret;
+    libxl_ctx *ctx = libxl__gc_owner(gc);
+    struct xen_guest_pcidev_info *pcidevs = NULL;
+    uint32_t rdmforce = 0;
+
+    if ( d_config->num_pcidevs )
+    {
+        pcidevs = malloc(d_config->num_pcidevs*sizeof(xen_guest_pcidev_info_t));
+        if ( pcidevs )
+        {
+            for (i = 0; i < d_config->num_pcidevs; i++)
+            {
+                pcidevs[i].devfn = PCI_DEVFN(d_config->pcidevs[i].dev,
+                                             d_config->pcidevs[i].func);
+                pcidevs[i].bus = d_config->pcidevs[i].bus;
+                pcidevs[i].seg = d_config->pcidevs[i].domain;
+                pcidevs[i].flags = d_config->pcidevs[i].rdmforce &
+                                   PCI_DEV_RDM_CHECK;
+            }
+        }
+        else
+        {
+            LIBXL__LOG(CTX, LIBXL__LOG_ERROR,
+                               "Can't allocate for pcidevs.");
+            return -1;
+        }
+    }
+    rdmforce = libxl_defbool_val(d_config->b_info.rdmforce) ? 1 : 0;
+
+    /* Nothing to do. */
+    if ( !rdmforce && !d_config->num_pcidevs )
+        return 0;

move check before creating pcidevs.


Okay,

@@ -99,40 +99,33 @@ int libxl__domain_device_setrdm(libxl__gc *gc,
     struct xen_guest_pcidev_info *pcidevs = NULL;
     uint32_t rdmforce = 0;

-    if ( d_config->num_pcidevs )
-    {
-        pcidevs = malloc(d_config->num_pcidevs*sizeof(xen_guest_pcidev_info_t));
-        if ( pcidevs )
-        {
-            for (i = 0; i < d_config->num_pcidevs; i++)
-            {
-                pcidevs[i].devfn = PCI_DEVFN(d_config->pcidevs[i].dev,
-                                             d_config->pcidevs[i].func);
-                pcidevs[i].bus = d_config->pcidevs[i].bus;
-                pcidevs[i].seg = d_config->pcidevs[i].domain;
-                pcidevs[i].flags = d_config->pcidevs[i].rdmforce &
-                                   PCI_DEV_RDM_CHECK;
-            }
-        }
-        else
-        {
-            LIBXL__LOG(CTX, LIBXL__LOG_ERROR,
-                               "Can't allocate for pcidevs.");
-            return -1;
-        }
-    }
     rdmforce = libxl_defbool_val(d_config->b_info.rdmforce) ? 1 : 0;
-
     /* Nothing to do. */
     if ( !rdmforce && !d_config->num_pcidevs )
         return 0;

+    pcidevs = malloc(d_config->num_pcidevs*sizeof(xen_guest_pcidev_info_t));
+    if ( pcidevs )
+    {
+        for (i = 0; i < d_config->num_pcidevs; i++)
+        {
+            pcidevs[i].devfn = PCI_DEVFN(d_config->pcidevs[i].dev,
+                                         d_config->pcidevs[i].func);
+            pcidevs[i].bus = d_config->pcidevs[i].bus;
+            pcidevs[i].seg = d_config->pcidevs[i].domain;
+        }
+    }
+    else
+    {
+        LIBXL__LOG(CTX, LIBXL__LOG_ERROR, "Can't allocate for pcidevs.");
+        return -1;
+    }
+
     ret = xc_domain_device_setrdm(ctx->xch, dm_domid,
                                   (uint32_t)d_config->num_pcidevs,
-                                  rdmforce,
+                                  rdmforce & PCI_DEV_RDM_CHECK,
                                   pcidevs);
-    if ( d_config->num_pcidevs )
-        free(pcidevs);
+    free(pcidevs);

     return ret;
 }
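For reference, the reordered flow agreed above (bail out early when
there is nothing to do, then allocate and fill the device array) can be
sketched as a self-contained program. The `pcidev_info` type,
`PCI_DEVFN` macro, and `setrdm_marshal` helper below are simplified
stand-ins for the real libxl/xc definitions, not the actual interfaces:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical stand-ins for the types used in the patch. */
#define PCI_DEVFN(dev, fn) ((((dev) & 0x1f) << 3) | ((fn) & 0x7))

typedef struct {
    uint16_t seg;
    uint8_t  bus;
    uint8_t  devfn;
    uint32_t flags;
} pcidev_info;

/*
 * Mirrors the reordered flow: check the "nothing to do" condition
 * first, and only then allocate and fill the array handed to the
 * hypercall. Returns the number of marshaled entries, or -1 on
 * allocation failure; *out receives the array (caller frees it).
 */
static int setrdm_marshal(const unsigned *devs, const unsigned *fns,
                          int num, uint32_t rdmforce, pcidev_info **out)
{
    pcidev_info *pcidevs;
    int i;

    *out = NULL;
    if (!rdmforce && !num)          /* nothing to do: skip allocation */
        return 0;

    pcidevs = malloc(num * sizeof(*pcidevs));
    if (num && !pcidevs)            /* malloc(0) may validly be NULL */
        return -1;

    for (i = 0; i < num; i++) {
        memset(&pcidevs[i], 0, sizeof(pcidevs[i]));
        pcidevs[i].devfn = PCI_DEVFN(devs[i], fns[i]);
    }
    *out = pcidevs;
    return num;
}
```

The `num && !pcidevs` guard avoids misreporting `malloc(0)` (which may
legitimately return NULL) as an allocation failure when only the global
rdmforce flag is set and no devices are assigned.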


Thanks
Tiejun

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

