[Xen-devel] [PATCH] tools/xl: disallow PCI device assignment for HVM guest when PoD is enabled



This replicates a Xend behavior, see ec789523749 ("xend: Dis-allow
device assignment if PoD is enabled.").

Allegedly-reported-by: Konrad Wilk <konrad.wilk@xxxxxxxxxx>
Signed-off-by: Wei Liu <wei.liu2@xxxxxxxxxx>
Cc: Ian Campbell <ian.campbell@xxxxxxxxx>
Cc: Ian Jackson <ian.jackson@xxxxxxxxxxxx>
---
This was listed in the 4.4 development update. A quick skim through the
hypervisor VT-d changesets suggests the situation has stayed the same for
the past three years -- at least I didn't find any log message related to "PoD".

Run: git log --since="2010-01-21" xen/drivers/passthrough/vtd
(It was first reported on 2010-01-21)

This patch was tested by setting the memory=, maxmem= and pci=[] parameters
for both HVM and PV guests. In the HVM case I needed claim_mode=0 in
/etc/xen/xl.conf to make xl actually create the HVM guest with PoD mode
enabled.
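
For reference, a guest config along these lines should be enough to exercise
the new check (the file name, guest name and PCI BDF below are made up for
illustration); memory= below maxmem= is what makes xl build the HVM guest in
PoD mode:

  # /etc/xen/hvm-pod-pci.cfg -- illustrative only
  builder = "hvm"
  name    = "hvm-pod-pci"
  memory  = 1024            # target, in MiB
  maxmem  = 2048            # maxmem > memory => PoD for HVM
  pci     = [ '07:00.0' ]   # any assignable BDF

  # /etc/xen/xl.conf
  claim_mode = 0

With that in place the guest hits the new error path below instead of being
created.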
---
 tools/libxl/xl_cmdimpl.c |   29 +++++++++++++++++++++++++++--
 1 file changed, 27 insertions(+), 2 deletions(-)

diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
index c30f495..59aba7d 100644
--- a/tools/libxl/xl_cmdimpl.c
+++ b/tools/libxl/xl_cmdimpl.c
@@ -738,6 +738,7 @@ static void parse_config_data(const char *config_source,
     int pci_msitranslate = 0;
     int pci_permissive = 0;
     int i, e;
+    bool pod_enabled = false;
 
     libxl_domain_create_info *c_info = &d_config->c_info;
     libxl_domain_build_info *b_info = &d_config->b_info;
@@ -916,6 +917,12 @@ static void parse_config_data(const char *config_source,
     if (!xlu_cfg_get_long (config, "maxmem", &l, 0))
         b_info->max_memkb = l * 1024;
 
+    /* If target_memkb is smaller than max_memkb, the subsequent call
+     * to libxc when building the HVM domain will enable PoD mode.
+     */
+    pod_enabled = (c_info->type == LIBXL_DOMAIN_TYPE_HVM) &&
+        (b_info->target_memkb < b_info->max_memkb);
+
     libxl_defbool_set(&b_info->claim_mode, claim_mode);
 
     if (xlu_cfg_get_string (config, "on_poweroff", &buf, 0))
@@ -1468,9 +1475,9 @@ skip_vfb:
         xlu_cfg_get_defbool(config, "e820_host", &b_info->u.pv.e820_host, 0);
     }
 
+    d_config->num_pcidevs = 0;
+    d_config->pcidevs = NULL;
     if (!xlu_cfg_get_list (config, "pci", &pcis, 0, 0)) {
-        d_config->num_pcidevs = 0;
-        d_config->pcidevs = NULL;
         for(i = 0; (buf = xlu_cfg_get_listitem (pcis, i)) != NULL; i++) {
             libxl_device_pci *pcidev;
 
@@ -1488,6 +1495,24 @@ skip_vfb:
             libxl_defbool_set(&b_info->u.pv.e820_host, true);
     }
 
+    /* We cannot have PoD and PCI device assignment at the same
+     * time. The VT-d engine needs to set up the entire page table
+     * for the domain, but with PoD enabled, unpopulated memory is
+     * marked as populate_on_demand and the VT-d engine won't set
+     * up page tables for it. Any DMA to that memory may therefore
+     * cause a DMA fault.
+     *
+     * This is restricted to HVM guests, as only VT-d was relevant
+     * in the Xend counterpart. We're late in the release cycle, so
+     * the change should only do what's necessary. We can revisit
+     * whether to do the same for PV guests in the future.
+     */
+    if (c_info->type == LIBXL_DOMAIN_TYPE_HVM &&
+        d_config->num_pcidevs && pod_enabled) {
+        fprintf(stderr, "PCI device assignment for HVM guest failed due to Populate-on-Demand enabled\n");
+        exit(1);
+    }
+
     switch (xlu_cfg_get_list(config, "cpuid", &cpuids, 0, 1)) {
     case 0:
         {
-- 
1.7.10.4
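
(Not part of the patch, just illustrating the intended behaviour: with a
PoD + pci= config like the sketch above, "xl create" should now bail out
while parsing the config, along the lines of

  # xl create /etc/xen/hvm-pod-pci.cfg
  PCI device assignment for HVM guest failed due to Populate-on-Demand enabled

and exit non-zero, rather than creating a guest that could later hit DMA
faults.)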


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel