
[Xen-devel] [PATCH] libxl: change default QEMU machine to pc-i440fx-1.6



Choose pc-i440fx-1.6 instead of pc for HVM guests, so that we know for
sure which machine we are emulating.

Use pc-i440fx-1.6 regardless of the xen_platform_pci option, and add the
xen-platform device only if requested. Choose slot 2 for the xen-platform
device for compatibility with existing installations. In the case of Intel
graphics passthrough, slot 2 might be needed by the graphics card.
However, now that we specify the slot explicitly, it is easy to change
the position of the xen-platform device on the PCI bus if graphics
passthrough is enabled.
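
For example (just a sketch of a possible follow-up, not part of this
patch), moving the xen-platform device out of the way of a passed-through
graphics card would only require changing the addr property in the hunk
below, here to a hypothetical free slot 3:

    /* Hypothetical follow-up sketch: place xen-platform at slot 3 instead
     * of slot 2, leaving slot 2 free for Intel graphics passthrough. */
    if (libxl_defbool_val(b_info->u.hvm.xen_platform_pci)) {
        flexarray_append(dm_args, "-device");
        flexarray_append(dm_args, "xen-platform,addr=0x3");
    }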

Move the machine options earlier, before any other emulated device
options; otherwise the PCI slot selected for the xen-platform device is
not available in QEMU.

Specify PIIX4_PM.acpi-pci-hotplug-with-bridge-support=off because, unlike
xenfv, the other QEMU machines do not have that option off by default.
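
For illustration, with this patch the device model for an HVM guest with
xen_platform_pci enabled is started with arguments along these lines
(all other arguments omitted):

    -machine pc-i440fx-1.6,accel=xen \
    -global PIIX4_PM.acpi-pci-hotplug-with-bridge-support=off \
    -device xen-platform,addr=0x2

When xen_platform_pci is disabled, only the -machine and -global
arguments are emitted.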

This patch does not change the emulated environment in the guest.

Refer to this thread: http://marc.info/?l=xen-devel&m=140023775929625&w=2

Signed-off-by: Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx>

diff --git a/tools/libxl/libxl_dm.c b/tools/libxl/libxl_dm.c
index 8abed7b..fef684f 100644
--- a/tools/libxl/libxl_dm.c
+++ b/tools/libxl/libxl_dm.c
@@ -476,6 +476,29 @@ static char ** libxl__build_device_model_args_new(libxl__gc *gc,
         flexarray_vappend(dm_args, "-k", keymap, NULL);
     }
 
+    flexarray_append(dm_args, "-machine");
+    switch (b_info->type) {
+    case LIBXL_DOMAIN_TYPE_PV:
+        flexarray_append(dm_args, "xenpv");
+        for (i = 0; b_info->extra_pv && b_info->extra_pv[i] != NULL; i++)
+            flexarray_append(dm_args, b_info->extra_pv[i]);
+        break;
+    case LIBXL_DOMAIN_TYPE_HVM:
+        flexarray_append(dm_args, "pc-i440fx-1.6,accel=xen");
+        flexarray_append(dm_args, "-global");
+        flexarray_append(dm_args, "PIIX4_PM.acpi-pci-hotplug-with-bridge-support=off");
+        if (libxl_defbool_val(b_info->u.hvm.xen_platform_pci)) {
+            flexarray_append(dm_args, "-device");
+            flexarray_append(dm_args, "xen-platform,addr=0x2");
+        }
+        for (i = 0; b_info->extra_hvm && b_info->extra_hvm[i] != NULL; i++)
+            flexarray_append(dm_args, b_info->extra_hvm[i]);
+        break;
+    default:
+        abort();
+    }
+
+
     if (b_info->type == LIBXL_DOMAIN_TYPE_HVM) {
         int ioemu_nics = 0;
 
@@ -645,29 +668,6 @@ static char ** libxl__build_device_model_args_new(libxl__gc *gc,
     for (i = 0; b_info->extra && b_info->extra[i] != NULL; i++)
         flexarray_append(dm_args, b_info->extra[i]);
 
-    flexarray_append(dm_args, "-machine");
-    switch (b_info->type) {
-    case LIBXL_DOMAIN_TYPE_PV:
-        flexarray_append(dm_args, "xenpv");
-        for (i = 0; b_info->extra_pv && b_info->extra_pv[i] != NULL; i++)
-            flexarray_append(dm_args, b_info->extra_pv[i]);
-        break;
-    case LIBXL_DOMAIN_TYPE_HVM:
-        if (!libxl_defbool_val(b_info->u.hvm.xen_platform_pci)) {
-            /* Switching here to the machine "pc" which does not add
-             * the xen-platform device instead of the default "xenfv" machine.
-             */
-            flexarray_append(dm_args, "pc,accel=xen");
-        } else {
-            flexarray_append(dm_args, "xenfv");
-        }
-        for (i = 0; b_info->extra_hvm && b_info->extra_hvm[i] != NULL; i++)
-            flexarray_append(dm_args, b_info->extra_hvm[i]);
-        break;
-    default:
-        abort();
-    }
-
     ram_size = libxl__sizekb_to_mb(b_info->max_memkb - b_info->video_memkb);
     flexarray_append(dm_args, "-m");
     flexarray_append(dm_args, libxl__sprintf(gc, "%"PRId64, ram_size));
