
Re: [Xen-devel] [PATCH v2][RFC] libxl: Add AHCI support for upstream qemu



On 18/05/2015 17:53, Wei Liu wrote:
On Thu, May 14, 2015 at 01:11:13PM +0200, Fabio Fantoni wrote:
Usage:
ahci=0|1 (default=0)

If enabled, adds an ICH9 disk controller in AHCI mode and uses it with
upstream qemu to emulate disks instead of IDE.
Is ICH9 available in our default setup?

Why do we not always enable AHCI?

AHCI seems to require the ICH9 controller (the default in Q35, but not in the older machine type used by Xen). I think switching ide->ahci automatically for all cases could cause problems, probably with save/restore, and more likely with Windows <=7 without PV drivers, where switching ide<->ahci requires a registry change to avoid a blue screen.


It doesn't support cdroms, which still use IDE (cdroms will use the new
qemu parameter "-device ide-cd").
AHCI requires the new qemu parameters, but for now the other emulated disk
cases keep the old ones, because automatic bus selection seems buggy in
qemu when using the new parameters. (I'll retry.)

Buggy as in? Have you reported to QEMU upstream?

I already reported it a long time ago on xen-devel and qemu-devel; the only reply from qemu-devel was to use a fixed bus, which is what I did in v2 of this patch.
Can someone tell me whether a fixed bus could be a problem?
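The fixed-bus approach discussed above can be sketched as the qemu arguments the device model would end up with. This is an illustrative sketch only: the patch hunks below show "-device ahci,id=ahci0" directly, while the image paths, drive ids, and the "-drive if=none" plus "-device ide-hd,bus=ahci0.N" pairing are assumptions based on standard qemu syntax, not taken from the patch.

```shell
# Build qemu arguments for two AHCI disks with fixed bus/unit assignment,
# instead of letting qemu auto-assign the bus (the reportedly buggy path).
qemu_args="-device ahci,id=ahci0"
unit=0
for img in disk0.img disk1.img; do
    # Backend: the raw image, detached from any default bus (if=none).
    qemu_args="$qemu_args -drive file=$img,if=none,id=ahcidisk-$unit,format=raw,cache=writeback"
    # Frontend: an ide-hd device pinned to a fixed AHCI port, ahci0.N.
    qemu_args="$qemu_args -device ide-hd,bus=ahci0.$unit,drive=ahcidisk-$unit"
    unit=$((unit + 1))
done
echo "$qemu_args"
```

Pinning each disk to `ahci0.$unit` sidesteps qemu's automatic bus selection entirely, which is why the question of whether a fixed bus causes problems elsewhere (e.g. hotplug) matters.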


NOTES:
This patch is only a quick draft for testing.
Tested with 1 and 6 disks on Ubuntu 15.04 HVM, Windows 7, and Windows 8
domUs.
A doc entry and a libxl.h define should be added; I'll do that.
The other emulated disk cases should be converted to use the new qemu
parameters, but a fix in qemu is probably needed first.

Any comment is appreciated.

Signed-off-by: Fabio Fantoni <fabio.fantoni@xxxxxxx>

Changes in v2:
- libxl_dm.c: manual bus and unit selection (as a workaround for the qemu
bug) to have multiple disks working.
What's the relation between this patch and the other patch you posted later?

---
  tools/libxl/libxl_create.c  |  1 +
  tools/libxl/libxl_dm.c      | 10 +++++++++-
  tools/libxl/libxl_types.idl |  1 +
  tools/libxl/xl_cmdimpl.c    |  1 +
  4 files changed, 12 insertions(+), 1 deletion(-)

diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
index f0da7dc..fcfe24a 100644
--- a/tools/libxl/libxl_create.c
+++ b/tools/libxl/libxl_create.c
@@ -322,6 +322,7 @@ int libxl__domain_build_info_setdefault(libxl__gc *gc,
          libxl_defbool_setdefault(&b_info->u.hvm.nested_hvm,         false);
          libxl_defbool_setdefault(&b_info->u.hvm.usb,                false);
          libxl_defbool_setdefault(&b_info->u.hvm.xen_platform_pci,   true);
+        libxl_defbool_setdefault(&b_info->u.hvm.ahci,               false);
          libxl_defbool_setdefault(&b_info->u.hvm.spice.enable, false);
          if (!libxl_defbool_val(b_info->u.hvm.spice.enable) &&
diff --git a/tools/libxl/libxl_dm.c b/tools/libxl/libxl_dm.c
index 0c6408d..4bec5ba 100644
--- a/tools/libxl/libxl_dm.c
+++ b/tools/libxl/libxl_dm.c
@@ -804,6 +804,8 @@ static char ** libxl__build_device_model_args_new(libxl__gc *gc,
      flexarray_append(dm_args, libxl__sprintf(gc, "%"PRId64, ram_size));
      if (b_info->type == LIBXL_DOMAIN_TYPE_HVM) {
+        if (libxl_defbool_val(b_info->u.hvm.ahci))
+            flexarray_append_pair(dm_args, "-device", "ahci,id=ahci0");
          for (i = 0; i < num_disks; i++) {
              int disk, part;
              int dev_number =
@@ -858,7 +860,13 @@ static char ** libxl__build_device_model_args_new(libxl__gc *gc,
                     drive = libxl__sprintf
                         (gc, "file=%s,if=scsi,bus=0,unit=%d,format=%s,cache=writeback",
                          pdev_path, disk, format);
-                else if (disk < 4)
+                else if (disk < 6 && libxl_defbool_val(b_info->u.hvm.ahci)){
And you choose 6 because?

Because the ICH9 AHCI controller has 6 ports, so 6 is the max; IDE instead has 2 channels, each with master/slave.


Wei.


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
