
Re: [Xen-devel] [PATCH][RFC] libxl: use new qemu parameters for emulated qemuu disks



On 18/05/2015 17:43, Wei Liu wrote:
On Fri, May 15, 2015 at 01:54:32PM +0200, Fabio Fantoni wrote:
NOTES:
This patch is only a quick draft for testing.

Some test results:
With xl create the cdrom works whether empty or not, and xl cd-insert
works. xl cd-eject seems to work, but with a Linux HVM domU the xl
command returns a QMP error "Device 'ide-N' is locked"; with Windows 7
no error is shown.
xl block-attach seems to work correctly; xl block-detach works
correctly with a Linux HVM guest but not with Windows 7 (it seems to
block the disk removal; I don't know whether the same happens without
this patch).
The SCSI disk case has not been tested yet.
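For the cd-eject lock error above, one way to check whether it is only the guest's tray lock is QMP's eject command, which takes an optional force flag that overrides the lock. A sketch against the domain's QMP monitor (the device id here is an assumption, matching the ids this patch sets):

```
{ "execute": "qmp_capabilities" }
{ "execute": "eject", "arguments": { "device": "ide-1", "force": true } }
```

If the forced eject succeeds where xl cd-eject fails, the error is just the guest holding the tray locked, not a bug in the new parameters.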

Any comments are appreciated.

I presume you're trying to use AHCI?

I have already used it many times with many domUs.

Do you notice improvement using AHCI when booting a guest?

The most significant improvement is with Lubuntu 15.04 HVM, where the boot time to login is only about a fifth of what it is without AHCI. With Windows 7 Pro 64-bit it varies: in the best case the boot time is about a third, in the worst it seems to gain only 10-20%. With Windows 8.1 the gain is only 5-10%, but Windows 8 boot time is very long in any case; I don't know whether that is caused by something unexpected in Xen or whether Windows 8 is simply slow and heavyweight.

The ahci support patch is here:
http://lists.xen.org/archives/html/xen-devel/2015-05/msg01804.html

This patch changes the qemu parameters for the other disk cases.
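Concretely, for an emulated IDE disk the patch replaces the legacy -drive shorthand with a split -drive/-device pair, so the emulated device gets a stable id that QMP commands can address (the file path and values below are illustrative, not from the patch):

```
# before: legacy option, QEMU creates the device implicitly
-drive file=/root/disk.img,if=ide,index=0,media=disk,format=raw,cache=writeback

# after: backend (-drive) and frontend (-device) are declared separately
-drive file=/root/disk.img,if=none,id=idedisk-0,format=raw,cache=writeback
-device ide-hd,drive=idedisk-0
```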


Signed-off-by: Fabio Fantoni <fabio.fantoni@xxxxxxx>
---
  tools/libxl/libxl_dm.c | 35 ++++++++++++++++++-----------------
  1 file changed, 18 insertions(+), 17 deletions(-)

diff --git a/tools/libxl/libxl_dm.c b/tools/libxl/libxl_dm.c
index 4bec5ba..6d00e38 100644
--- a/tools/libxl/libxl_dm.c
+++ b/tools/libxl/libxl_dm.c
@@ -811,7 +811,6 @@ static char ** libxl__build_device_model_args_new(libxl__gc *gc,
              int dev_number =
                  libxl__device_disk_dev_number(disks[i].vdev, &disk, &part);
              const char *format = qemu_disk_format_string(disks[i].format);
-            char *drive;
              const char *pdev_path;

            if (dev_number == -1) {
@@ -822,13 +821,14 @@ static char ** libxl__build_device_model_args_new(libxl__gc *gc,

            if (disks[i].is_cdrom) {
                  if (disks[i].format == LIBXL_DISK_FORMAT_EMPTY)
-                    drive = libxl__sprintf
-                        (gc, "if=ide,index=%d,media=cdrom,cache=writeback,id=ide-%i",
-                         disk, dev_number);
+                    flexarray_vappend(dm_args, "-drive",
+                        GCSPRINTF("if=none,id=ide-%i,cache=writeback", dev_number), "-device",
+                        GCSPRINTF("ide-cd,drive=ide-%i", dev_number), NULL);
                  else
-                    drive = libxl__sprintf
-                        (gc, "file=%s,if=ide,index=%d,media=cdrom,format=%s,cache=writeback,id=ide-%i",
-                         disks[i].pdev_path, disk, format, dev_number);
+                    flexarray_vappend(dm_args, "-drive",
+                        GCSPRINTF("file=%s,if=none,id=ide-%i,format=%s,cache=writeback",
+                        disks[i].pdev_path, dev_number, format), "-device",
+                        GCSPRINTF("ide-cd,drive=ide-%i", dev_number), NULL);
              } else {
                  if (disks[i].format == LIBXL_DISK_FORMAT_EMPTY) {
                      LIBXL__LOG(ctx, LIBXL__LOG_WARNING, "cannot support"
@@ -857,25 +857,26 @@ static char ** libxl__build_device_model_args_new(libxl__gc *gc,
                   * hd[a-d] and ignore the rest.
                   */
                  if (strncmp(disks[i].vdev, "sd", 2) == 0)
-                    drive = libxl__sprintf
-                        (gc, "file=%s,if=scsi,bus=0,unit=%d,format=%s,cache=writeback",
-                         pdev_path, disk, format);
+                    flexarray_vappend(dm_args, "-drive",
+                        GCSPRINTF("file=%s,if=none,id=scsidisk-%d,format=%s,cache=writeback",
+                        pdev_path, disk, format), "-device",
+                        GCSPRINTF("scsi-hd,drive=scsidisk-%d,scsi-id=%d",
+                        disk, disk), NULL);
                  else if (disk < 6 && libxl_defbool_val(b_info->u.hvm.ahci)){
I don't see an "ahci" field in the libxl IDL. Did you forget to commit
that part?

Another question: do we always want to enable AHCI? Do we want to make
it user-tunable? This affects whether you actually need a new field in
the IDL.

Wei.
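For reference, a minimal sketch of the missing IDL piece (my assumption, not part of this patch) would add a defbool to the hvm section of tools/libxl/libxl_types.idl:

```
# hypothetical field matching the b_info->u.hvm.ahci used below
("ahci", libxl_defbool),
```

together with a libxl_defbool_setdefault(&b_info->u.hvm.ahci, false) call in the build-info defaults, so AHCI stays opt-in rather than always enabled.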

                     flexarray_vappend(dm_args, "-drive",
                         GCSPRINTF("file=%s,if=none,id=ahcidisk-%d,format=%s,cache=writeback",
-                        pdev_path, disk, format), "-device", GCSPRINTF("ide-hd,bus=ahci0.%d,unit=0,drive=ahcidisk-%d",
+                        pdev_path, disk, format), "-device",
+                        GCSPRINTF("ide-hd,bus=ahci0.%d,unit=0,drive=ahcidisk-%d",
                         disk, disk), NULL);
-                    continue;
                  }else if (disk < 4)
-                    drive = libxl__sprintf
-                        (gc, "file=%s,if=ide,index=%d,media=disk,format=%s,cache=writeback",
-                         pdev_path, disk, format);
+                    flexarray_vappend(dm_args, "-drive",
+                        GCSPRINTF("file=%s,if=none,id=idedisk-%d,format=%s,cache=writeback",
+                        pdev_path, disk, format), "-device", GCSPRINTF("ide-hd,drive=idedisk-%d",
+                        disk), NULL);
                  else
                      continue; /* Do not emulate this disk */
              }
-            flexarray_append(dm_args, "-drive");
-            flexarray_append(dm_args, drive);
          }

    switch (b_info->u.hvm.vendor_device) {
--
1.9.1


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

