
Re: [Xen-devel] [XEN][RFC PATCH V2 15/17] xl: support spawn/destroy on multiple device model



On 08/23/2012 02:56 PM, Ian Campbell wrote:
On Wed, 2012-08-22 at 13:32 +0100, Julien Grall wrote:
@@ -991,12 +1057,11 @@ static void domcreate_launch_dm(libxl__egc *egc, libxl__multidev *multidev,
          libxl__device_console_dispose(&console);

          if (need_qemu) {
-            dcs->dmss.dm.guest_domid = domid;
-            libxl__spawn_local_dm(egc, &dcs->dmss.dm);
+            assert(dcs->dmss);
+            domcreate_spawn_devmodel(egc, dcs, dcs->current_dmid);
              return;
          } else {
-            assert(!dcs->dmss.dm.guest_domid);
-            domcreate_devmodel_started(egc, &dcs->dmss.dm, 0);
+            assert(!dcs->dmss);
Doesn't this stop progress in this case, meaning we'll never get to
the end of the async op?

Indeed, I will fix it in the next patch version.
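Something along these lines should work (a sketch; it assumes
domcreate_devmodel_started() remains the completion callback and can
accept a NULL dmss in the no-device-model case, both of which would
need checking):

    if (need_qemu) {
        assert(dcs->dmss);
        domcreate_spawn_devmodel(egc, dcs, dcs->current_dmid);
        return;
    } else {
        assert(!dcs->dmss);
        /* No device model is needed: report completion explicitly
         * so the domain-creation state machine keeps progressing. */
        domcreate_devmodel_started(egc, NULL, 0);
        return;
    }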

              return;
          }
      }
[..]
@@ -1044,7 +1044,8 @@ int libxl__wait_for_device_model(libxl__gc *gc,
                                   void *check_callback_userdata)
  {
      char *path;
-    path = libxl__sprintf(gc, "/local/domain/0/device-model/%d/state", domid);
+    path = libxl__sprintf(gc, "/local/domain/0/dms/%u/%u/state",
+                          domid, dmid);
Isn't this control path shared with qemu? I'm not sure we can just
change it like that? We need to at least retain compatibility with
pre-disag qemus.

Indeed, as we have multiple QEMUs for the same domain, we need
one control path per QEMU.

Pre-disag QEMUs cannot work with my changes inside Xen anyway:
by default, Xen will not forward ioreqs if there is no ioreq server.
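To illustrate, each QEMU then gets its own state node under a
per-dmid subtree, along these lines (the layout is inferred from the
diff above; the exact set of values kept there is not shown):

    /local/domain/0/dms/<domid>/0/state   <- QEMU with dmid 0
    /local/domain/0/dms/<domid>/1/state   <- QEMU with dmid 1

so libxl__wait_for_device_model() only watches the QEMU identified
by dmid.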
  const char *libxl__domain_device_model(libxl__gc *gc,
-                                       const libxl_domain_build_info *info)
+                                       uint32_t dmid,
+                                       const libxl_domain_build_info *b_info)
  {
      libxl_ctx *ctx = libxl__gc_owner(gc);
      const char *dm;
+    libxl_domain_config *guest_config = CONTAINER_OF(b_info, *guest_config,
+                                                     b_info);

-    if (libxl_defbool_val(info->device_model_stubdomain))
+    if (libxl_defbool_val(guest_config->b_info.device_model_stubdomain))
You just extracted guest_config from b_info but you still have the
b_info pointer to hand. Why not use it? Likewise a few more times below.
An error; it will be fixed in the next patch version.
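The fixed test would then simply be (sketch):

    if (libxl_defbool_val(b_info->device_model_stubdomain))
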
+     * PCI device number. Before slot 3, we have IDE, ISA, SouthBridge and
+     * XEN PCI. These devices will be emulated in each QEMU, but only
+     * one QEMU (the one which emulates the default devices) will register
+     * these devices through the Xen PCI hypercall.
+     */
+    static unsigned int bdf = 3;
Do you mean const rather than static?

No, static. With QEMU disaggregation, the toolstack allocates
BDFs incrementally. QEMU is unable to know whether a BDF is already
allocated by another QEMU.
For the moment, the bdf variable is used to give a devfn to the
network card and the VGA card.

Isn't this baking in some implementation detail from the current qemu
version? What happens if it changes?

I don't have another way for the moment. I would be happy
if someone has a good solution.
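To make the current scheme concrete, the allocation amounts to
something like this (a sketch; next_bdf() is a hypothetical helper
name, not a function in the patch):

    /* Slots 0-2 are taken by the fixed IDE/ISA/SouthBridge/Xen PCI
     * devices, so toolstack-assigned devfns start at 3. */
    static unsigned int bdf = 3;

    /* Hand out the next free PCI slot (e.g. NIC gets 3, VGA gets 4);
     * since only the toolstack allocates, two QEMUs can never be
     * given the same devfn. */
    static unsigned int next_bdf(void)
    {
        return bdf++;
    }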

+
      libxl_ctx *ctx = libxl__gc_owner(gc);
      const libxl_domain_create_info *c_info = &guest_config->c_info;
      const libxl_domain_build_info *b_info = &guest_config->b_info;
+    const libxl_dm *dm_config = &guest_config->dms[dmid];
      const libxl_device_disk *disks = guest_config->disks;
      const libxl_device_nic *nics = guest_config->nics;
      const int num_disks = guest_config->num_disks;
      const int num_nics = guest_config->num_nics;
-    const libxl_vnc_info *vnc = libxl__dm_vnc(guest_config);
+    const libxl_vnc_info *vnc = libxl__dm_vnc(dmid, guest_config);
      const libxl_sdl_info *sdl = dm_sdl(guest_config);
      const char *keymap = dm_keymap(guest_config);
      flexarray_t *dm_args;
      int i;
      uint64_t ram_size;
+    uint32_t cap_ui = dm_config->capabilities & LIBXL_DM_CAP_UI;
+    uint32_t cap_ide = dm_config->capabilities & LIBXL_DM_CAP_IDE;
+    uint32_t cap_serial = dm_config->capabilities & LIBXL_DM_CAP_SERIAL;
+    uint32_t cap_audio = dm_config->capabilities & LIBXL_DM_CAP_AUDIO;
->capabilities is defined as 64 bits, but you use 32 here, which happens
to work if you know what the actual values of the enum are, but whoever
adds the 33rd capability will probably get it wrong.

      bool cap_foo = !!(dm_....capabilities & LIBXL_DM_CAP_FOO)

would probably work?
Indeed, it will be fixed in the next patch version.
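Applied to the variables above, that gives (sketch):

    bool cap_ui     = !!(dm_config->capabilities & LIBXL_DM_CAP_UI);
    bool cap_ide    = !!(dm_config->capabilities & LIBXL_DM_CAP_IDE);
    bool cap_serial = !!(dm_config->capabilities & LIBXL_DM_CAP_SERIAL);
    bool cap_audio  = !!(dm_config->capabilities & LIBXL_DM_CAP_AUDIO);

which stays correct no matter how wide the capabilities field grows.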

      dm_args = flexarray_make(16, 1);
      if (!dm_args)
@@ -348,11 +389,12 @@ static char ** libxl__build_device_model_args_new(libxl__gc *gc,
                        "-xen-domid",
                        libxl__sprintf(gc, "%d", guest_domid), NULL);

+    flexarray_append(dm_args, "-nodefaults");
Does this not cause a change in behaviour other than what you've
accounted for here?
By default QEMU emulates a VGA card and a network card. This option
disables them and avoids having to add "-net none".
I added it after a discussion on my first patch series:
https://lists.gnu.org/archive/html/qemu-devel/2012-03/msg04767.html
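To illustrate the effect: a QEMU started as, say,

    qemu -nodefaults -xen-domid 42 ...

creates no default VGA adapter or NIC at all, so each QEMU instance
only emulates the devices it is explicitly given, and "-net none"
becomes unnecessary.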

@@ -528,65 +583,69 @@ static char ** libxl__build_device_model_args_new(libxl__gc *gc,
          abort();
      }

-    ram_size = libxl__sizekb_to_mb(b_info->max_memkb - b_info->video_memkb);
+    // Allocate ram space of 32Mo per previous device model to store rom
What is this about?

(also that Mo looks a bit odd in among all these mb's)

It's space for ROM allocation: VGA, rtl8139 ROMs, and so on.
Each QEMU can load ROMs into memory, but its memory allocator
considers that it is alone and starts allocating ROM space
from the end of guest RAM.

It's a solution suggested by Stefano that avoids modifications
in QEMU. As we don't know the number of ROMs and their sizes
per QEMU, we chose a space of 32 MB to be safe; in the end,
most of that memory is never allocated.
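Concretely, the computation becomes something like this (a sketch of
the idea only; the quoted diff truncates the actual expression, so
the use of dmid here is an assumption):

    /* Reserve 32MB of guest address space per preceding device
     * model: each QEMU's ROM allocator counts down from what it
     * believes is the end of RAM, so giving every QEMU a disjoint
     * 32MB window keeps the ROM mappings from colliding. */
    ram_size = libxl__sizekb_to_mb(b_info->max_memkb - b_info->video_memkb)
               + 32 * dmid;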

--
Julien
