Re: [Xen-devel] [PATCH v2 5/5] libxl: spawns two QEMUs for HVM guests
Stefano Stabellini writes ("Re: [PATCH v2 5/5] libxl: spawns two QEMUs for HVM guests"):
> That is of great help, but this code is quite complex and I don't
> understand it. I tried to complete the missing bits but I think I ended
> up with something quite far from what you had in mind. At least it
> compiles but it segfaults. See below:
...
> +static void qdisk_spawn_outcome(libxl__egc *egc, libxl__dm_spawn_state *dmss,
> +                                int rc)
> +{
> +    STATE_AO_GC(dmss->spawn.ao);
> +    int worst_rc = 0, i;
> +
> +    dmss->qemus[dmss->emulator_id].rc = rc;
> +
> +    for (i=0; i<dmss->num_qemus; i++) {
> +        if (libxl__spawn_inuse(&dmss->qemus[i].dmss->spawn))
> +            return;
> +        if (dmss->qemus[i].rc < worst_rc)
> +            worst_rc = dmss->qemus[i].rc;
> +    }
> +    /* all qemus have completed */
> +    if (worst_rc)
> +        domcreate_complete(egc, dmss->dcs, worst_rc);
> +}

You definitely need to do something in the case where worst_rc is 0;
otherwise, when all your qemus have spawned, libxl will just sit there.
You need to do something like what currently happens in
domcreate_devmodel_started: on failure, call domcreate_complete;
otherwise, initiate the next step.

My suggested spawn_outcome function needs to be called by _both_ spawns,
and only do the stuff in domcreate_devmodel_started when both are done.
> static void domcreate_bootloader_done(libxl__egc *egc,
>                                       libxl__bootloader_state *bl,
>                                       int rc)
> @@ -1014,8 +1033,27 @@ static void domcreate_bootloader_done(libxl__egc *egc,
>      dcs->dmss.dm.guest_config = dcs->guest_config;
>      dcs->dmss.dm.build_state = &dcs->build_state;
>      dcs->dmss.dm.callback = domcreate_devmodel_started;
> +    dcs->dmss.dm.emulator_id = QEMU_XEN_DEVICE_MODEL_ID;
> +    dcs->dmss.dm.qemus = libxl__malloc(gc, sizeof(struct qemu_spawn_state));
> +    dcs->dmss.dm.qemus[QEMU_XEN_DEVICE_MODEL_ID].dmss = &dcs->dmss.dm;
> +    dcs->dmss.dm.dcs = dcs;
> +    dcs->dmss.dm.num_qemus = 2;
>      dcs->dmss.callback = domcreate_devmodel_started;
>
> +    if (info->type == LIBXL_DOMAIN_TYPE_HVM) {
> +        libxl__dm_spawn_state *dmss2;
> +        GCNEW(dmss2);
> +        dmss2->guest_domid = domid;
> +        dmss2->spawn.ao = ao;
> +        dmss2->emulator_id = QEMU_XEN_PV_ID;
> +        dmss2->qemus = dcs->dmss.dm.qemus;
> +        dmss2->qemus[QEMU_XEN_PV_ID].dmss = dmss2;
> +        dmss2->callback = qdisk_spawn_outcome;
> +        dmss2->num_qemus = 2;
> +        dmss2->dcs = dcs;
> +        libxl__spawn_qdisk_backend(egc, dmss2);
> +    }

Here you have two arrays qemus[], one in dmss and one in dmss2. One
callback will happen for the spawn which uses dmss and the other for the
spawn which uses dmss2. I don't think that can be what you want.

I think you should just have a single array of dmss's. If you (the
programmer) know that you are always going to want zero, one or two
qemus, you can maybe allocate that array "statically" as part of the
domain_create_state.

Regardless of how you do the memory allocation, you need to be able to
recover the dcs from the dmss, since the spawn outcome callback gets the
dmss, so you may need to add a dcs* to the dmss.
> int libxl__destroy_device_model(libxl__gc *gc, uint32_t domid)
> {
> +    int rc;
>      char *path = libxl__device_model_xs_path(gc, false,
>                                               LIBXL_TOOLSTACK_DOMID,
>                                               domid, "");
>      if (!xs_rm(CTX->xsh, XBT_NULL, path))
>          LOG(ERROR, "xs_rm failed for %s", path);
> +
> +    kill_device_model(gc,
> +                      GCSPRINTF("libxl/%u/qdisk-backend-pid", domid));
> +
>      /* We should try to destroy the device model anyway. */
> -    return kill_device_model(gc,
> +    rc = kill_device_model(gc,
>          GCSPRINTF("/local/domain/%d/image/device-model-pid", domid));

This should be a loop, perhaps? I agree with Wei that the xenstore paths
should be made regular.

> +typedef struct libxl__domain_create_state libxl__domain_create_state;
> +struct qemu_spawn_state {
> +    int rc;
> +    struct libxl__dm_spawn_state *dmss;
> +};

It might be convenient to put the rc in the dmss instead of inventing a
new wrapper.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel