Re: [Xen-devel] [PATCH 09 of 13 RFC] libxl: destroy devices before device model
On Wed, 18 Jan 2012, Roger Pau Monne wrote:
> # HG changeset patch
> # User Roger Pau Monne <roger.pau@xxxxxxxxxxxxx>
> # Date 1326728472 -3600
> # Node ID b5d63cf90d4ef8d222ae282e279b90b7f73f18c3
> # Parent  5849bf7c4507edbe900de00332f1218de2d9f45f
> libxl: destroy devices before device model
>
> Destroying the device model before unplugging the devices prevented
> a clean unplug, since libxl was waiting for dm devices to shut down
> when the userspace process was already killed.

I think that this change is correct but have you tested it with any
backends running in qemu?

> Signed-off-by: Roger Pau Monne <roger.pau@xxxxxxxxxxxxx>
>
> diff -r 5849bf7c4507 -r b5d63cf90d4e tools/libxl/libxl.c
> --- a/tools/libxl/libxl.c	Mon Jan 16 16:40:40 2012 +0100
> +++ b/tools/libxl/libxl.c	Mon Jan 16 16:41:12 2012 +0100
> @@ -802,15 +802,15 @@ int libxl_domain_destroy(libxl_ctx *ctx,
>      if (rc < 0) {
>          LIBXL__LOG_ERRNOVAL(ctx, LIBXL__LOG_ERROR, rc, "xc_domain_pause failed for %d", domid);
>      }
> +    if (libxl__devices_destroy(gc, domid) < 0)
> +        LIBXL__LOG(ctx, LIBXL__LOG_ERROR,
> +                   "libxl__devices_destroy failed for %d", domid);
>      if (dm_present) {
>          if (libxl__destroy_device_model(gc, domid) < 0)
>              LIBXL__LOG(ctx, LIBXL__LOG_ERROR, "libxl__destroy_device_model failed for %d", domid);
>
>          libxl__qmp_cleanup(gc, domid);
>      }
> -    if (libxl__devices_destroy(gc, domid) < 0)
> -        LIBXL__LOG(ctx, LIBXL__LOG_ERROR,
> -                   "libxl__devices_destroy failed for %d", domid);
>
>      vm_path = libxl__xs_read(gc, XBT_NULL, libxl__sprintf(gc, "%s/vm", dom_path));
>      if (vm_path)

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
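[For readers skimming the thread: the sketch below is not the actual libxl source, only a minimal illustration of the teardown ordering the patch establishes in libxl_domain_destroy, using the functions quoted in the diff above. The helper name domain_destroy_order and the standalone structure are invented for illustration.]

    /*
     * Simplified sketch (assumed structure, not the real libxl code) of the
     * ordering after the patch: devices are destroyed first, while the
     * device model (qemu) is still running and can acknowledge the unplug;
     * only then is the device model itself killed.
     */
    static void domain_destroy_order(libxl__gc *gc, libxl_ctx *ctx,
                                     uint32_t domid, int dm_present)
    {
        /* 1. Unplug/destroy the guest's devices while qemu can still respond. */
        if (libxl__devices_destroy(gc, domid) < 0)
            LIBXL__LOG(ctx, LIBXL__LOG_ERROR,
                       "libxl__devices_destroy failed for %d", domid);

        /* 2. Only afterwards tear down the device model and its QMP state. */
        if (dm_present) {
            if (libxl__destroy_device_model(gc, domid) < 0)
                LIBXL__LOG(ctx, LIBXL__LOG_ERROR,
                           "libxl__destroy_device_model failed for %d", domid);
            libxl__qmp_cleanup(gc, domid);
        }
    }

Before the patch, step 2 ran first, so by the time libxl asked the backends to unplug, the qemu process that served them was already dead and the unplug could not complete cleanly.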