Re: [Xen-devel] [PATCH] libxl: fix cleanup of tap devices in libxl__device_destroy
On Thu, 2012-08-02 at 15:41 +0100, Ian Jackson wrote:
> Ian Campbell writes ("[PATCH] libxl: fix cleanup of tap devices in
> libxl__device_destroy"):
> > libxl: fix cleanup of tap devices in libxl__device_destroy
> >
> > We pass be_path to tapdisk_destroy but we've already deleted it so it fails
> > to
> > read tapdisk-params. However it appears that we need to destroy the tap
> > device
> > after tearing down xenstore, to avoid the leak reported by Greg Wettstein in
> > <201207312141.q6VLfJje012656@xxxxxxxxxxxxxxxxx>.
> >
> > So read the tapdisk-params in the cleanup transaction, before the remove,
> > and
> > pass that down to destroy_tapdisk instead. tapdisk-params may of course be
> > NULL
> > if the device isn't a tap device.
> >
> > There is no need to tear down the tap device from
> > libxl__initiate_device_remove
> > since this ultimately calls libxl__device_destroy.
> >
> > Propagate and log errors from libxl__device_destroy_tapdisk.
> Can you please wrap your commit messages to 70ish rather than 80 ?
> Here is a screenshot of my email client:
> http://www.chiark.greenend.org.uk/~ijackson/volatile/2012/wrap-damage.png

If someone tells me the rune to put into vimrc such that "gqj" does
this then sure.

> The code all looks good. Just one comment:
>
> >  int libxl__device_destroy(libxl__gc *gc, libxl__device *dev)
> >  {
> ...
> > @@ -531,6 +533,9 @@ int libxl__device_destroy(libxl__gc *gc,
> >      rc = libxl__xs_transaction_start(gc, &t);
> >      if (rc) goto out;
> >
> > +    /* May not exist if this is not a tap device */
> > +    tapdisk_params = libxl__xs_read(gc, t, tapdisk_path);
>
> You can still use libxl__xs_read_checked. It considers ENOENT a
> success (and therefore doesn't log about it).

OK.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
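
For reference, a minimal sketch of the shape being discussed above: read
tapdisk-params inside the cleanup transaction (using libxl__xs_read_checked,
as suggested, so ENOENT for a non-tap device is not an error), remove the
xenstore paths, and tear the tap device down only after the transaction
commits. This is an illustration, not the committed patch; helper names such
as libxl__xs_path_cleanup and libxl__device_destroy_tapdisk and their exact
signatures are assumed from the libxl tree of that era and may differ.

    /* Sketch only; assumes the libxl internal headers (libxl_internal.h). */
    int libxl__device_destroy(libxl__gc *gc, libxl__device *dev)
    {
        const char *be_path = libxl__device_backend_path(gc, dev);
        const char *fe_path = libxl__device_frontend_path(gc, dev);
        const char *tapdisk_path =
            libxl__sprintf(gc, "%s/tapdisk-params", be_path);
        const char *tapdisk_params = NULL;
        xs_transaction_t t = XBT_NULL;
        int rc;

        for (;;) {
            rc = libxl__xs_transaction_start(gc, &t);
            if (rc) goto out;

            /* Read tapdisk-params while the backend path still exists.
             * libxl__xs_read_checked treats ENOENT (not a tap device) as
             * success and leaves tapdisk_params NULL in that case. */
            rc = libxl__xs_read_checked(gc, t, tapdisk_path, &tapdisk_params);
            if (rc) goto out;

            /* Remove frontend and backend paths within the transaction. */
            libxl__xs_path_cleanup(gc, t, fe_path);
            libxl__xs_path_cleanup(gc, t, be_path);

            rc = libxl__xs_transaction_commit(gc, &t);
            if (!rc) break;        /* committed */
            if (rc < 0) goto out;  /* error; a positive rc means retry */
        }

        /* Tear down the tap device only after the xenstore cleanup has
         * committed, using the value read earlier, and propagate errors. */
        if (tapdisk_params)
            rc = libxl__device_destroy_tapdisk(gc, tapdisk_params);

    out:
        libxl__xs_transaction_abort(gc, &t);
        return rc;
    }

The retry loop is the usual libxl transaction pattern:
libxl__xs_transaction_commit returns 0 on success, a positive value when the
transaction must be retried, and a negative value on error, and the abort on
the out path is a no-op once the transaction has committed.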