
Re: [Xen-devel] [PATCH v6 03/13] libxl: convert libxl_domain_destroy to an async op



Roger Pau Monne writes ("[PATCH v6 03/13] libxl: convert libxl_domain_destroy 
to an async op"):
> This change introduces some new structures, and breaks the mutual
> dependency that libxl_domain_destroy and libxl__destroy_device_model
> had. This is done by checking if the domid passed to
> libxl_domain_destroy has a stubdom, and then having the bulk of the
> destroy machinery in a separate function (libxl__destroy_domid) that
> doesn't check for stubdom presence, since we check for it in the upper
> level function. The reason behind this change is the need to use
> structures for ao operations, and it was impossible to have two
> different self-referencing structs.
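
[Editorial note: the call structure described in the commit message can be sketched roughly as below. This is not the real libxl code; `domid_to_stubdom()` stands in for the actual stubdom lookup, the teardown bodies are stubs, and only the shape of the calls matters: the upper-level entry point resolves the stubdom once, and the worker never calls back up.]

```c
#include <stdint.h>

typedef uint32_t domid_t;

/* Hypothetical lookup: returns the stubdom's domid, or 0 if none.
 * Hard-coded here purely for illustration. */
static domid_t domid_to_stubdom(domid_t domid)
{
    return domid == 5 ? 6 : 0;   /* pretend domain 5 has stubdom 6 */
}

/* The bulk of the teardown (cf. libxl__destroy_domid).  Crucially,
 * it no longer checks for a stubdom, so it cannot recurse into the
 * entry point below; it just records which domain it tore down. */
static int destroy_domid(domid_t domid, int *destroyed, int *count)
{
    destroyed[(*count)++] = domid;
    return 0;
}

/* Upper-level entry point (cf. libxl_domain_destroy): checks for a
 * stubdom exactly once, then invokes the non-recursive worker for
 * each domain that must go away. */
static int domain_destroy(domid_t domid, int *destroyed, int *count)
{
    domid_t stubdom = domid_to_stubdom(domid);
    if (stubdom) {
        int rc = destroy_domid(stubdom, destroyed, count);
        if (rc)
            return rc;
    }
    return destroy_domid(domid, destroyed, count);
}
```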

See below for a couple of typos, but:

Acked-by: Ian Jackson <ian.jackson@xxxxxxxxxxxxx>

> diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
> index 8612fe4..8478606 100644
> --- a/tools/libxl/libxl_internal.h
> +++ b/tools/libxl/libxl_internal.h
...
> +/* Prepare a bunch of devices for addition/removal. Every ao_device in
> + * ao_devices is set to 'active', and the ao_device 'base' filed is set to
                                                              ^^^^^
> + * the one pointed by aodevs.

field

> + */
> +_hidden void libxl__prepare_ao_devices(libxl__ao *ao,
> +                                       libxl__ao_devices *aodevs);
> +
> +/* Generic callback to use when adding/removing several devices, this will
> + * check if the given aodev is the last one, and call the callback in the
> + * parent libxl__ao_devices struct, passing the appropiate error if found.
                                                   ^^^^^^^^^^
appropriate
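
[Editorial note: the "generic callback" described in that hunk is a last-one-out pattern: each completing device marks itself inactive, and only the final completion invokes the callback in the parent struct, passing along the first error seen. A minimal sketch, with invented names and fields, not the actual libxl structs:]

```c
typedef struct ao_devices ao_devices;
typedef struct ao_device ao_device;

struct ao_device {
    int active;            /* still in flight? */
    int error;             /* result of this device's operation */
    ao_devices *base;      /* parent set this device belongs to */
};

struct ao_devices {
    int remaining;                             /* devices still active */
    int first_error;                           /* first error observed */
    void (*callback)(ao_devices *, int error); /* parent's callback */
};

/* Called once per device as its add/remove operation finishes.  Only
 * the last completion fires the parent callback, with the first
 * error found (or 0 on full success). */
static void device_complete(ao_device *aodev)
{
    ao_devices *aodevs = aodev->base;

    aodev->active = 0;
    if (aodev->error && !aodevs->first_error)
        aodevs->first_error = aodev->error;
    if (--aodevs->remaining == 0)
        aodevs->callback(aodevs, aodevs->first_error);
}

/* Tiny recording callback so the pattern can be exercised. */
static int last_error_seen = -1;
static void record_cb(ao_devices *aodevs, int error)
{
    (void)aodevs;
    last_error_seen = error;
}
```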

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel