Re: [Xen-devel] [PATCH V2 8/9] libxl_dom: Call the right switch logdirty for the right DM.
On Mon, 2012-09-17 at 19:22 +0100, Anthony PERARD wrote:
> This patch dispatches the switch logdirty call depending on which device model
> version is running.
>
> The call to qemu-xen is synchrone, not like the one to qemu-xen-traditional.
synchronous
IIRC there was talk of making the qmp calls asynchronous in the future,
since they might be long-running?
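(Just to illustrate the shape such a change could take, here is a purely
hypothetical sketch, not an existing libxl interface: an asynchronous
variant might take a completion callback and resume the save helper from
there instead of blocking in the QMP call.

    /* Hypothetical sketch only: libxl__qmp_set_global_dirty_log() is
     * synchronous today and no such async variant exists in libxl. */
    typedef void libxl__qmp_done_cb(libxl__egc *egc, void *opaque, int rc);

    int libxl__qmp_set_global_dirty_log_async(libxl__gc *gc, int domid,
                                              bool enable,
                                              libxl__qmp_done_cb *done,
                                              void *opaque);

The caller would then complete the logdirty switch from the callback,
much as the qemu-xen-traditional path already does from its xenstore
watch.)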
I also have some concerns about the existing use of the 3rd parameter to
switch_logdirty_done. In the prototype it takes an "int ok" but in the
definition takes an "int broke", which seems to be inverted. This value
is passed as the return_value argument to
libxl__xc_domain_saverestore_async_callback_done. The current callers
pass 0 (success) or -1 (error) and you follow that precedent, so your
patch is fine IMHO. But I wonder if there isn't scope for some cleanup
here somewhere.
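For concreteness, the mismatch looks roughly like this (paraphrased from
libxl_dom.c from memory, so treat the details as approximate):

    /* forward declaration earlier in the file: */
    static void switch_logdirty_done(libxl__egc *egc,
                                     libxl__domain_suspend_state *dss, int ok);

    /* ...but the definition names the same parameter the other way: */
    static void switch_logdirty_done(libxl__egc *egc,
                                     libxl__domain_suspend_state *dss, int broke)
    {
        ...
        libxl__xc_domain_saverestore_async_callback_done(egc, &dss->shs, broke);
    }

Passing 0 for success fits the "broke" reading but is inverted relative
to the "ok" one.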
However I don't think any of that should block this patch so:
Acked-by: Ian Campbell <ian.campbell@xxxxxxxxxx>
IanJ, while I was looking I noticed that remus_checkpoint_dm_saved calls
libxl__xc_domain_saverestore_async_callback_done directly instead of via
switch_logdirty_done like everyone else. Since the s_l_d cancels timers
and stuff too, I wonder if this is a bug?
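For reference, the cleanup I mean is roughly (again a sketch from
memory):

    /* switch_logdirty_done tears these down before completing: */
    libxl__ev_xswatch_deregister(gc, &lds->watch);
    libxl__ev_time_deregister(gc, &lds->timeout);
    libxl__xc_domain_saverestore_async_callback_done(egc, &dss->shs, broke);

whereas remus_checkpoint_dm_saved jumps straight to the final call,
leaving the watch and timeout, if still registered, in place.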
Ian.
>
> Signed-off-by: Anthony PERARD <anthony.perard@xxxxxxxxxx>
> ---
> tools/libxl/libxl_dom.c | 45 ++++++++++++++++++++++++++++++++++++++++++---
> 1 file changed, 42 insertions(+), 3 deletions(-)
>
> diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
> index e1de832..95da18e 100644
> --- a/tools/libxl/libxl_dom.c
> +++ b/tools/libxl/libxl_dom.c
> @@ -685,10 +685,10 @@ static void logdirty_init(libxl__logdirty_switch *lds)
>      libxl__ev_time_init(&lds->timeout);
>  }
>
> -void libxl__domain_suspend_common_switch_qemu_logdirty
> -                                    (int domid, unsigned enable, void *user)
> +static void domain_suspend_switch_qemu_xen_traditional_logdirty
> +                                    (int domid, unsigned enable,
> +                                     libxl__save_helper_state *shs)
>  {
> -    libxl__save_helper_state *shs = user;
>      libxl__egc *egc = shs->egc;
>      libxl__domain_suspend_state *dss = CONTAINER_OF(shs, *dss, shs);
>      libxl__logdirty_switch *lds = &dss->logdirty;
> @@ -756,6 +756,45 @@ void libxl__domain_suspend_common_switch_qemu_logdirty
>          switch_logdirty_done(egc,dss,-1);
>  }
>
> +static void domain_suspend_switch_qemu_xen_logdirty
> +                                    (int domid, unsigned enable,
> +                                     libxl__save_helper_state *shs)
> +{
> +    libxl__egc *egc = shs->egc;
> +    libxl__domain_suspend_state *dss = CONTAINER_OF(shs, *dss, shs);
> +    STATE_AO_GC(dss->ao);
> +    int rc;
> +
> +    rc = libxl__qmp_set_global_dirty_log(gc, domid, enable);
> +    if (!rc) {
> +        libxl__xc_domain_saverestore_async_callback_done(egc, shs, 0);
> +    } else {
> +        LOG(ERROR,"logdirty switch failed (rc=%d), aborting suspend",rc);
> +        libxl__xc_domain_saverestore_async_callback_done(egc, shs, -1);
> +    }
> +}
> +
> +void libxl__domain_suspend_common_switch_qemu_logdirty
> +                                    (int domid, unsigned enable, void *user)
> +{
> +    libxl__save_helper_state *shs = user;
> +    libxl__egc *egc = shs->egc;
> +    libxl__domain_suspend_state *dss = CONTAINER_OF(shs, *dss, shs);
> +    STATE_AO_GC(dss->ao);
> +
> +    switch (libxl__device_model_version_running(gc, domid)) {
> +    case LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN_TRADITIONAL:
> +        domain_suspend_switch_qemu_xen_traditional_logdirty(domid, enable, shs);
> +        break;
> +    case LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN:
> +        domain_suspend_switch_qemu_xen_logdirty(domid, enable, shs);
> +        break;
> +    default:
> +        LOG(ERROR,"logdirty switch failed"
> +            ", no valid device model version found, aborting suspend");
> +        libxl__xc_domain_saverestore_async_callback_done(egc, shs, -1);
> +    }
> +}
>  static void switch_logdirty_timeout(libxl__egc *egc, libxl__ev_time *ev,
>                                      const struct timeval *requested_abs)
>  {