
Re: [Xen-devel] [Patch v6 11/13] tools/libxc: x86 HVM restore code



On 18/07/14 15:38, Wen Congyang wrote:
> At 2014/7/8 1:38, Andrew Cooper wrote:
>> Restore the x86 HVM specific parts of a domain.  This is the
>> HVM_CONTEXT and HVM_PARAMS records.
>>
>> There is no need for any page localisation.
>>
>> This also includes writing the trailing qemu save record to a file,
>> because this is what libxc currently does.  This is intended to be
>> moved into libxl proper in the future.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
>> ---
>>   tools/libxc/saverestore/common.h          |    1 +
>>   tools/libxc/saverestore/restore_x86_hvm.c |  336 +++++++++++++++++++++++++++++
>>   2 files changed, 337 insertions(+)
>>   create mode 100644 tools/libxc/saverestore/restore_x86_hvm.c
>>
>> diff --git a/tools/libxc/saverestore/common.h b/tools/libxc/saverestore/common.h
>> index 8d125f9..4894cac 100644
>> --- a/tools/libxc/saverestore/common.h
>> +++ b/tools/libxc/saverestore/common.h
>> @@ -241,6 +241,7 @@ extern struct xc_sr_save_ops save_ops_x86_pv;
>>   extern struct xc_sr_save_ops save_ops_x86_hvm;
>>
>>   extern struct xc_sr_restore_ops restore_ops_x86_pv;
>> +extern struct xc_sr_restore_ops restore_ops_x86_hvm;
>>
>>   struct xc_sr_record
>>   {
>> diff --git a/tools/libxc/saverestore/restore_x86_hvm.c b/tools/libxc/saverestore/restore_x86_hvm.c
>> new file mode 100644
>> index 0000000..0004dee
>> --- /dev/null
>> +++ b/tools/libxc/saverestore/restore_x86_hvm.c
>> @@ -0,0 +1,336 @@
>> +#include <assert.h>
>> +#include <arpa/inet.h>
>> +
>> +#include "common_x86.h"
>> +
>> +/* TODO: remove */
>> +static int handle_toolstack(struct xc_sr_context *ctx, struct xc_sr_record *rec)
>> +{
>> +    xc_interface *xch = ctx->xch;
>> +    int rc;
>> +
>> +    if ( !ctx->restore.callbacks || !ctx->restore.callbacks->toolstack_restore )
>> +        return 0;
>> +
>> +    rc = ctx->restore.callbacks->toolstack_restore(ctx->domid, rec->data, rec->length,
>> +                                                   ctx->restore.callbacks->data);
>> +    if ( rc < 0 )
>> +        PERROR("restoring toolstack");
>> +    return rc;
>> +}
>> +
>> +/*
>> + * Process an HVM_CONTEXT record from the stream.
>> + */
>> +static int handle_hvm_context(struct xc_sr_context *ctx, struct xc_sr_record *rec)
>> +{
>> +    xc_interface *xch = ctx->xch;
>> +    int rc;
>> +
>> +    rc = xc_domain_hvm_setcontext(xch, ctx->domid, rec->data, rec->length);
>> +    if ( rc < 0 )
>> +        PERROR("Unable to restore HVM context");
>> +    return rc;
>> +}
>> +
>> +/*
>> + * Process an HVM_PARAMS record from the stream.
>> + */
>> +static int handle_hvm_params(struct xc_sr_context *ctx, struct xc_sr_record *rec)
>> +{
>> +    xc_interface *xch = ctx->xch;
>> +    struct xc_sr_rec_hvm_params *hdr = rec->data;
>> +    struct xc_sr_rec_hvm_params_entry *entry = hdr->param;
>> +    unsigned int i;
>> +    int rc;
>> +
>> +    if ( rec->length < sizeof(*hdr)
>> +         || rec->length < sizeof(*hdr) + hdr->count * sizeof(*entry) )
>> +    {
>> +        ERROR("hvm_params record is too short");
>> +        return -1;
>> +    }
>> +
>> +    for ( i = 0; i < hdr->count; i++, entry++ )
>> +    {
>> +        switch ( entry->index )
>> +        {
>> +        case HVM_PARAM_CONSOLE_PFN:
>> +            ctx->restore.console_mfn = entry->value;
>> +            xc_clear_domain_page(xch, ctx->domid, entry->value);
>> +            break;
>> +        case HVM_PARAM_STORE_PFN:
>> +            ctx->restore.xenstore_mfn = entry->value;
>> +            xc_clear_domain_page(xch, ctx->domid, entry->value);
>> +            break;
>> +        case HVM_PARAM_IOREQ_PFN:
>> +        case HVM_PARAM_BUFIOREQ_PFN:
>> +            xc_clear_domain_page(xch, ctx->domid, entry->value);
>
> The ioreq page may contain pending I/O requests, so I think we should
> not clear them.
>
> Thanks
> Wen Congyang

I noticed that in your other series.

While I can see why it is causing problems for you, avoiding clearing
the page only hides the problem; it doesn't solve it.

Amongst other things, avoiding clearing the page could result in qemu
binding to the wrong event channels.  Without sending the other IO
emulation state, leaving stale ioreqs in the rings is dangerous.
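
To make that concrete, here is a rough paraphrase of the shared ioreq
page layout and of how qemu uses it (based on this era's
xen/include/public/hvm/ioreq.h and qemu's xen-all.c; trimmed, and
field names approximate rather than authoritative):

    /* One slot per vcpu in the shared ioreq page. */
    struct ioreq {
        uint64_t addr;      /* physical address of the access */
        uint64_t data;      /* data, or guest paddr of the data */
        uint32_t count;     /* repeat count for rep prefixes */
        uint32_t size;      /* access size in bytes */
        uint32_t vp_eport;  /* event channel for device model notification */
        uint16_t _pad0;
        uint8_t  state:4;   /* STATE_IOREQ_NONE/READY/INPROCESS/IORESP_READY */
        uint8_t  data_is_ptr:1;
        uint8_t  dir:1;     /* 1 = read, 0 = write */
        uint8_t  df:1;
        uint8_t  _pad1:1;
        uint8_t  type;
    };

    struct shared_iopage {
        struct ioreq vcpu_ioreq[1];
    };

    /* At startup, qemu binds to whatever port it finds in each
     * slot, roughly: */
    port = xc_evtchn_bind_interdomain(xce, domid,
                                      shared_page->vcpu_ioreq[i].vp_eport);

If the page arrives on the new host with the sender's contents intact,
those vp_eport values name event channels which do not exist (or mean
something else entirely) on the receiving side, and any slot whose
state is not STATE_IOREQ_NONE is a stale request qemu will try to act
on.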

All of this comes as a result of attempting to perform actions (i.e.
migrating an essentially unpaused VM) outside of the originally
designed use case, and you are going to have to do more than just
avoid clearing these pages.
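
If the eventual goal really is to checkpoint with I/O potentially in
flight, a more plausible direction is to quiesce the rings first.
Purely a sketch of the idea, not existing libxc code (the helper and
its polling loop are hypothetical; only the STATE_* constants come
from ioreq.h):

    /* Hypothetical: before snapshotting, wait until no vcpu has an
     * ioreq outstanding.  The vcpus must already be paused, or new
     * requests keep arriving and this never terminates. */
    static void drain_ioreqs(struct shared_iopage *page, int nr_vcpus)
    {
        int i;

        for ( i = 0; i < nr_vcpus; i++ )
            while ( page->vcpu_ioreq[i].state != STATE_IOREQ_NONE )
                ; /* qemu still owes a response for this slot */
    }

Even then, that only closes the window in the rings themselves; the
rest of qemu's emulation state still has to travel with the domain.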

~Andrew

>
>> +            break;
>> +        }
>> +
>> +        rc = xc_set_hvm_param(xch, ctx->domid, entry->index, entry->value);
>> +        if ( rc < 0 )
>> +        {
>> +            PERROR("set HVM param %"PRId64" = 0x%016"PRIx64,
>> +                   entry->index, entry->value);
>> +            return rc;
>> +        }
>> +    }
>> +    return 0;
>> +}
>> +

