[Xen-devel] [PATCH v6 01/16] x86/hvm: make sure emulation is retried if domain is shutting down
The addition of commit 2df1aa01 "x86/hvm: remove hvm_io_pending() check
in hvmemul_do_io()" causes a problem in migration because I/O that was
caught by the test of vcpu_start_shutdown_deferral() in
hvm_send_assist_req() is now considered completed rather than requiring
a retry.

This patch fixes the problem by having hvm_send_assist_req() return
X86EMUL_RETRY rather than X86EMUL_OKAY if the
vcpu_start_shutdown_deferral() test fails and then making sure that the
emulation state is reset if the domain is found to be shutting down.

Reported-by: Don Slutz <don.slutz@xxxxxxxxx>
Signed-off-by: Paul Durrant <paul.durrant@xxxxxxxxxx>
Cc: Keir Fraser <keir@xxxxxxx>
Cc: Jan Beulich <jbeulich@xxxxxxxx>
Reviewed-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
---
 xen/arch/x86/hvm/emulate.c | 2 +-
 xen/arch/x86/hvm/hvm.c     | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
index fe5661d..8b60843 100644
--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -183,7 +183,7 @@ static int hvmemul_do_io(
     else
     {
         rc = hvm_send_assist_req(s, &p);
-        if ( rc != X86EMUL_RETRY )
+        if ( rc != X86EMUL_RETRY || curr->domain->is_shutting_down )
             vio->io_state = HVMIO_none;
         else if ( data_is_addr || dir == IOREQ_WRITE )
             rc = X86EMUL_OKAY;
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 535d622..f0cf064 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -2613,7 +2613,7 @@ int hvm_send_assist_req(struct hvm_ioreq_server *s, ioreq_t *proto_p)
     ASSERT(s);
 
     if ( unlikely(!vcpu_start_shutdown_deferral(curr)) )
-        return X86EMUL_OKAY;
+        return X86EMUL_RETRY;
 
     list_for_each_entry ( sv,
                           &s->ioreq_vcpu_list,
--
1.7.10.4

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
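For readers tracing the two hunks above, below is a small, self-contained C
sketch of the control flow the patch establishes. The names (do_io,
send_assist_req, start_shutdown_deferral) and the struct definitions are
simplified stand-ins invented for illustration, not the real Xen code; the
point is only that a failed shutdown deferral now yields X86EMUL_RETRY and
the caller drops its I/O state back to HVMIO_none when the domain is
shutting down, so the I/O is re-attempted rather than treated as completed.

/* Hypothetical stand-alone illustration of the retry logic above; the
 * structures and helpers are simplified stand-ins, not Xen's own. */
#include <stdbool.h>
#include <stdio.h>

enum { X86EMUL_OKAY, X86EMUL_RETRY };
enum { HVMIO_none, HVMIO_awaiting_completion };

struct domain { bool is_shutting_down; };
struct vcpu_io { int io_state; };

/* Stand-in for vcpu_start_shutdown_deferral(): fails once shutdown starts. */
static bool start_shutdown_deferral(const struct domain *d)
{
    return !d->is_shutting_down;
}

/* Stand-in for hvm_send_assist_req(): with the patch, a failed deferral
 * reports X86EMUL_RETRY instead of X86EMUL_OKAY, so the I/O is not
 * silently treated as completed. */
static int send_assist_req(const struct domain *d)
{
    if ( !start_shutdown_deferral(d) )
        return X86EMUL_RETRY;
    /* ... otherwise the ioreq would be queued to the emulator ... */
    return X86EMUL_RETRY;
}

/* Stand-in for the hvmemul_do_io() hunk: reset io_state either when no
 * retry is needed or when the domain is shutting down. */
static int do_io(const struct domain *d, struct vcpu_io *vio)
{
    int rc = send_assist_req(d);

    if ( rc != X86EMUL_RETRY || d->is_shutting_down )
        vio->io_state = HVMIO_none;

    return rc;
}

int main(void)
{
    struct domain d = { .is_shutting_down = true };
    struct vcpu_io vio = { .io_state = HVMIO_awaiting_completion };
    int rc = do_io(&d, &vio);

    printf("rc=%s io_state=%s\n",
           rc == X86EMUL_RETRY ? "RETRY" : "OKAY",
           vio.io_state == HVMIO_none ? "none" : "awaiting");
    return 0;
}

In this simplified run the domain is shutting down, so do_io() still
returns RETRY while io_state is reset to HVMIO_none, mirroring the commit
message: the emulation state is cleared, yet the instruction remains
subject to retry instead of being reported as completed.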