Re: [Xen-devel] [PATCH V2 2/2] x86/vm_event: fix race between __context_switch() and vm_event_resume()
>>> On 03.05.17 at 11:10, <rcojocaru@xxxxxxxxxxxxxxx> wrote:
> The introspection agent can reply to a vm_event faster than
> vmx_vmexit_handler() can complete in some cases, where it is then
> not safe for vm_event_set_registers() to modify v->arch.user_regs.
> In the test scenario, we were stepping over an INT3 breakpoint by
> setting RIP += 1. The quick reply tended to complete before the VCPU
> triggering the introspection event had properly paused and been
> descheduled. If the reply occurs before __context_switch() happens,
> __context_switch() clobbers the reply by overwriting
> v->arch.user_regs from the stack. If the reply occurs after
> __context_switch(), we don't pass through __context_switch() when
> transitioning to idle.

This last sentence still looks to be wrong (and even self-contradictory).
The reply can't occur after __context_switch() if we don't make it there
in the first place. How about "If we don't pass through __context_switch()
(due to switching to the idle vCPU), reply data wouldn't be picked up when
switching back straight to the original vCPU"?

> This patch ensures that vm_event_resume() code only sets per-VCPU
> data to be used for the actual setting of registers later in
> hvm_do_resume() (similar to the model used to control setting of CRs
> and MSRs).
>
> The patch additionally removes the sync_vcpu_execstate(v) call from
> vm_event_resume(), which is no longer necessary, which removes the
> associated broadcast TLB flush (read: performance improvement).
>
> Signed-off-by: Razvan Cojocaru <rcojocaru@xxxxxxxxxxxxxxx>
> Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>

Code changes themselves
Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>

Jan
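
[Editorial note: for illustration, below is a minimal, self-contained sketch of
the "record per-vCPU data in vm_event_resume(), apply it later in
hvm_do_resume()" model the patch description refers to. The type, field and
function names (pending_regs, regs_valid, do_resume_apply_regs, ...) are
assumptions made for the sketch, not the actual Xen identifiers.]

/*
 * Sketch only -- illustrative names, not the real Xen implementation.
 * The idea: the vm_event reply handler must not write the live register
 * state directly (a concurrent __context_switch() may overwrite it from
 * the stack), so it merely records the requested values; the target vCPU
 * applies them itself on its way back into the guest.
 */
#include <stdbool.h>
#include <stdint.h>

struct user_regs {
    uint64_t rip;
    uint64_t rsp;
    /* ... remaining general purpose registers ... */
};

struct vcpu {
    struct user_regs user_regs;    /* live state, clobbered by context switches */
    struct user_regs pending_regs; /* values requested by the vm_event reply */
    bool regs_valid;               /* pending_regs holds data to apply */
};

/*
 * Runs when the introspection agent's reply arrives, possibly while the
 * target vCPU is still being descheduled; only record the request here.
 */
static void vm_event_resume_set_regs(struct vcpu *v, const struct user_regs *r)
{
    v->pending_regs = *r;
    v->regs_valid = true;
}

/*
 * Runs on the target vCPU itself, late on the path back into the guest
 * (the role hvm_do_resume() plays in the patch).  No context switch can
 * overwrite user_regs behind our back at this point, so applying is safe.
 */
static void do_resume_apply_regs(struct vcpu *v)
{
    if ( v->regs_valid )
    {
        v->user_regs = v->pending_regs;
        v->regs_valid = false;
    }
}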