Re: [Xen-devel] [PATCH] x86/vm_event: fix race between vmx_vmexit_handler() and vm_event_resume()
>>> On 27.04.17 at 19:31, <andrew.cooper3@xxxxxxxxxx> wrote:
> On 27/04/17 09:52, Jan Beulich wrote:
>>>>> On 27.04.17 at 10:34, <andrew.cooper3@xxxxxxxxxx> wrote:
>>> On 27/04/2017 09:01, Jan Beulich wrote:
>>>>>>> On 27.04.17 at 09:22, <rcojocaru@xxxxxxxxxxxxxxx> wrote:
>>>>> --- a/xen/common/vm_event.c
>>>>> +++ b/xen/common/vm_event.c
>>>>> @@ -357,6 +357,13 @@ void vm_event_resume(struct domain *d, struct vm_event_domain *ved)
>>>>>  {
>>>>>      vm_event_response_t rsp;
>>>>>
>>>>> +    /*
>>>>> +     * vm_event_resume() runs either from XEN_DOMCTL_VM_EVENT_OP_*, or
>>>>> +     * EVTCHN_send from the introspection consumer. Both contexts are
>>>>> +     * guaranteed not to be the subject of vm_event responses.
>>>>> +     */
>>>>> +    ASSERT(d != current->domain);
>>>> What is this meant to guard against?
>>> Documentation, as much as anything else. It took a long time to
>>> untangle the contexts involved here; I'm not convinced the logic is safe
>>> to run in context, and it certainly doesn't need to.
>> Well, as said - I think it is too strict now: You only need the vCPU
>> not be current afaict, and it really matters here which of the two
>> "current"-s you actually mean (and it looks to me as if you mean
>> the other one, guaranteeing register state to no longer be on the
>> stack).
>
> We don't know the vcpu at this point; that information comes out of the
> replies on the ring.
>
> We also may process multiple replies in this one loop. In the worst
> case, we process one reply for every vcpu in d.
>
> If the ASSERT() were to be deferred until the middle of the request
> loop, we could ASSERT(v != current) for every vcpu in d, but that is
> identical to this single assertion.

No, it isn't. It would only be if you iterated over all vCPU-s of
that domain (which as you say may or may not happen).

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel
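For readers following the thread, below is a minimal, hypothetical sketch of the two placements being debated: the single up-front ASSERT(d != current->domain) from the posted patch, versus a per-vCPU ASSERT(v != current) deferred into the response-processing loop. The loop shape, the vm_event_get_response() call and the vcpu_id validation are simplified assumptions for illustration only, not a verbatim copy of xen/common/vm_event.c.

/*
 * Hypothetical sketch only -- simplified from the structure discussed
 * in the thread, not the actual xen/common/vm_event.c implementation.
 */
void vm_event_resume(struct domain *d, struct vm_event_domain *ved)
{
    vm_event_response_t rsp;

    /*
     * Placement 1 (the posted patch): one up-front check that the
     * domain being serviced is not the domain we are running in.
     * This covers every vCPU of d, whether or not it responded.
     */
    ASSERT(d != current->domain);

    /* Pull all queued responses off the ring. */
    while ( vm_event_get_response(d, ved, &rsp) )
    {
        struct vcpu *v;

        /* Responses name the vCPU they apply to; validate the id. */
        if ( rsp.vcpu_id >= d->max_vcpus || !d->vcpu[rsp.vcpu_id] )
            continue;

        v = d->vcpu[rsp.vcpu_id];

        /*
         * Placement 2 (deferred): assert per responding vCPU.  Only
         * vCPUs that actually have a response on the ring are covered,
         * which is why the two placements are not equivalent unless
         * every vCPU of d happens to reply.
         */
        ASSERT(v != current);

        /* ... process rsp, wake v, etc. ... */
    }
}

As Jan notes above, the deferred variant in this sketch silently skips any vCPU that never places a response on the ring, so it is a strictly weaker check than the single domain-wide assertion.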