Re: [Xen-devel] [PATCH v2] x86/hvm: Add per-vcpu evtchn upcalls
> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
> Sent: 07 November 2014 13:02
> To: Paul Durrant
> Cc: xen-devel@xxxxxxxxxxxxx; Keir (Xen.org)
> Subject: RE: [PATCH v2] x86/hvm: Add per-vcpu evtchn upcalls
>
> >>> On 07.11.14 at 13:33, <Paul.Durrant@xxxxxxxxxx> wrote:
> >> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
> >> >>> On 06.11.14 at 16:33, <paul.durrant@xxxxxxxxxx> wrote:
> >> So is there really a need for a per-vCPU vector value (rather than
> >> a single domain wide one)?
> >>
> >
> > I can't stipulate that Windows gives me the same vector on every CPU, so
> > yes.
>
> So Windows has no concept of per-CPU vectors/IRQs?

All I can do is ask for an interrupt with a specific affinity to one or more
CPUs. If I ask for one interrupt bound to all CPUs then I will get the same
vector on as many CPUs as Windows can allocate it on, but that's not
guaranteed to be all the CPUs in the affinity mask I asked for. Also, there
will be a single lock protecting that interrupt, so it's really not useful.
Hence I need to ask for as many interrupts as there are CPUs, each with an
affinity mask specifying the individual CPU. That way I avoid the big lock,
but I cannot guarantee that I will get the same vector for each interrupt I
allocate. Thus I need to be able to set each vcpu to upcall on a potentially
differing vector.

> I'm somewhat
> surprised, since for their Hyper-V drivers in Linux they were specifically
> looking for how to do this under Linux, making me assume that they
> try to keep their driver behavior as similar as possible to their native
> equivalents.
>
> >> > @@ -220,6 +227,8 @@ void hvm_assert_evtchn_irq(struct vcpu *v)
> >> >
> >> >      if ( is_hvm_pv_evtchn_vcpu(v) )
> >> >          vcpu_kick(v);
> >> > +    else if ( v->arch.hvm_vcpu.evtchn_upcall_vector != 0 )
> >> > +        hvm_set_upcall_irq(v);
> >>
> >> The context code above your insertion is clearly not enforcing
> >> vCPU 0 only; the code below this change is.
> >>
> >
> > Yes, the callback via is only allowed to be issued for events bound to
> > vcpu 0, although nothing ensures that it only gets delivered to vcpu 0.
> > I don't know what the historical reason behind that is. The whole point
> > of the new vectors though is that there is one per vcpu, not just one on
> > vcpu 0, so why would I want to enforce vcpu 0 only? It would defeat the
> > entire point of the patch.
>
> I think you misunderstood what I tried to say; I wasn't suggesting
> that you should limit yourself to vCPU 0. Instead I was asking why
> the non-vCPU-0-bound mechanism visible in the patch context
> doesn't suit your needs (in fact I think the answer to this should be
> part of the commit message).
>

Oh, I see. Yes, I can add a comment as to why the 'vector' callback via is
not suitable.

  Paul

> Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel