
Re: [Xen-devel] [PATCH v2] x86/hvm: Add per-vcpu evtchn upcalls



>>> On 07.11.14 at 13:33, <Paul.Durrant@xxxxxxxxxx> wrote:
>> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
>> >>> On 06.11.14 at 16:33, <paul.durrant@xxxxxxxxxx> wrote:
>> So is there really a need for a per-vCPU vector value (rather than
>> a single domain wide one)?
>> 
> 
> I can't stipulate that Windows gives me the same vector on every CPU, so 
> yes.

So Windows has no concept of per-CPU vectors/IRQs? I'm somewhat
surprised, since for their Hyper-V drivers in Linux they were specifically
looking for a way to do exactly this, which made me assume that they try
to keep their driver behavior as close as possible to that of their native
equivalents.
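
(For context: with the interface this series proposes, each vCPU would
register its own upcall vector along the lines of the sketch below. The
op and structure names follow Xen's public HVM op interface for this
feature; the hypercall wrapper and everything else are assumed, so treat
it purely as an illustration.)

    #include <stdint.h>

    /* Argument layout for registering a per-vCPU upcall vector. */
    struct xen_hvm_evtchn_upcall_vector {
        uint32_t vcpu;
        uint8_t  vector;
    };

    /* Illustrative guest-side registration, one call per vCPU.
     * HYPERVISOR_hvm_op() stands in for whatever hypercall wrapper the
     * guest OS provides; error handling is omitted for brevity.
     */
    static int register_upcall_vector(uint32_t vcpu, uint8_t vector)
    {
        struct xen_hvm_evtchn_upcall_vector op = {
            .vcpu   = vcpu,
            .vector = vector,  /* may legitimately differ per CPU on Windows */
        };

        return HYPERVISOR_hvm_op(HVMOP_set_evtchn_upcall_vector, &op);
    }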

>> > @@ -220,6 +227,8 @@ void hvm_assert_evtchn_irq(struct vcpu *v)
>> >
>> >      if ( is_hvm_pv_evtchn_vcpu(v) )
>> >          vcpu_kick(v);
>> > +    else if ( v->arch.hvm_vcpu.evtchn_upcall_vector != 0 )
>> > +        hvm_set_upcall_irq(v);
>> 
>> The context code above your insertion is clearly not enforcing
>> vCPU 0 only; the code below this change is.
>> 
> 
> Yes, the callback via is only allowed to be issued for events bound to
> vcpu 0, although nothing ensures that it only gets delivered to vcpu 0;
> I don't know the historical reason behind that. The whole point of the
> new vectors, though, is that there is one per vcpu, not just one on
> vcpu 0, so why would I want to enforce vcpu 0 only? It would defeat the
> entire point of the patch.
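
(For reference: the per-vCPU path added by the hunk above presumably
injects the registered vector straight through the vCPU's local APIC
instead of going via the domain-wide callback. A minimal sketch of what
hvm_set_upcall_irq() might look like follows; only the helper and field
names come from the patch context, the body is assumed.)

    /* Inject the per-vCPU upcall vector directly via the vCPU's local
     * APIC.  Body assumed for illustration; not quoted from the series.
     */
    static void hvm_set_upcall_irq(struct vcpu *v)
    {
        uint8_t vector = v->arch.hvm_vcpu.evtchn_upcall_vector;

        vlapic_set_irq(vcpu_vlapic(v), vector, 0);
    }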

I think you misunderstood what I tried to say; I wasn't suggesting
that you should limit yourself to vCPU 0. Instead I was asking why
the non-vCPU-0-bound mechanism visible in the patch context
doesn't suit your needs (in fact I think the answer to this should be
part of the commit message).

Jan

