
RE: [Xen-devel] [PATCH] Don't round-robin the callback interrupt



Keir,

  When we connect our platform driver interrupt, Windows is at liberty to 
allocate vectors on as many or as few cpus as it wishes. I've seen cases where 
it will *not* allocate us a vector on vcpu 0, so we cannot force vcpu 0. Older 
frontends assume vcpu 0, but should function correctly as long as the interrupt 
does not move around.
  However, that's not the motivation for this patch. In the Windows code we 
bind event channels only to vcpu 0, because the interrupt is level sensitive 
and so we cannot take callback interrupts on multiple vcpus simultaneously. 
Round-robining is therefore wasteful: it bounces certain data structures 
between caches (assuming a reasonably constant vcpu -> pcpu mapping).
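To illustrate the selection policy the patch describes (always deliver to the lowest vcpu whose local APIC matches, rather than rotating), here is a rough standalone sketch. It is not the actual Xen code; the function name and the bitmask representation are hypothetical, assuming bit N of the mask is set when vcpu N's local APIC matches the interrupt destination:

```c
#include <stdint.h>
#include <strings.h>  /* ffs() */

/*
 * Hypothetical sketch of the patch's policy: given a mask in which
 * bit N is set when vcpu N's local APIC matches the interrupt's
 * destination, always pick the lowest matching vcpu instead of
 * round-robining.  Returns -1 if no vcpu matches.
 */
int lowest_matching_vcpu(uint32_t matching_mask)
{
    if (matching_mask == 0)
        return -1;
    /* ffs() returns the 1-based index of the lowest set bit. */
    return ffs((int)matching_mask) - 1;
}
```

Because the result is stable for a given mask, the interrupt (and the data structures touched in its handler) stays on one vcpu's cache rather than being kicked around.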

  Paul

> -----Original Message-----
> From: Keir Fraser
> Sent: 12 July 2010 17:16
> To: Paul Durrant; xen-devel@xxxxxxxxxxxxxxxxxxx
> Cc: Tim Deegan
> Subject: Re: [Xen-devel] [PATCH] Don't round-robin the callback
> interrupt
> 
> On 12/07/2010 17:05, "Keir Fraser" <keir.fraser@xxxxxxxxxxxxx>
> wrote:
> 
> > On 12/07/2010 16:52, "Paul Durrant" <paul.durrant@xxxxxxxxxx>
> wrote:
> >
> >> Don't round-robin the callback interrupt.
> >>
> >> Arrange that the event channel callback interrupt always goes to
> the
> >> lowest vcpu with a matching local apic. This should, in most
> cases,
> >> be VCPU0 (to which all event channels are bound for HVM guests)
> but
> >> this cannot be guaranteed.
> >>
> >> Signed-off-by: Paul Durrant <paul.durrant@xxxxxxxxxx>
> >> CC: Tim Deegan <tim.deegan@xxxxxxxxxx>
> >
> > PV drivers should be handling callback on CPU != 0. The example
> Linux PV
> > drivers have done that for a very long time indeed. If a
> workaround is
> > needed for broken drivers then we need some way to gate it, and we
> should in
> > that case force delivery to VCPU0, as we do for some timer
> interrupts. The
> > forcing appears to do no harm even if not architecturally correct,
> and after
> > all we would be going to these lengths only because delivery to
> VCPU-not-0
> > *certainly* doesn't work.
> 
> Actually given this is probably XS/XCP Win drivers we're talking
> about,
> aren't they expected to be upgraded when moving to an upgraded host?
> In
> which case the fix should be in the upgraded drivers?
> 
>  -- Keir
> 
> 


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
