
Re: [Xen-devel] [v3 12/15] vmx: posted-interrupt handling when vCPU is blocked




> -----Original Message-----
> From: Andrew Cooper [mailto:andrew.cooper3@xxxxxxxxxx]
> Sent: Thursday, July 02, 2015 9:00 PM
> To: Dario Faggioli
> Cc: Wu, Feng; Tian, Kevin; keir@xxxxxxx; george.dunlap@xxxxxxxxxxxxx;
> xen-devel@xxxxxxxxxxxxx; jbeulich@xxxxxxxx; Zhang, Yang Z
> Subject: Re: [Xen-devel] [v3 12/15] vmx: posted-interrupt handling when vCPU
> is blocked
> 
> On 02/07/15 13:38, Dario Faggioli wrote:
> > On Thu, 2015-07-02 at 13:16 +0100, Andrew Cooper wrote:
> >> On 02/07/15 13:04, Dario Faggioli wrote:
> >>> On Thu, 2015-07-02 at 11:30 +0100, Andrew Cooper wrote:
> >>>> I can't currently decide whether this will be quicker or slower overall,
> >>>> or (most likely) it will even out to equal in the general case.
> >>>>
> >>> Well, given the thing works as you (two) just described, I think
> >>> draining the list is the only thing we can do.
> >>>
> >>> In fact, AFAICT, since we can't know for what vcpu a particular
> >>> notification is intended, we don't have alternatives to waking them all,
> >>> do we?
> >> Perhaps you misunderstand.
> >>
> > I'm quite sure I was, though I think I'm getting it now.
> >
> >> Every single vcpu has a PI descriptor which is shared memory with
> >> hardware.
> >>
> > Right.
> >
> >> An NV is delivered strictly when hardware atomically changes desc.on from
> >> 0 to 1, i.e. the first time that an outstanding notification arrives.
> >> (iirc, desc.on is later cleared by hardware when the vcpu is scheduled
> >> and the vector(s) actually injected.)
> >>
> >> Part of the scheduling modifications alters when a vcpu is eligible to
> >> have NV's delivered on its behalf.  non-scheduled vcpus get NV's while
> >> scheduled vcpus have direct injection instead.
> >>
> > Blocked vcpus, AFAICT. But that's not relevant here.
> >
> >> Therefore, in the case that an NV arrives, we know for certain that one
> >> of the NV-eligible vcpus has had desc.on set by hardware, and we can
> >> uniquely identify it by searching for the vcpu for which desc.on is set.
> >>
> > Yeah, but we can have more than one of them. You said "I can't currently
> > decide whether this will be quicker or slower", which I read like you
> > were suggesting that not draining the queue was a plausible alternative,
> > while I now think it's not.
> >
> > Perhaps you were not meaning anything like that, so it was not necessary
> > for me to point this out, in which case, sorry for the noise. :-)
> 
> To be clear (assuming that a kicked vcpu is removed from the list),
> both options of kicking exactly one vcpu or kicking all vcpus will
> function.  The end result after all processing of NVs will be that every
> vcpu with desc.on set will be kicked exactly once.
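
To make sure we mean the same thing by desc.on, here is a minimal,
standalone sketch of the descriptor and the ON-bit check described above.
The struct mirrors the spec's posted-interrupt descriptor layout, but the
field and helper names are only illustrative; the real code works on the
live per-vCPU descriptor with Xen's own atomics.

    #include <stdbool.h>
    #include <stdint.h>

    /* Illustrative posted-interrupt descriptor (one per vCPU, shared with hw). */
    struct pi_desc {
        uint32_t pir[8];              /* pending vectors, one bit per vector   */
        union {
            struct {                  /* assumes x86 LSB-first bitfield layout */
                uint16_t on   : 1;    /* outstanding notification              */
                uint16_t sn   : 1;    /* suppress notification                 */
                uint16_t rsvd : 14;
                uint8_t  nv;          /* notification vector                   */
                uint8_t  rsvd2;
                uint32_t ndst;        /* notification destination              */
            };
            uint64_t control;
        };
        uint32_t rsvd3[6];
    } __attribute__((aligned(64)));

    /*
     * Hardware sets ON atomically when it posts a vector and sends an NV
     * only on the 0 -> 1 transition; software clears it again when the
     * pending vectors are synced into the vCPU.  So at NV time, "ON set"
     * identifies exactly the vCPUs that still need a kick.
     */
    static bool pi_test_on(const struct pi_desc *desc)
    {
        return __atomic_load_n(&desc->control, __ATOMIC_ACQUIRE) & 1;
    }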

We need to remove the kicked vCPU from the list; this will be included
in the next version.
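
Concretely, something along these lines; the list and field names below
are just placeholders for illustration, and in the real code the
unlink/wakeup would use Xen's own list primitives and scheduler wakeup
(vcpu_unblock()):

    #include <stddef.h>

    struct pi_desc;                       /* as sketched above                */

    /* Minimal circular doubly-linked list node, standing in for list_head. */
    struct list_node {
        struct list_node *prev, *next;
    };

    static void list_unlink(struct list_node *n)
    {
        n->prev->next = n->next;
        n->next->prev = n->prev;
        n->prev = n->next = n;            /* harmless to unlink again         */
    }

    /* Placeholder vCPU with only the fields this discussion cares about. */
    struct vcpu {
        struct list_node pi_blocked_link; /* entry on the blocking list       */
        struct pi_desc  *pi_desc;         /* shared-with-hardware descriptor  */
    };

    static void pi_kick(struct vcpu *v)
    {
        list_unlink(&v->pi_blocked_link); /* later NVs won't rescan this vCPU */
        /* ... vcpu_unblock(v) here in the real code, to wake it up ...       */
    }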

Thanks,
Feng

> 
> I just was concerned about the O() of searching the list on a subsequent
> NV, knowing that we most likely took the relevant entry off the list on
> the previous NV.
> 
> >
> >> In the case of stacked NVs, we cannot associate which specific vcpu
> >> caused which NV, but we know that we will get one NV per vcpu needing
> >> kicking.
> >>
> > Exactly, and that's what I'm talking about, and why I'm saying that
> > waking everyone is the only solution. The bottom line is that, even
> > if this is deemed too slow, we don't have the option of waking only
> > one vcpu at each NV, as we wouldn't know who to wake, and hence we'd
> > need to make things faster in some other way.
> 
> Ah - I see your point now.
> 
> Yes - kicking exactly one vcpu per NV could result in a different vcpu
> being deferred based on the interrupt activity of other vcpus and its
> position in the list.
> 
> In which case, we should eagerly kick all vcpus.
> 
> ~Andrew
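
For completeness, the eager variant then looks roughly like the loop
below, reusing the illustrative pi_test_on()/pi_kick() helpers from the
sketches earlier in this mail: walk the whole blocking list on each NV,
kick and unlink every vCPU whose ON bit is set, and leave the others
where they are.  (The real handler would of course hold the list lock
and use Xen's list_for_each_entry_safe().)

    /* Drain the blocking list on an NV: kick everyone with ON set. */
    static void pi_wakeup_all(struct list_node *blocked_list)
    {
        struct list_node *n, *next;

        for ( n = blocked_list->next; n != blocked_list; n = next )
        {
            struct vcpu *v = (struct vcpu *)
                ((char *)n - offsetof(struct vcpu, pi_blocked_link));

            next = n->next;               /* pi_kick() unlinks the node       */
            if ( pi_test_on(v->pi_desc) )
                pi_kick(v);
        }
    }
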
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel