
Re: [Xen-devel] [PATCH RFC V6 0/11] Paravirtualized ticketlocks



On Sat, Mar 31, 2012 at 09:37:45AM +0530, Srivatsa Vaddagiri wrote:
> * Thomas Gleixner <tglx@xxxxxxxxxxxxx> [2012-03-31 00:07:58]:
> 
> > I know that Peter is going to go berserk on me, but if we are running
> > a paravirt guest, then it's simple to provide a mechanism which allows
> > the host (aka hypervisor) to check whether the guest is inside a
> > lock-held section just by looking at some global state.
> > 
> > So if a guest exits due to an external event, it's easy to inspect the
> > state of that guest and avoid scheduling it away when it was interrupted
> > in a spinlock-held section. That guest/host shared state needs to be
> > modified to tell the guest to invoke an exit once the last nested
> > lock has been released.
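
If I read that right, the shared state could be as simple as the sketch
below (all names made up, not an interface that exists today). The host
side, on an exit caused by an external event, would look at lock_depth
and, if it is non-zero, set deschedule_pending and let the vcpu keep
running instead of scheduling it away:

#include <stdint.h>

/*
 * Hypothetical per-vcpu state shared between guest and hypervisor -
 * purely to illustrate the idea, not what any posted patch implements.
 */
struct vcpu_lock_state {
        uint32_t lock_depth;            /* nesting count of held spinlocks */
        uint32_t deschedule_pending;    /* set by host: exit once
                                           lock_depth drops to zero */
};

static struct vcpu_lock_state this_vcpu;  /* registered with the host at boot */

static void hypercall_yield(void)
{
        /* stand-in for a real "safe to deschedule me now" hypercall */
}

/* hooked into the guest's spin_lock()/spin_unlock() paths */
static void lock_section_enter(void)
{
        this_vcpu.lock_depth++;
}

static void lock_section_exit(void)
{
        if (--this_vcpu.lock_depth == 0 && this_vcpu.deschedule_pending) {
                this_vcpu.deschedule_pending = 0;
                hypercall_yield();
        }
}
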
> 
> I had attempted something like that long back:
> 
> http://lkml.org/lkml/2010/6/3/4
> 
> The issue is with ticketlocks though. VCPUs could go into a spin w/o
> a lock being held by anybody. Say VCPUs 1-99 try to grab a lock in
> that order (on a host with one cpu). VCPU1 wins (after VCPU0 releases it)
> and releases the lock. VCPU2 is next eligible to take the lock. If
> that is not scheduled early enough by the host, then the remaining vcpus
> would keep spinning (even though the lock is technically not held by
> anybody) w/o making forward progress.
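
(To spell out why ticketlocks behave that way: the strict FIFO hand-off
means a waiter can only proceed when its own ticket comes up, so a
preempted next-in-line vcpu blocks everyone behind it even though nobody
actually holds the lock. A bare-bones sketch of the logic, nothing like
the real asm fastpath:)

#include <stdint.h>

struct ticketlock {
        volatile uint16_t head;         /* ticket currently being served */
        volatile uint16_t tail;         /* next ticket to hand out */
};

static void ticket_lock(struct ticketlock *lk)
{
        /* atomically take the next ticket */
        uint16_t me = __sync_fetch_and_add(&lk->tail, 1);

        /*
         * Strict FIFO: we may only proceed once head reaches *our*
         * ticket. If the vcpu owning the next ticket is preempted,
         * the lock is free yet every other waiter keeps spinning here.
         */
        while (lk->head != me)
                ;                       /* cpu_relax() in the real thing */
}

static void ticket_unlock(struct ticketlock *lk)
{
        lk->head++;                     /* hand over to the next ticket */
}
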
> 
> In that situation, what we really need is for the guest to hint to the
> host scheduler to schedule VCPU2 early (via yield_to or something similar).
> 
> The current pv-spinlock patches however do not track which vcpu is
> spinning at the head of the ticketlock queue. I suppose we can consider
> that optimization in the future and see how much benefit it provides
> (over the plain yield/sleep the way it's done now).

Right. I think Jeremy played around with this some time ago?
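
Just to check that we mean the same thing by that optimization: I
picture each spinning vcpu publishing which lock and ticket it is
waiting on, so the unlock path (or the host) can do a directed yield
to the one vcpu that can actually make progress, rather than a blind
yield. Roughly like this - all names made up, not what the current
patches do:

#include <stdint.h>

#define MAX_VCPUS 128

/* hypothetical record each vcpu publishes while it spins */
struct waiting_on {
        void            *lock;          /* lock we are queued on */
        uint16_t        ticket;         /* our ticket number */
        int             active;
};

static struct waiting_on waiters[MAX_VCPUS];    /* indexed by vcpu id */

static void hypercall_yield_to(int vcpu)
{
        /* stand-in for a directed yield, in the spirit of yield_to() */
        (void)vcpu;
}

/*
 * Called by the unlock path after bumping head to new_head: boost the
 * vcpu whose ticket is now being served instead of yielding blindly.
 */
static void kick_head_waiter(void *lock, uint16_t new_head)
{
        int v;

        for (v = 0; v < MAX_VCPUS; v++) {
                if (waiters[v].active &&
                    waiters[v].lock == lock &&
                    waiters[v].ticket == new_head) {
                        hypercall_yield_to(v);
                        break;
                }
        }
}
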
> 
> Do you see any issues if we take in what we have today and address the
> finer-grained optimization as a next step?

I think that is the proper course - these patches show that on
baremetal we don't incur performance regressions and that in the
virtualization case we benefit greatly. Since spinlocks are among the
basic building blocks of the kernel, taking it slow and just adding
this set of patches for v3.5 is a good idea - and then building on
top of that for further refinement.

> 
> - vatsa 

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

