
Re: [Xen-devel] lock in vhpet

The p2m lock in __get_gfn_type_access() is the culprit. Here is the profiling
data collected over 10 seconds:

(XEN) p2m_lock 1 lock:
(XEN)   lock:      190733(00000000:14CE5726), block:       

This data was collected while a win8 guest (16 vCPUs) was idle. The 16 vCPUs
spent a combined 30 seconds blocked during the 10-second profiling window,
which means roughly 18% of CPU cycles (30 s / (16 vCPUs x 10 s)) were spent
waiting for the p2m lock. And that is for an idle guest; the impact is even
worse when a workload is running inside the guest.
I noticed that this locking was added by changeset 24770; before that, we did
not take the p2m lock in __get_gfn_type_access(). So is this lock really
necessary?
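
For reference, the shape of the path I am looking at is roughly the following
(a minimal sketch of the post-24770 code as I read it, not the exact source;
the signature is simplified):

    /* Sketch: after cs 24770, every lookup on this path takes the
     * per-domain p2m lock, so concurrent HPET reads from all 16
     * vCPUs serialize here even though they only read the p2m. */
    static mfn_t __get_gfn_type_access(struct p2m_domain *p2m,
                                       unsigned long gfn, p2m_type_t *t,
                                       p2m_access_t *a, p2m_query_t q)
    {
        mfn_t mfn;

        p2m_lock(p2m);                            /* added by cs 24770 */
        mfn = p2m->get_entry(p2m, gfn, t, a, q);  /* the actual lookup */
        p2m_unlock(p2m);

        return mfn;
    }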

Best regards

> -----Original Message-----
> From: Keir Fraser [mailto:keir.xen@xxxxxxxxx] On Behalf Of Keir Fraser
> Sent: Thursday, April 19, 2012 4:47 PM
> To: Tim Deegan; Zhang, Yang Z
> Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
> Subject: Re: [Xen-devel] lock in vhpet
> On 19/04/2012 09:27, "Tim Deegan" <tim@xxxxxxx> wrote:
> > At 05:19 +0000 on 19 Apr (1334812779), Zhang, Yang Z wrote:
> >> There is no problem with this patch; it works well. But it cannot
> >> fix the win8 issue. It seems there are some other issues with hpet.
> >> I will look into it.  Thanks for your quick patch.
> >
> > The lock in hvm_get_guest_time() will still be serializing the hpet
> > reads.  But since it needs to update a shared variable, that will need
> > to haul cachelines around anyway.
> Yes, that's true. You could try the attached hacky patch out of interest,
> to see what that lock is costing you in your scenario. But if we want
> consistent monotonically-increasing guest time, I suspect we can't get rid
> of the lock, so that's going to limit our scalability unavoidably. Shame.
>  -- Keir
> > Tim.
> >
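
To make concrete what Tim and Keir are describing: guest time has to be
monotonic as seen by every vCPU, so each read must compare against and update
a shared "last time" value, and that is what forces the global lock. A
minimal sketch of the pattern (assumed shape, not the actual Xen code; the
lock name, field names, and read_time_source() are illustrative):

    /* Sketch: consistent monotonic guest time.  All vCPUs funnel
     * through one lock and write one shared cacheline per read. */
    static spinlock_t pl_time_lock;
    static uint64_t last_guest_time;

    uint64_t hvm_get_guest_time(struct vcpu *v)
    {
        uint64_t now;

        spin_lock(&pl_time_lock);
        now = read_time_source(v);       /* e.g. system time + offset */
        if ( now > last_guest_time )
            last_guest_time = now;       /* advance shared state      */
        else
            now = last_guest_time;       /* never step backwards      */
        spin_unlock(&pl_time_lock);

        return now;
    }

Even if the lock itself were removed, every reader would still have to update
the shared last_guest_time, hauling that cacheline between CPUs, which is
Tim's point above.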
