
Re: [Xen-devel] lock in vhpet

At 08:26 -0700 on 23 Apr (1335169568), Andres Lagar-Cavilla wrote:
> > At 07:36 +0000 on 23 Apr (1335166577), Zhang, Yang Z wrote:
> >> The p2m lock in __get_gfn_type_access() is the culprit. Here is the
> >> profiling data over 10 seconds:
> >>
> >> (XEN) p2m_lock 1 lock:
> >> (XEN)   lock:      190733(00000000:14CE5726), block:
> >> 67159(00000007:6AAA53F3)
> >>
> >> This data was collected while a win8 guest (16 vcpus) was idle. Over
> >> the 10-second profiling window, the 16 VCPUs spent a combined 30
> >> seconds blocked, i.e. 30 / (16 x 10) = ~18% of CPU cycles spent
> >> waiting for the p2m lock. And that is for an idle guest; the impact
> >> is more serious when running a workload inside the guest. I noticed
> >> that this change was added by cs 24770. Before it, we did not take
> >> the p2m lock in __get_gfn_type_access. So is this lock really
> >> necessary?
> >
> > Ugh; that certainly is a regression.  We used to be lock-free on p2m
> > lookups and losing that will be bad for perf in lots of ways.  IIRC the
> > original aim was to use fine-grained per-page locks for this -- there
> > should be no need to hold a per-domain lock during a normal read.
> > Andres, what happened to that code?
> The fine-grained p2m locking code is stashed somewhere and untested.
> Obviously not meant for 4.2. I don't think it'll be useful here: all vcpus
> are hitting the same gfn for the hpet mmio address.

We'll have to do _something_ for 4.2 if it's introducing an 18% CPU
overhead in an otherwise idle VM.

In any case I think this means I probably shouldn't take the patch that
turns on this locking for shadow VMs.  They do a lot more p2m lookups. 

> The other source of contention might come from hvmemul_rep_movs, which
> holds the p2m lock for the duration of the mmio operation. I can optimize
> that one using the get_gfn/get_page/put_gfn pattern mentioned above.

But wouldn't that be unsafe?  What if the p2m changes during the
operation?  Or, conversely, could you replace all uses of the lock in
p2m lookups with get_page() on the result and still get what you need?


Xen-devel mailing list