Re: [Xen-devel] lock in vhpet
>> -----Original Message-----
>> From: Andres Lagar-Cavilla [mailto:andres@xxxxxxxxxxxxxxxx]
>> Sent: Wednesday, April 25, 2012 10:42 AM
>> To: Zhang, Yang Z
>> Cc: Tim Deegan; xen-devel@xxxxxxxxxxxxxxxxxxx; Keir Fraser
>> Subject: RE: [Xen-devel] lock in vhpet
>>
>> >> -----Original Message-----
>> >> From: Andres Lagar-Cavilla [mailto:andres@xxxxxxxxxxxxxxxx]
>> >> Sent: Wednesday, April 25, 2012 10:31 AM
>> >> To: Zhang, Yang Z
>> >> Cc: Tim Deegan; xen-devel@xxxxxxxxxxxxxxxxxxx; Keir Fraser
>> >> Subject: RE: [Xen-devel] lock in vhpet
>> >>
>> >> >
>> >> >> -----Original Message-----
>> >> >> From: Andres Lagar-Cavilla [mailto:andres@xxxxxxxxxxxxxxxx]
>> >> >> Sent: Wednesday, April 25, 2012 9:40 AM
>> >> >> To: Zhang, Yang Z
>> >> >> Cc: Tim Deegan; xen-devel@xxxxxxxxxxxxxxxxxxx; Keir Fraser
>> >> >> Subject: RE: [Xen-devel] lock in vhpet
>> >> >>
>> >> >> >> -----Original Message-----
>> >> >> >> From: Tim Deegan [mailto:tim@xxxxxxx]
>> >> >> >> Sent: Tuesday, April 24, 2012 5:17 PM
>> >> >> >> To: Zhang, Yang Z
>> >> >> >> Cc: andres@xxxxxxxxxxxxxxxx; xen-devel@xxxxxxxxxxxxxxxxxxx;
>> >> >> >> Keir Fraser
>> >> >> >> Subject: Re: [Xen-devel] lock in vhpet
>> >> >> >>
>> >> >> >> At 08:58 +0000 on 24 Apr (1335257909), Zhang, Yang Z wrote:
>> >> >> >> > > -----Original Message-----
>> >> >> >> > > From: Andres Lagar-Cavilla [mailto:andres@xxxxxxxxxxxxxxxx]
>> >> >> >> > > Sent: Tuesday, April 24, 2012 1:19 AM
>> >> >> >> > >
>> >> >> >> > > Let me know if any of this helps
>> >> >> >> > No, it does not work.
>> >> >> >>
>> >> >> >> Do you mean that it doesn't help with the CPU overhead, or that
>> >> >> >> it's broken in some other way?
>> >> >> >>
>> >> >> > It cannot help with the CPU overhead.
>> >> >>
>> >> >> Yang, is there any further information you can provide? A rough
>> >> >> idea of where vcpus are spending time spinning for the p2m lock
>> >> >> would be tremendously useful.
>> >> >>
>> >> > I am doing further investigation.
>> >> > I hope to get more useful information.
>> >>
>> >> Thanks, looking forward to that.
>> >>
>> >> > But actually, the first cs that introduced this issue is 24770. When
>> >> > win8 boots with hpet enabled, it uses hpet as the time source, and
>> >> > there are lots of hpet accesses and EPT violations. In the EPT
>> >> > violation handler, it calls get_gfn_type_access to get the mfn. The
>> >> > cs 24770 introduces the gfn_lock for p2m lookups, and then the issue
>> >> > happens. After I removed the gfn_lock, the issue went away. But in
>> >> > latest xen, even if I remove this lock, it still shows high cpu
>> >> > utilization.
>> >> >
>> >>
>> >> It would seem then that even the briefest lock-protected critical
>> >> section would cause this? In the mmio case, the p2m lock taken in the
>> >> hap fault handler is held during the actual lookup, and for a couple
>> >> of branch instructions afterwards.
>> >>
>> >> In latest Xen, with the lock removed from get_gfn, which lock is the
>> >> time spent on?
>> > Still the p2m_lock.
>>
>> How are you removing the lock from get_gfn?
>>
>> The p2m lock is taken on a few specific code paths outside of get_gfn
>> (change type of an entry, add a new p2m entry, setup and teardown), and
>> I'm surprised any of those code paths is being used by the hpet mmio
>> handler.
>
> Sorry, what I said may not have been accurate. In latest xen, I use a
> workaround to skip calling get_gfn_type_access in
> hvm_hap_nested_page_fault(). So the p2m_lock still exists.
> Now, I have found that the contention on p2m_lock is coming from
> __hvm_copy. In the mmio handler, some code paths call it
> (hvm_fetch_from_guest_virt_nofault(), hvm_copy_from_guest_virt()). When
> lots of mmio accesses happen, the contention is very obvious.

Thanks. Can you please try this:

http://lists.xen.org/archives/html/xen-devel/2012-04/msg01861.html

in conjunction with the patch below?
Andres

diff -r 7a7443e80b99 xen/arch/x86/hvm/hvm.c
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -2383,6 +2383,8 @@ static enum hvm_copy_result __hvm_copy(
 
     while ( todo > 0 )
     {
+        struct page_info *pg;
+
         count = min_t(int, PAGE_SIZE - (addr & ~PAGE_MASK), todo);
 
         if ( flags & HVMCOPY_virt )
@@ -2427,7 +2429,11 @@ static enum hvm_copy_result __hvm_copy(
                 put_gfn(curr->domain, gfn);
                 return HVMCOPY_bad_gfn_to_mfn;
             }
+
         ASSERT(mfn_valid(mfn));
+        pg = mfn_to_page(mfn);
+        ASSERT(get_page(pg, curr->domain));
+        put_gfn(curr->domain, gfn);
 
         p = (char *)map_domain_page(mfn) + (addr & ~PAGE_MASK);
 
@@ -2457,7 +2463,7 @@ static enum hvm_copy_result __hvm_copy(
         addr += count;
         buf += count;
         todo -= count;
-        put_gfn(curr->domain, gfn);
+        put_page(pg);
     }
 
     return HVMCOPY_okay;

>
> yang
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@xxxxxxxxxxxxx
> http://lists.xen.org/xen-devel
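[Editor's note: a minimal standalone sketch, not Xen code, of the pattern the patch above applies: take a reference to the page while the gfn lock is held, drop the lock before the comparatively slow copy, and release the reference afterwards. The struct fields and function names here are illustrative stand-ins for the real p2m lock, get_page() and put_page().]

```c
#include <pthread.h>
#include <stdatomic.h>
#include <string.h>

struct page_info {
    pthread_mutex_t gfn_lock;   /* stands in for the per-gfn p2m lock */
    atomic_int count_info;      /* stands in for the page refcount    */
    char data[16];
};

/* Before the patch: the gfn lock is held across the whole copy,
 * so every other vcpu faulting on the same gfn spins meanwhile. */
static void copy_locked(struct page_info *pg, char *buf)
{
    pthread_mutex_lock(&pg->gfn_lock);
    memcpy(buf, pg->data, sizeof pg->data);   /* others spin here */
    pthread_mutex_unlock(&pg->gfn_lock);
}

/* After the patch: the lock only covers the lookup plus taking a
 * reference; the copy itself runs lock-free, and the refcount keeps
 * the page from disappearing underneath it. */
static void copy_refcounted(struct page_info *pg, char *buf)
{
    pthread_mutex_lock(&pg->gfn_lock);
    atomic_fetch_add(&pg->count_info, 1);     /* "get_page()" under lock */
    pthread_mutex_unlock(&pg->gfn_lock);      /* early "put_gfn()"       */

    memcpy(buf, pg->data, sizeof pg->data);   /* copy runs unlocked */

    atomic_fetch_sub(&pg->count_info, 1);     /* "put_page()" */
}
```

The trade is a pair of atomic refcount operations per page in exchange for shrinking the lock hold time from the whole copy down to the lookup itself.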
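[Editor's note: an illustrative model, not Xen source, of the contention diagnosed in the thread: every vcpu that faults on the emulated HPET mmio page takes the same per-gfn lock in the lookup path, so the faults fully serialize. Thread counts and all names are made up for the demonstration.]

```c
#include <pthread.h>

#define VCPUS           4
#define FAULTS_PER_VCPU 10000

/* Stands in for the gfn/p2m lock that cs 24770 added to lookups. */
static pthread_mutex_t gfn_lock = PTHREAD_MUTEX_INITIALIZER;
static long hpet_reads;  /* shared state the lock protects */

/* One simulated vcpu hammering the HPET mmio page: every "fault"
 * takes the lock for the lookup and releases it afterwards. */
static void *vcpu_loop(void *arg)
{
    (void)arg;
    for ( int i = 0; i < FAULTS_PER_VCPU; i++ )
    {
        pthread_mutex_lock(&gfn_lock);    /* "get_gfn_type_access()" */
        hpet_reads++;                     /* emulate the mmio read   */
        pthread_mutex_unlock(&gfn_lock);  /* "put_gfn()"             */
    }
    return NULL;
}

/* Runs VCPUS concurrent "vcpus" and returns the total read count;
 * correctness is preserved, but all the reads executed one at a time. */
static long run_vcpus(void)
{
    pthread_t t[VCPUS];
    for ( int i = 0; i < VCPUS; i++ )
        pthread_create(&t[i], NULL, vcpu_loop, NULL);
    for ( int i = 0; i < VCPUS; i++ )
        pthread_join(t[i], NULL);
    return hpet_reads;
}
```

With a guest like win8 polling the HPET from several vcpus at once, this serialization is what shows up as the high cpu utilization reported above.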