Re: [Xen-devel] [PATCH 12 of 18] x86/mm: Make page_lock/unlock() in arch/x86/mm.c externally callable
On 09/12/2011 03:01, "Andres Lagar-Cavilla" <andres@xxxxxxxxxxxxxxxx> wrote:

>> On 08/12/2011 22:38, "Tim Deegan" <tim@xxxxxxx> wrote:
>>
>>> At 02:47 -0500 on 08 Dec (1323312447), Andres Lagar-Cavilla wrote:
>>>> This is necessary for a new consumer of page_lock/unlock to follow in
>>>> the series.
>>>>
>>>> Signed-off-by: Andres Lagar-Cavilla <andres@xxxxxxxxxxxxxxxx>
>>>
>>> Nak, I'm afraid.
>>>
>>> These were OK as local functions, but if they're going to be made
>>> generally visible, they need clear comments describing what this
>>> locking protects and what the discipline is for avoiding deadlocks.
>>>
>>> Perhaps Jan or Keir can supply appropriate words. The locking was
>>> introduced in this cset:
>>
>> It's Jan's work originally, but the basic intention of page_lock is to
>> serialise pte updates. To aid with this, a page's type cannot change
>> while its lock is held.
>
> That's definitely a property I want to leverage.
>
>> No lock nests inside a page lock (not even other page locks), so there
>> is no deadlock risk.
>
> There's no way to avoid nesting when sharing two pages, but I always
> make sure I lock in ascending order.

The fact that there is currently no nesting gives you some freedom.
Ordered locking of other page locks is obviously going to be safe. So is
taking a page lock inside any other lock. Taking some other lock inside a
page lock is all that needs care, but there probably aren't many locks
that currently can have page locks nested inside them (and hence you
would risk deadlock by nesting the other way).

 -- Keir

> Thanks,
> Andres
>
>>> changeset:   17846:09dd5999401b
>>> user:        Keir Fraser <keir.fraser@xxxxxxxxxx>
>>> date:        Thu Jun 12 18:14:00 2008 +0100
>>> files:       xen/arch/x86/domain.c xen/arch/x86/domain_build.c
>>>              xen/arch/x86/mm.c
>>> description:
>>> x86: remove use of per-domain lock from page table entry handling
>>>
>>>     This change results in a 5% performance improvement for kernel
>>>     builds on dual-socket quad-core systems (which is what I used for
>>>     reference for both 32- and 64-bit). Along with that, the amount of
>>>     time reported as spent in the kernel gets reduced by almost 25%
>>>     (the fraction of time spent in the kernel is generally reported
>>>     significantly higher under Xen than with a native kernel).
>>>
>>>     Signed-off-by: Jan Beulich <jbeulich@xxxxxxxxxx>
>>>     Signed-off-by: Keir Fraser <keir.fraser@xxxxxxxxxx>
>>>
>>> Tim.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
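
For illustration, here is a minimal sketch of the ascending-order
discipline discussed above, as a consumer taking two page locks at once
(e.g. when sharing two pages) might apply it. This is not code from the
series: the helper lock_two_pages() is hypothetical, and the sketch
assumes only the page_lock()/page_unlock() pair from xen/arch/x86/mm.c
(where page_lock() returns nonzero on success) plus frame-table pointer
order as the global ordering key.

/*
 * Illustrative sketch only -- not code from this series.  Shows the
 * "always lock in ascending order" rule for nesting two page locks.
 * lock_two_pages() is a hypothetical helper; it assumes page_lock()
 * returns nonzero on success, as in xen/arch/x86/mm.c.
 */
#include <xen/lib.h>
#include <xen/mm.h>
#include <asm/mm.h>

static int lock_two_pages(struct page_info *pg1, struct page_info *pg2)
{
    ASSERT(pg1 != pg2);

    /*
     * The frame table is a linear array, so pointer order matches MFN
     * order.  Imposing it globally means two CPUs locking the same pair
     * of pages can never acquire them in opposite orders and deadlock.
     */
    if ( pg1 > pg2 )
    {
        struct page_info *tmp = pg1;
        pg1 = pg2;
        pg2 = tmp;
    }

    if ( !page_lock(pg1) )
        return 0;

    /*
     * Nesting the second page lock inside the first is safe only
     * because the ordering above is globally consistent.
     */
    if ( !page_lock(pg2) )
    {
        page_unlock(pg1);
        return 0;
    }

    return 1;
}

Unlock order does not matter for deadlock avoidance, so the matching
unlock path can release the two locks in either order.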