[Xen-devel] [PATCH 5/5] mm: Don't hold heap lock in alloc_heap_pages() longer than necessary
Once pages are removed from the heap, we don't need to hold the heap
lock. It is especially useful to drop it for an unscrubbed buddy, since
we will be scrubbing it.

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@xxxxxxxxxx>
Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
---
Changes in v2:
* Moved spin_unlock() after d->last_alloc_node assignment

 xen/common/page_alloc.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index 6c08983..8df92aa 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -860,6 +860,7 @@ static struct page_info *alloc_heap_pages(
     struct page_info *pg;
     bool need_tlbflush = false;
     uint32_t tlbflush_timestamp = 0;
+    unsigned int dirty_cnt = 0;
 
     /* Make sure there are enough bits in memflags for nodeID. */
     BUILD_BUG_ON((_MEMF_bits - _MEMF_node) < (8 * sizeof(nodeid_t)));
@@ -948,6 +949,8 @@ static struct page_info *alloc_heap_pages(
     if ( d != NULL )
         d->last_alloc_node = node;
 
+    spin_unlock(&heap_lock);
+
     for ( i = 0; i < (1 << order); i++ )
     {
         /* Reference count must continuously be zero for free pages. */
@@ -957,7 +960,7 @@ static struct page_info *alloc_heap_pages(
         {
             if ( !(memflags & MEMF_no_scrub) )
                 scrub_one_page(&pg[i]);
-            node_need_scrub[node]--;
+            dirty_cnt++;
         }
 
         pg[i].count_info = PGC_state_inuse;
@@ -979,6 +982,8 @@ static struct page_info *alloc_heap_pages(
             check_one_page(&pg[i]);
     }
 
+    spin_lock(&heap_lock);
+    node_need_scrub[node] -= dirty_cnt;
     spin_unlock(&heap_lock);
 
     if ( need_tlbflush )
-- 
1.8.3.1
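The pattern the patch applies — release the lock around the expensive
per-page scrub, count the dirty pages in a local variable, and re-take
the lock only long enough to fold the accumulated delta back into the
shared counter — can be sketched in isolation. Below is a minimal
stand-alone illustration using pthreads rather than Xen's spinlocks;
heap_lock, need_scrub_count, scrub_page() and alloc_and_scrub() are
hypothetical stand-ins for the hypervisor internals, not the actual
Xen API.

/*
 * Minimal sketch (assumed names, not Xen code): drop the lock for the
 * long-running scrub, accumulate the bookkeeping delta locally, then
 * briefly re-acquire the lock to apply it.
 */
#include <pthread.h>
#include <stdbool.h>
#include <string.h>

#define PAGE_SIZE 4096

struct page {
    bool dirty;                        /* needs scrubbing before reuse */
    unsigned char data[PAGE_SIZE];
};

static pthread_mutex_t heap_lock = PTHREAD_MUTEX_INITIALIZER;
static unsigned int need_scrub_count;  /* protected by heap_lock */

/* Expensive per-page work; must not be done while holding heap_lock. */
static void scrub_page(struct page *pg)
{
    memset(pg->data, 0, sizeof(pg->data));
    pg->dirty = false;
}

void alloc_and_scrub(struct page *pages, unsigned int count)
{
    unsigned int i, dirty_cnt = 0;

    pthread_mutex_lock(&heap_lock);
    /* ... remove 'pages' from the free lists; this needs the lock ... */
    pthread_mutex_unlock(&heap_lock);  /* the pages are now ours alone */

    for ( i = 0; i < count; i++ )
    {
        if ( pages[i].dirty )
        {
            scrub_page(&pages[i]);     /* long-running, lock not held */
            dirty_cnt++;               /* defer shared-counter update */
        }
    }

    /* Re-take the lock only to fold the accumulated delta back in. */
    pthread_mutex_lock(&heap_lock);
    need_scrub_count -= dirty_cnt;
    pthread_mutex_unlock(&heap_lock);
}

The key invariant, as the commit message states, is that once the buddy
has been taken off the free lists no other CPU can reach those pages,
so only the shared scrub counter still requires the lock's protection.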