
Re: [Xen-devel] [PATCH 5/5] mm: Don't hold heap lock in alloc_heap_pages() longer than necessary



>>> On 30.08.17 at 11:27, <andrew.cooper3@xxxxxxxxxx> wrote:
> On 29/08/17 18:09, Boris Ostrovsky wrote:
>> Once pages are removed from the heap we don't need to hold the heap
>> lock. It is especially useful to drop it for an unscrubbed buddy since
>> we will be scrubbing it.
>>
>> Signed-off-by: Boris Ostrovsky <boris.ostrovsky@xxxxxxxxxx>
>> Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
>> ---
>> Changes in v2:
>> * Moved spin_unlock() after d->last_alloc_node assignment
>>
>>  xen/common/page_alloc.c | 7 ++++++-
>>  1 file changed, 6 insertions(+), 1 deletion(-)
>>
>> diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
>> index 6c08983..8df92aa 100644
>> --- a/xen/common/page_alloc.c
>> +++ b/xen/common/page_alloc.c
>> @@ -860,6 +860,7 @@ static struct page_info *alloc_heap_pages(
>>      struct page_info *pg;
>>      bool need_tlbflush = false;
>>      uint32_t tlbflush_timestamp = 0;
>> +    unsigned int dirty_cnt = 0;
>>  
>>      /* Make sure there are enough bits in memflags for nodeID. */
>>      BUILD_BUG_ON((_MEMF_bits - _MEMF_node) < (8 * sizeof(nodeid_t)));
>> @@ -948,6 +949,8 @@ static struct page_info *alloc_heap_pages(
>>      if ( d != NULL )
>>          d->last_alloc_node = node;
>>  
>> +    spin_unlock(&heap_lock);
>> +
>>      for ( i = 0; i < (1 << order); i++ )
>>      {
>>          /* Reference count must continuously be zero for free pages. */
>> @@ -957,7 +960,7 @@ static struct page_info *alloc_heap_pages(
>>          {
>>              if ( !(memflags & MEMF_no_scrub) )
>>                  scrub_one_page(&pg[i]);
>> -            node_need_scrub[node]--;
>> +            dirty_cnt++;
>>          }
>>  
>>          pg[i].count_info = PGC_state_inuse;
>> @@ -979,6 +982,8 @@ static struct page_info *alloc_heap_pages(
>>              check_one_page(&pg[i]);
>>      }
>>  
>> +    spin_lock(&heap_lock);
>> +    node_need_scrub[node] -= dirty_cnt;
>>      spin_unlock(&heap_lock);
>>  
>>      if ( need_tlbflush )
> 
> This patch has been applied to staging, but it's got problems.  The
> following crash is rather trivial to provoke:
> 
> ~Andrew
> 
> (d19) Test result: SUCCESS
> (XEN) ----[ Xen-4.10-unstable  x86_64  debug=y   Tainted:    H ]----
> (XEN) CPU:    5
> (XEN) RIP:    e008:[<ffff82d0802252fc>] page_alloc.c#free_heap_pages+0x786/0x7a1
>...
> (XEN) Pagetable walk from ffff82ffffffffe4:
> (XEN)  L4[0x105] = 00000000abe5b063 ffffffffffffffff
> (XEN)  L3[0x1ff] = 0000000000000000 ffffffffffffffff

Some negative offset into somewhere, it seems. Upon second
look I think the patch is simply wrong in its current shape:
free_heap_pages() looks for page_state_is(..., free) when
trying to merge chunks, while alloc_heap_pages() now sets
PGC_state_inuse outside of the locked area. A racing free of
an adjacent buddy can therefore still observe the
just-allocated pages as free and merge with them, corrupting
the free lists. I'll revert it right away.
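
In case the window is not obvious, here is a minimal
user-space sketch of the pattern (my own toy model, not the
Xen code; all names are made up): an "allocator" takes the
page off the free list under the lock but flips its state
flag only after dropping it, while a "merger" trusts that
flag under the lock, just as free_heap_pages() trusts
page_state_is(..., free).

/*
 * Toy model of the race, not Xen code.  The allocator removes the
 * page from the free list under the lock, but marks it in-use only
 * after dropping the lock -- mirroring the reverted patch.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

enum page_state { PG_FREE, PG_INUSE };

static pthread_mutex_t heap_lock = PTHREAD_MUTEX_INITIALIZER;
static _Atomic enum page_state state = PG_FREE;
static bool on_free_list = true;

static void *allocator(void *arg)
{
    for ( ;; )
    {
        pthread_mutex_lock(&heap_lock);
        on_free_list = false;     /* page removed from the heap */
        pthread_mutex_unlock(&heap_lock);

        state = PG_INUSE;         /* too late: done outside the lock */

        pthread_mutex_lock(&heap_lock);
        state = PG_FREE;          /* "free" it again for the next round */
        on_free_list = true;
        pthread_mutex_unlock(&heap_lock);
    }
    return NULL;
}

static void *merger(void *arg)
{
    for ( ;; )
    {
        pthread_mutex_lock(&heap_lock);
        /* Mirrors free_heap_pages(): trust the state flag when merging. */
        if ( !on_free_list && state == PG_FREE )
            printf("merged a chunk that was already allocated!\n");
        pthread_mutex_unlock(&heap_lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;

    pthread_create(&t1, NULL, allocator, NULL);
    pthread_create(&t2, NULL, merger, NULL);
    pthread_join(t1, NULL);       /* never returns; watch for the message */
    return 0;
}

Build with gcc -pthread and let it run; the message fires as
soon as the merger hits the window between the unlock and the
state update. In page_alloc.c terms that window is exactly
where a just-allocated buddy can be merged back.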

Jan

