[xen stable-4.19] xen/mm: move adjustment of claimed pages counters on allocation
commit 999fdfb104cfbdfb5e4c495e6d065e63c1a51c6d
Author: Roger Pau Monné <roger.pau@xxxxxxxxxx>
AuthorDate: Tue Jan 13 15:50:36 2026 +0100
Commit: Jan Beulich <jbeulich@xxxxxxxx>
CommitDate: Tue Jan 13 15:50:36 2026 +0100
xen/mm: move adjustment of claimed pages counters on allocation
The current logic splits the update of the amount of available memory in
the system (total_avail_pages) and of pending claims into two separately
locked regions. This leaves a window between the counter adjustments
where total_avail_pages - outstanding_claims doesn't reflect the real
amount of free memory available, and can even be negative because
total_avail_pages is updated ahead of outstanding_claims.
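(Illustration, not part of the patch.) A minimal C sketch of the
pre-patch ordering, using Xen's real heap_lock/spin_lock() but with the
allocation path heavily simplified; sample_free_memory() is a
hypothetical observer:

    static long total_avail_pages;
    static long outstanding_claims;

    /* Hypothetical observer: reports a value lower than the real free
     * memory (possibly negative) if it runs in the window between the
     * two locked regions below. */
    static long sample_free_memory(void)
    {
        return total_avail_pages - outstanding_claims;
    }

    static void alloc_path_pre_patch(long request)
    {
        spin_lock(&heap_lock);
        total_avail_pages -= request;     /* first locked region */
        spin_unlock(&heap_lock);

        /* <-- window: the two counters are out of sync here */

        spin_lock(&heap_lock);
        outstanding_claims -= request;    /* second locked region */
        spin_unlock(&heap_lock);
    }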
Fix by adjusting outstanding_claims and d->outstanding_pages in the same
place where total_avail_pages is updated. Note that accesses to
d->outstanding_pages are protected by the global heap_lock, just like
total_avail_pages and outstanding_claims. Add a comment to the field
declaration, and also adjust the comment at the top of
domain_set_outstanding_pages() to be clearer in that regard.
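(Again illustration only.) With the fix, both counters move inside a
single heap_lock-protected region, so their difference stays consistent
at all times. This is a simplified restatement of the hunk added to
alloc_heap_pages() below (the MEMF_no_refcount check is omitted):

    static void alloc_path_post_patch(struct domain *d, long request)
    {
        spin_lock(&heap_lock);
        total_avail_pages -= request;
        if ( d && d->outstanding_pages )
        {
            /* Consume at most the domain's remaining claim. */
            long consumed = min(d->outstanding_pages + 0L, request);

            outstanding_claims -= consumed;
            d->outstanding_pages -= consumed;
        }
        spin_unlock(&heap_lock);
    }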
Note that a failure in assign_pages() causes the claimed amount that has
been allocated to be lost, as the amount is not added back to the domain
quota once the pages are freed. Given that the intended usage of claims is
limited to initial physmap population, and that the current failure paths in
assign_pages() would lead to the domain being destroyed anyway, don't
add extra logic to recover the claimed amount on failure - it would just add
complexity for no real benefit.
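(Illustration only, simplified from the real call flow; the function
names are the real Xen ones, but arguments are elided.) The failure
mode described above:

    pg = alloc_heap_pages(...);   /* claim consumed here, under heap_lock */
    if ( assign_pages(pg, ...) )  /* e.g. domain is over its allocation */
    {
        /* Pages go back to the heap, but d->outstanding_pages and
         * outstanding_claims are NOT replenished: that part of the
         * claim is lost.  Acceptable, as the domain is being destroyed
         * anyway. */
        free_heap_pages(pg, ...);
    }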
Fixes: 65c9792df600 ("mmu: Introduce XENMEM_claim_pages (subop of memory ops)")
Signed-off-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
master commit: 75d29d9b5e226bafa0fbf9fba25623229660b81e
master date: 2026-01-08 11:05:30 +0100
---
xen/common/page_alloc.c | 56 ++++++++++++++++++++++++-------------------------
xen/include/xen/sched.h | 3 ++-
2 files changed, 30 insertions(+), 29 deletions(-)
diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index 054b7edb39..bbb8578459 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -475,34 +475,9 @@ static long outstanding_claims; /* total outstanding claims by all domains */
unsigned long domain_adjust_tot_pages(struct domain *d, long pages)
{
- long dom_before, dom_after, dom_claimed, sys_before, sys_after;
-
ASSERT(rspin_is_locked(&d->page_alloc_lock));
d->tot_pages += pages;
- /*
- * can test d->claimed_pages race-free because it can only change
- * if d->page_alloc_lock and heap_lock are both held, see also
- * domain_set_outstanding_pages below
- */
- if ( !d->outstanding_pages )
- goto out;
-
- spin_lock(&heap_lock);
- /* adjust domain outstanding pages; may not go negative */
- dom_before = d->outstanding_pages;
- dom_after = dom_before - pages;
- BUG_ON(dom_before < 0);
- dom_claimed = dom_after < 0 ? 0 : dom_after;
- d->outstanding_pages = dom_claimed;
- /* flag accounting bug if system outstanding_claims would go negative */
- sys_before = outstanding_claims;
- sys_after = sys_before - (dom_before - dom_claimed);
- BUG_ON(sys_after < 0);
- outstanding_claims = sys_after;
- spin_unlock(&heap_lock);
-
-out:
return d->tot_pages;
}
@@ -512,9 +487,10 @@ int domain_set_outstanding_pages(struct domain *d, unsigned long pages)
unsigned long claim, avail_pages;
/*
- * take the domain's page_alloc_lock, else all d->tot_page adjustments
- * must always take the global heap_lock rather than only in the much
- * rarer case that d->outstanding_pages is non-zero
+ * Two locks are needed here:
+ * - d->page_alloc_lock: protects accesses to d->{tot,max,extra}_pages.
+ * - heap_lock: protects accesses to d->outstanding_pages, total_avail_pages
+ * and outstanding_claims.
*/
nrspin_lock(&d->page_alloc_lock);
spin_lock(&heap_lock);
@@ -1014,6 +990,30 @@ static struct page_info *alloc_heap_pages(
total_avail_pages -= request;
ASSERT(total_avail_pages >= 0);
+ if ( d && d->outstanding_pages && !(memflags & MEMF_no_refcount) )
+ {
+ /*
+ * Adjust claims in the same locked region where total_avail_pages is
+ * adjusted, not doing so would lead to a window where the amount of
+ * free memory (avail - claimed) would be incorrect.
+ *
+ * Note that by adjusting the claimed amount here it's possible for
+ * pages to fail to be assigned to the claiming domain while already
+ * having been subtracted from d->outstanding_pages. Such claimed
+ * amount is then lost, as the pages that fail to be assigned to the
+ * domain are freed without replenishing the claim. This is fine given
+ * claims are only to be used during physmap population as part of
+ * domain build, and any failure in assign_pages() there will result in
+ * the domain being destroyed before creation is finished. Losing part
+ * of the claim makes no difference.
+ */
+ unsigned long outstanding = min(d->outstanding_pages + 0UL, request);
+
+ BUG_ON(outstanding > outstanding_claims);
+ outstanding_claims -= outstanding;
+ d->outstanding_pages -= outstanding;
+ }
+
check_low_mem_virq();
if ( d != NULL )
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 2dcd1d1a4f..2a83b9dacf 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -393,7 +393,8 @@ struct domain
unsigned int tot_pages;
unsigned int xenheap_pages; /* pages allocated from Xen heap */
- unsigned int outstanding_pages; /* pages claimed but not possessed */
+ /* Pages claimed but not possessed, protected by global heap_lock. */
+ unsigned int outstanding_pages;
unsigned int max_pages; /* maximum value for domain_tot_pages() */
unsigned int extra_pages; /* pages not included in domain_tot_pages() */
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.19