
[PATCH v7 1/3] xen/mm: Introduce per-node free page counter



From: Alejandro Vallejo <alejandro.vallejo@xxxxxxxxx>

Add node_avail_pages[], updated under heap_lock in sync with
avail[node][zone] to cache the per-node sum of free pages.

Use it in avail_node_heap_pages() to avoid summing all zones on each
call. Guard the lookup with nodeid < MAX_NUMNODES and
node_online(nodeid) so offline and out-of-range nodes still return 0.

Signed-off-by: Alejandro Vallejo <alejandro.vallejo@xxxxxxxxx>
Signed-off-by: Bernhard Kaindl <bernhard.kaindl@xxxxxxxxxx>
---
This patch was originally sent by Alejandro Vallejo:
https://lists.xenproject.org/archives/html/xen-devel/2025-03/msg01130.html

I use node_avail_pages[] in avail_node_heap_pages() as an optimisation.

Verification of the changes:

1. node_avail_pages[node] is updated whenever avail[node][zone] changes,
   so the two remain in sync (see the standalone sketch after this list).

2. avail_node_heap_pages() previously summed all zones of a node and now
   returns node_avail_pages[node], so the same free buddy pages are
   counted.

3. avail_node_heap_pages() returns 0 for offline nodes and for nodes
   >= MAX_NUMNODES as before.

4. Previously, avail_node_heap_pages(-1) returned the sum over all nodes
   (equal to total_avail_pages), but no current caller relies on this;
   callers that need the total use avail_heap_pages(z, z, -1) instead.
   To avoid dead code, no check for -1 is added.

Update locations:

- free_heap_pages() increments node_avail_pages[node] alongside
  avail[node][zone] when pages are freed, including during heap
  initialisation.

- alloc_heap_pages() decrements node_avail_pages[node] alongside
  avail[node][zone] when pages are allocated.

- reserve_offlined_page() decrements node_avail_pages[node] alongside
  avail[node][zone] when pages are offlined.

Colored pages do not go through the buddy allocator.
Since they do not update avail[node][zone], they are
not reflected in node_avail_pages[node] either.

N.B. Current callers already iterate over online nodes only.
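
For illustration, a hypothetical caller following that pattern might look
like the sketch below (not an in-tree caller; it only assumes the existing
for_each_online_node() iterator and the avail_node_heap_pages() interface
touched by this patch):

    unsigned long total = 0;
    unsigned int node;

    /* Hypothetical caller: only online nodes are visited, so the
     * nodeid < MAX_NUMNODES / node_online() guard never limits it. */
    for_each_online_node ( node )
        total += avail_node_heap_pages(node);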

Changes since v6:
- Preserved the 0 return for offline nodes and nodes >= MAX_NUMNODES.
---
 xen/common/page_alloc.c | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index 2c4ff2c34c70..7e17bafa1e45 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -485,6 +485,7 @@ static unsigned long node_need_scrub[MAX_NUMNODES];
 
 static unsigned long *avail[MAX_NUMNODES];
 static unsigned long total_avail_pages;
+static unsigned long node_avail_pages[MAX_NUMNODES];
 
 static DEFINE_SPINLOCK(heap_lock);
 /* Total outstanding claims by all domains */
@@ -1050,6 +1051,8 @@ static struct page_info *alloc_heap_pages(
     avail[node][zone] -= request;
     ASSERT(total_avail_pages >= request);
     total_avail_pages -= request;
+    ASSERT(node_avail_pages[node] >= request);
+    node_avail_pages[node] -= request;
 
     if ( d && d->outstanding_pages && !(memflags & MEMF_no_refcount) )
     {
@@ -1243,6 +1246,8 @@ static int reserve_offlined_page(struct page_info *head)
         avail[node][zone]--;
         ASSERT(total_avail_pages > 0);
         total_avail_pages--;
+        ASSERT(node_avail_pages[node] > 0);
+        node_avail_pages[node]--;
 
         page_list_add_tail(cur_head,
                            test_bit(_PGC_broken, &cur_head->count_info) ?
@@ -1566,6 +1571,7 @@ static void free_heap_pages(
 
     avail[node][zone] += 1 << order;
     total_avail_pages += 1 << order;
+    node_avail_pages[node] += 1 << order;
     if ( need_scrub )
     {
         node_need_scrub[node] += 1 << order;
@@ -2831,7 +2837,9 @@ unsigned long avail_domheap_pages_region(
 
 unsigned long avail_node_heap_pages(unsigned int nodeid)
 {
-    return avail_heap_pages(MEMZONE_XEN, NR_ZONES -1, nodeid);
+    if ( nodeid < MAX_NUMNODES && node_online(nodeid) )
+        return node_avail_pages[nodeid];
+    return 0;
 }
 
 
-- 
2.39.5