[Xen-devel] [PATCH] page-alloc: Clamp get_free_buddy() to online nodes
d->node_affinity defaults to NODE_MASK_ALL, which has bits set outside of
node_online_map. This in turn causes the loop in get_free_buddy() to waste
effort iterating over offline nodes.
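As a rough sketch of the effect (illustrative only, assuming a 2-node system
with MAX_NUMNODES of 64; this is not the exact get_free_buddy() loop):

    nodemask_t nodemask = NODE_MASK_ALL;   /* the default d->node_affinity */
    unsigned int node;

    for ( node = first_node(nodemask); node < MAX_NUMNODES;
          node = next_node(node, nodemask) )
    {
        /* Offline node IDs 2..63 are visited as well, to no benefit. */
    }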
Always clamp d->node_affinity to node_online_map when in use.
Doing so involves implementing nodes_copy(), which was previously missing.
Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
---
CC: Jan Beulich <JBeulich@xxxxxxxx>
CC: Wei Liu <wl@xxxxxxx>
CC: Roger Pau Monné <roger.pau@xxxxxxxxxx>
CC: Stefano Stabellini <sstabellini@xxxxxxxxxx>
CC: Julien Grall <julien.grall@xxxxxxx>
CC: George Dunlap <george.dunlap@xxxxxxxxxxxxx>
This patch hides the issue identified in the "UBSAN report in find_next_bit()"
thread, so it probably shouldn't be applied until that is resolved.
A lower overhead option would be to do:

  nodes_and(nodemask, node_online_map, d ? d->node_affinity : node_online_map);

however this doesn't work because the nodemask_t API hides an &(param) behind
every wrapper. I've got half a mind to undo this nonsense and have nodemask_t
work in exactly the same way as cpumask_t.
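For context, a sketch of why that fails, based on the wrapper style already
used in nodemask.h (the exact definition may differ slightly):

    #define nodes_and(dst, src1, src2) \
        __nodes_and(&(dst), &(src1), &(src2), MAX_NUMNODES)

i.e. the macro would end up taking the address of the rvalue produced by the
?: expression, which the compiler rejects.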
---
xen/common/page_alloc.c | 7 ++++++-
xen/include/xen/nodemask.h | 6 ++++++
2 files changed, 12 insertions(+), 1 deletion(-)
diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index 7825fd8c42..cec1b15d5b 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -811,11 +811,16 @@ static struct page_info *get_free_buddy(unsigned int zone_lo,
const struct domain *d)
{
nodeid_t first_node, node = MEMF_get_node(memflags), req_node = node;
- nodemask_t nodemask = d ? d->node_affinity : node_online_map;
+ nodemask_t nodemask;
unsigned int j, zone, nodemask_retry = 0;
struct page_info *pg;
bool use_unscrubbed = (memflags & MEMF_no_scrub);
+ /* Clamp nodemask to node_online_map and optionally d->node_affinity. */
+ nodes_copy(nodemask, node_online_map);
+ if ( d )
+ nodes_and(nodemask, nodemask, d->node_affinity);
+
if ( node == NUMA_NO_NODE )
{
if ( d != NULL )
diff --git a/xen/include/xen/nodemask.h b/xen/include/xen/nodemask.h
index e287399352..e83cfe2439 100644
--- a/xen/include/xen/nodemask.h
+++ b/xen/include/xen/nodemask.h
@@ -189,6 +189,12 @@ static inline int __nodes_weight(const nodemask_t *srcp, int nbits)
return bitmap_weight(srcp->bits, nbits);
}
+#define nodes_copy(dst, src) __nodes_copy(&(dst), &(src))
+static inline void __nodes_copy(nodemask_t *dst, const nodemask_t *src)
+{
+ bitmap_copy(dst->bits, src->bits, MAX_NUMNODES);
+}
+
#define nodes_shift_right(dst, src, n) \
__nodes_shift_right(&(dst), &(src), (n), MAX_NUMNODES)
static inline void __nodes_shift_right(nodemask_t *dstp,
--
2.11.0
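For illustration only (not part of the patch), the new helper mirrors
cpumask_copy() and would be used along the lines of:

    nodemask_t mask;

    nodes_copy(mask, node_online_map);
    ASSERT(nodes_equal(mask, node_online_map));   /* exact copy of the source */

assuming the nodes_equal() helper already present in the same header.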
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel