Re: [PATCH v4 05/10] xen/domain: Add DOMCTL handler for claiming memory with NUMA awareness
On 26.02.2026 15:29, Bernhard Kaindl wrote:
> --- a/xen/common/domain.c
> +++ b/xen/common/domain.c
> @@ -268,6 +268,35 @@ int get_domain_state(struct xen_domctl_get_domain_state *info, struct domain *d,
> return rc;
> }
>
> +/* Claim memory for a domain or reset the claim */
> +int claim_memory(struct domain *d, const struct xen_domctl_claim_memory *uinfo)
static in domctl.c? Otherwise with Penny's work to make domctl optional this
would be unreachable code.
> +{
> +    memory_claim_t claim;
> +
> +    /* alloc_color_heap_page() does not handle claims, so reject LLC coloring */
> +    if ( llc_coloring_enabled )
> +        return -EOPNOTSUPP;
> +    /*
> +     * We only support single claims at the moment, and if the domain is
> +     * dying (d->is_dying is set), its claims have already been released
> +     */
> +    if ( uinfo->pad || uinfo->nr_claims != 1 || d->is_dying )
> +        return -EINVAL;
As already alluded to in reply to patch 03, I can't help the impression that
usage of this sub-op with multiple entries would be quite different (i.e. it
would be not only the implementation in Xen that changes). I'm therefore
pretty uncertain whether taking it with this restriction is going to make
much sense.
> +    if ( copy_from_guest(&claim, uinfo->claims, 1) )
> +        return -EFAULT;
> +
> +    if ( claim.pad )
> +        return -EINVAL;
> +
> +    /* Convert the API tag for a host-wide claim to the NUMA_NO_NODE constant */
> +    if ( claim.node == XEN_DOMCTL_CLAIM_MEMORY_NO_NODE )
> +        claim.node = NUMA_NO_NODE;
What about the incoming claim.node being NUMA_NO_NODE? Imo the range checking
the previous patch adds to domain_set_outstanding_pages() wants to move here,
at which point the function's new parameter could be properly nodeid_t.
> +    /* NB. domain_set_outstanding_pages() has the checks to validate its args */
> +    return domain_set_outstanding_pages(d, claim.pages, claim.node);
> +}
There's no copying back of the result. When this is extended to allow more
than one entry, what's the plan towards dealing with partial success? Needing
to roll back may be unwieldy.
> --- a/xen/include/public/domctl.h
> +++ b/xen/include/public/domctl.h
> @@ -1276,6 +1276,42 @@ struct xen_domctl_get_domain_state {
> uint64_t unique_id; /* Unique domain identifier. */
> };
>
> +/*
> + * XEN_DOMCTL_claim_memory
> + *
> + * Claim memory for a guest domain. The claimed memory is converted into actual
> + * memory pages by allocating it. Except for the option to pass claims for
> + * multiple NUMA nodes, the semantics are based on host-wide claims as
> + * provided by XENMEM_claim_pages, and are identical for host-wide claims.
> + *
> + * The initial implementation supports a claim for the host or a NUMA node, but
> + * using an array, the API is designed to be extensible to support more claims.
> + */
> +struct xen_memory_claim {
> +    uint64_aligned_t pages; /* Amount of pages to be allotted to the domain */
> +    uint32_t node; /* NUMA node, or XEN_DOMCTL_CLAIM_MEMORY_NO_NODE for host */
> +    uint32_t pad;  /* padding for alignment, set to 0 on input */
This isn't for alignment; it's there to make the padding explicit.
> +};
> +typedef struct xen_memory_claim memory_claim_t;
> +#define XEN_DOMCTL_CLAIM_MEMORY_NO_NODE 0xFFFFFFFF /* No node: host claim */
Misra demands a U suffix here.
"host claim" (in the comment) also is ambiguous. Per-node claims also affect
the host. Maybe "host wide" or "global"?
> +/* Use XEN_NODE_CLAIM_INIT to initialize a memory_claim_t structure */
> +#define XEN_NODE_CLAIM_INIT(_pages, _node) { \
> + .pages = (_pages), \
> + .node = (_node), \
> + .pad = 0 \
> +}
While only a macro, it's still not C89, and hence may want offering only as
an extension. Also .pad doesn't need explicitly specifying, does it? If you
provide such a macro, identifiers used also need to strictly conform to the
C spec (IOW leading underscores aren't permitted).
> +DEFINE_XEN_GUEST_HANDLE(memory_claim_t);
This wants to move up next to the typedef.
> +struct xen_domctl_claim_memory {
> +    /* IN: array of struct xen_memory_claim */
> +    XEN_GUEST_HANDLE_64(memory_claim_t) claims;
> +    /* IN: number of claims in the claims array handle. See the claims field. */
> +    uint32_t nr_claims;
Is repeating the word "claim" necessary / useful here?
> +#define XEN_DOMCTL_MAX_CLAIMS UINT8_MAX /* More claims require changes in Xen */
> +    uint32_t pad; /* padding for alignment, set it to 0 */
Same comment as on the other pad field.
> @@ -1368,6 +1404,7 @@ struct xen_domctl {
> #define XEN_DOMCTL_gsi_permission 88
> #define XEN_DOMCTL_set_llc_colors 89
> #define XEN_DOMCTL_get_domain_state 90 /* stable interface */
> +#define XEN_DOMCTL_claim_memory 91
Seeing the adjacent comment, did you consider making this new sub-op a
stable one as well?
Jan