Re: [PATCH v4 05/10] xen/domain: Add DOMCTL handler for claiming memory with NUMA awareness
On Thu, Feb 26, 2026 at 02:29:19PM +0000, Bernhard Kaindl wrote:
> Add a DOMCTL handler for claiming memory with NUMA awareness. It
> rejects claims when LLC coloring (does not support claims) is enabled
> and translates the public constant to the internal NUMA_NO_NODE.
>
> The request is forwarded to domain_set_outstanding_pages() for the
> actual claim processing. The handler uses the same XSM hook as the
> legacy XENMEM_claim_pages hypercall.
>
> While the underlying infrastructure currently supports only a single
> claim, the public hypercall interface is designed to be extensible for
> multiple claims in the future without breaking the API.
>
> Signed-off-by: Bernhard Kaindl <bernhard.kaindl@xxxxxxxxxx>
> ---
> xen/common/domain.c | 29 ++++++++++++++++++++++++++++
> xen/common/domctl.c | 9 +++++++++
> xen/include/public/domctl.h | 38 +++++++++++++++++++++++++++++++++++++
> xen/include/xen/domain.h | 2 ++
> 4 files changed, 78 insertions(+)
>
> diff --git a/xen/common/domain.c b/xen/common/domain.c
> index e7861259a2b3..ac1b091f5574 100644
> --- a/xen/common/domain.c
> +++ b/xen/common/domain.c
> @@ -268,6 +268,35 @@ int get_domain_state(struct xen_domctl_get_domain_state *info, struct domain *d,
> return rc;
> }
>
> +/* Claim memory for a domain or reset the claim */
> +int claim_memory(struct domain *d, const struct xen_domctl_claim_memory
> *uinfo)
> +{
> + memory_claim_t claim;
> +
> + /* alloc_color_heap_page() does not handle claims, so reject LLC coloring */
> + if ( llc_coloring_enabled )
> + return -EOPNOTSUPP;
> + /*
> + * We only support single claims at the moment, and if the domain is
> + * dying (d->is_dying is set), its claims have already been released
> + */
> + if ( uinfo->pad || uinfo->nr_claims != 1 || d->is_dying )
If we can move forward with this single-node claim implementation,
the return code for uinfo->nr_claims != 1 needs to be -EOPNOTSUPP, to
differentiate "the hypervisor doesn't support the operation" from
"there's an error in the input parameters". That check needs to be
moved into the previous if condition.
If the domain is dying we could also return -ESRCH, so that the
different error paths can be told apart from the return code of the
hypercall.