[PATCH 11/11] docs/man: Document the new claim_on_node option
... and while at it, add missing relevant links to xl-numa-placement(7) in
xl.1.pod.in and xl.cfg.5.pod.in, which describes libxl's behaviour on NUMA
placement.

Signed-off-by: Alejandro Vallejo <alejandro.vallejo@xxxxxxxxx>
---
 docs/man/xl-numa-placement.7.pod |  8 ++++++++
 docs/man/xl.1.pod.in             |  2 +-
 docs/man/xl.cfg.5.pod.in         | 14 ++++++++++++++
 3 files changed, 23 insertions(+), 1 deletion(-)

diff --git a/docs/man/xl-numa-placement.7.pod b/docs/man/xl-numa-placement.7.pod
index 4d83f26d412e..287ad41e5071 100644
--- a/docs/man/xl-numa-placement.7.pod
+++ b/docs/man/xl-numa-placement.7.pod
@@ -173,6 +173,14 @@ soft affinity belong.
 
 =back
 
+It's possible to force memory to be allocated from a specific NUMA node via the
+"claim_on_node" option (See B<claim_mode> in L<xl.conf(5)> for more details on
+claims). "claim_on_node" associates the domain with a single NUMA node. Domain
+creation fails if not enough memory can be reserved in the node and memory is
+preferentially allocated from that node at runtime. The downside is that
+claiming memory on a node via "claim_on_node" doesn't automatically set
+soft-affinity for the vCPUs, so that must still be done manually for a fully
+optimised single-node domain.
 
 =head2 Placing the guest automatically
 
diff --git a/docs/man/xl.1.pod.in b/docs/man/xl.1.pod.in
index fe38724b2b82..27a972486296 100644
--- a/docs/man/xl.1.pod.in
+++ b/docs/man/xl.1.pod.in
@@ -2012,7 +2012,7 @@ otherwise behavior is undefined. Setting to 0 disables the timeout.
 The following man pages:
 
 L<xl.cfg(5)>, L<xlcpupool.cfg(5)>, L<xentop(1)>, L<xl-disk-configuration(5)>
-L<xl-network-configuration(5)>
+L<xl-network-configuration(5)>, L<xl-numa-placement(7)>
 
 And the following documents on the xenproject.org website:
 
diff --git a/docs/man/xl.cfg.5.pod.in b/docs/man/xl.cfg.5.pod.in
index 7339c44efd54..c1ffc29d312a 100644
--- a/docs/man/xl.cfg.5.pod.in
+++ b/docs/man/xl.cfg.5.pod.in
@@ -278,6 +278,18 @@ memory=8096 will report significantly less memory available for use than a
 system with maxmem=8096 memory=8096 due to the memory overhead of having
 to track the unused pages.
 
+=item B<claim_on_node=NUMBER>
+
+Binds guest memory to a particular host NUMA node. Creation only starts if
+enough memory on this NUMA node can be reserved beforehand on the hypervisor.
+Failure to claim memory aborts creation early.
+
+See B<claim_mode> in L<xl.conf(5)> for further details on memory claims.
+
+Overriding B<claim_on_node> forces B<claim_mode> to be set on this guest and
+disables automatic NUMA placement (See L<xl-numa-placement(7)> for further
+details on NUMA placement and the effects of this option.)
+
 =back
 
 =head3 Guest Virtual NUMA Configuration
@@ -3143,6 +3155,8 @@ If using this option is necessary to fix an issue, please report a bug.
 
 =item L<xl-network-configuration(5)>
 
+=item L<xl-numa-placement(7)>
+
 =item L<xen-tscmode(7)>
 
 =back
-- 
2.48.1
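For review context, a hypothetical guest config fragment showing how the new option would be used together with manually set vCPU soft affinity (which, as the patch notes, claim_on_node does not configure automatically). All values below are illustrative assumptions, not taken from the patch; the node-to-pCPU mapping shown is made up and would need checking against the actual host topology (e.g. via `xl info -n`):

```
# Sketch of an xl.cfg for a single-node-optimised guest, assuming node 0
# has pCPUs 0-7 on this hypothetical host.
name = "numa-guest"
memory = 4096
vcpus = 4

# New option from this patch: claim guest memory on NUMA node 0 up front;
# domain creation aborts early if the claim cannot be satisfied.
claim_on_node = 0

# Not set automatically by claim_on_node: soft-pin the vCPUs to the same
# node's pCPUs by hand.
cpus_soft = "0-7"
```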