Re: [Xen-devel] [PATCH v3 13/14] xl: enable for specifying node-affinity in the config file
On Mon, 2013-11-18 at 19:18 +0100, Dario Faggioli wrote:
> in a similar way to how it is possible to specify vcpu-affinity.
>
> Signed-off-by: Dario Faggioli <dario.faggioli@xxxxxxxxxx>
> ---
> Changes from v2:
> * use the new libxl API. Although the implementation changed
> only a little bit, I removed IanJ's Acked-by, although I am
> here saying that he did provide it, as requested.
> ---
> docs/man/xl.cfg.pod.5 | 27 ++++++++++++++--
> tools/libxl/libxl_dom.c | 3 +-
> tools/libxl/xl_cmdimpl.c | 79 +++++++++++++++++++++++++++++++++++++++++++++-
> 3 files changed, 103 insertions(+), 6 deletions(-)
>
> diff --git a/docs/man/xl.cfg.pod.5 b/docs/man/xl.cfg.pod.5
> index 5dbc73c..733c74e 100644
> --- a/docs/man/xl.cfg.pod.5
> +++ b/docs/man/xl.cfg.pod.5
> @@ -144,19 +144,40 @@ run on cpu #3 of the host.
> =back
>
> If this option is not specified, no vcpu to cpu pinning is established,
> -and the vcpus of the guest can run on all the cpus of the host.
> +and the vcpus of the guest can run on all the cpus of the host. If this
> +option is specified, the intersection of the vcpu pinning mask, provided
> +here, and the soft affinity mask, provided via B<cpus_soft=> (if any),
> +is utilized to compute the domain node-affinity, for driving memory
> +allocations.
>
> If we are on a NUMA machine (i.e., if the host has more than one NUMA
> node) and this option is not specified, libxl automatically tries to
> place the guest on the least possible number of nodes. That, however,
> will not affect vcpu pinning, so the guest will still be able to run on
> -all the cpus, it will just prefer the ones from the node it has been
> -placed on. A heuristic approach is used for choosing the best node (or
> +all the cpus. A heuristic approach is used for choosing the best node (or
> set of nodes), with the goals of maximizing performance for the guest
> and, at the same time, achieving efficient utilization of host cpus
> and memory. See F<docs/misc/xl-numa-placement.markdown> for more
> details.
>
> +=item B<cpus_soft="CPU-LIST">
> +
> +Exactly as B<cpus=>, but specifies soft affinity, rather than pinning
> +(also called hard affinity). Starting from Xen 4.4, and if the credit
I don't think we need to reference particular versions in what is
effectively the manpage which comes with that version.
> +scheduler is used, this means the vcpus of the domain prefers to run
> +these pcpus. Default is either all pcpus or xl (via libxl) guesses
> +(depending on what other options are present).
No need to mention libxl here. TBH I would either document which other
options affect the guess or not mention it at all; as it stands, the
sentence doesn't tell me anything very useful.
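For what it's worth, an illustrative config fragment (not part of the
patch, just to make the interaction of the two options concrete) could be:

    # hard affinity: the vcpus may only run on pcpus 0-3
    cpus      = "0-3"
    # soft affinity: among those, they prefer pcpus 0-1
    cpus_soft = "0-1"

The intersection of the two masks (pcpus 0-1 here) is then what drives
the domain's node-affinity, and hence where its memory gets allocated.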
> +
> +A C<CPU-LIST> is specified exactly as above, for B<cpus=>.
> +
> +If this option is not specified, the vcpus of the guest will not have
> +any preference regarding on what cpu to run, and the scheduler will
> +treat all the cpus where a vcpu can execute (if B<cpus=> is specified),
> +or all the host cpus (if not), the same. If this option is specified,
> +the intersection of the soft affinity mask, provided here, and the vcpu
> +pinning, provided via B<cpus=> (if any), is utilized to compute the
> +domain node-affinity, for driving memory allocations.
> +
> =back
>
> =head3 CPU Scheduling
> diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
> index a1c16b0..ceb37a3 100644
> --- a/tools/libxl/libxl_dom.c
> +++ b/tools/libxl/libxl_dom.c
> @@ -236,7 +236,8 @@ int libxl__build_pre(libxl__gc *gc, uint32_t domid,
> return rc;
> }
> libxl_domain_set_nodeaffinity(ctx, domid, &info->nodemap);
> - libxl_set_vcpuaffinity_all(ctx, domid, info->max_vcpus, &info->cpumap);
> + libxl_set_vcpuaffinity_all3(ctx, domid, info->max_vcpus, &info->cpumap,
> + &info->cpumap_soft);
>
> xc_domain_setmaxmem(ctx->xch, domid, info->target_memkb +
> LIBXL_MAXMEM_CONSTANT);
> xs_domid = xs_read(ctx->xsh, XBT_NULL, "/tool/xenstored/domid", NULL);
> diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
> index d5c4eb1..660bb1f 100644
> --- a/tools/libxl/xl_cmdimpl.c
> +++ b/tools/libxl/xl_cmdimpl.c
> @@ -76,8 +76,9 @@ xlchild children[child_max];
> static const char *common_domname;
> static int fd_lock = -1;
>
> -/* Stash for specific vcpu to pcpu mappping */
> +/* Stash for specific vcpu to pcpu hard and soft mappping */
> static int *vcpu_to_pcpu;
> +static int *vcpu_to_pcpu_soft;
>
> static const char savefileheader_magic[32]=
> "Xen saved domain, xl format\n \0 \r";
> @@ -647,7 +648,8 @@ static void parse_config_data(const char *config_source,
> const char *buf;
> long l;
> XLU_Config *config;
> - XLU_ConfigList *cpus, *vbds, *nics, *pcis, *cvfbs, *cpuids, *vtpms;
> + XLU_ConfigList *cpus, *cpus_soft, *vbds, *nics, *pcis;
> + XLU_ConfigList *cvfbs, *cpuids, *vtpms;
> XLU_ConfigList *ioports, *irqs, *iomem;
> int num_ioports, num_irqs, num_iomem;
> int pci_power_mgmt = 0;
> @@ -824,6 +826,50 @@ static void parse_config_data(const char *config_source,
> libxl_defbool_set(&b_info->numa_placement, false);
> }
>
> + if (!xlu_cfg_get_list (config, "cpus_soft", &cpus_soft, 0, 1)) {
How much of this block duplicates the parsing of the pinning field? Can
it be refactored?
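FWIW, a common helper might look something like this (just a sketch: the
name parse_cpu_affinity and the exact signature are made up here, not
anything in the patch or in xl today; it assumes the caller has already
allocated the bitmap):

/* Parse either the list or the string form of a "cpus"/"cpus_soft"
 * option into cpumap.  For the list form, also stash the per-vcpu
 * pcpus into a newly allocated array returned via *stash_out. */
static void parse_cpu_affinity(XLU_Config *config, const char *option,
                               libxl_bitmap *cpumap, int max_vcpus,
                               int **stash_out,
                               libxl_defbool *numa_placement)
{
    XLU_ConfigList *cpus;
    const char *buf;
    int i, n_cpus = 0;

    if (!xlu_cfg_get_list(config, option, &cpus, 0, 1)) {
        /* List form: remember the single vcpu-to-pcpu mappings too */
        int *stash = xmalloc(sizeof(int) * max_vcpus);

        memset(stash, -1, sizeof(int) * max_vcpus);
        libxl_bitmap_set_none(cpumap);
        while ((buf = xlu_cfg_get_listitem(cpus, n_cpus)) != NULL) {
            i = atoi(buf);
            if (!libxl_bitmap_cpu_valid(cpumap, i)) {
                fprintf(stderr, "cpu %d illegal\n", i);
                exit(1);
            }
            libxl_bitmap_set(cpumap, i);
            if (n_cpus < max_vcpus)
                stash[n_cpus] = i;
            n_cpus++;
        }
        *stash_out = stash;
    } else if (!xlu_cfg_get_string(config, option, &buf, 0)) {
        /* String form: let vcpupin_parse() deal with ranges etc. */
        char *buf2 = strdup(buf);

        libxl_bitmap_set_none(cpumap);
        if (vcpupin_parse(buf2, cpumap))
            exit(1);
        free(buf2);
    } else {
        return; /* option not present: nothing to do */
    }

    /* An explicit (hard or soft) affinity disables automatic placement */
    libxl_defbool_set(numa_placement, false);
}

Both the "cpus" and "cpus_soft" blocks would then shrink to allocating
their bitmap and making one call each.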
> + int n_cpus = 0;
> +
> + if (libxl_node_bitmap_alloc(ctx, &b_info->cpumap_soft, 0)) {
> + fprintf(stderr, "Unable to allocate cpumap_soft\n");
> + exit(1);
> + }
> +
> + /* As above, use a temporary storage for the single affinities */
"use temporary storage..." (the "a" is redundant/sounds wierd)
> + vcpu_to_pcpu_soft = xmalloc(sizeof(int) * b_info->max_vcpus);
> + memset(vcpu_to_pcpu_soft, -1, sizeof(int) * b_info->max_vcpus);
> +
> + libxl_bitmap_set_none(&b_info->cpumap_soft);
> + while ((buf = xlu_cfg_get_listitem(cpus_soft, n_cpus)) != NULL) {
> + i = atoi(buf);
> + if (!libxl_bitmap_cpu_valid(&b_info->cpumap_soft, i)) {
> + fprintf(stderr, "cpu %d illegal\n", i);
> + exit(1);
> + }
> + libxl_bitmap_set(&b_info->cpumap_soft, i);
> + if (n_cpus < b_info->max_vcpus)
> + vcpu_to_pcpu_soft[n_cpus] = i;
> + n_cpus++;
> + }
> +
> + /* We have a soft affinity map, disable automatic placement */
> + libxl_defbool_set(&b_info->numa_placement, false);
> + }
> + else if (!xlu_cfg_get_string (config, "cpus_soft", &buf, 0)) {
> + char *buf2 = strdup(buf);
> +
> + if (libxl_node_bitmap_alloc(ctx, &b_info->cpumap_soft, 0)) {
> + fprintf(stderr, "Unable to allocate cpumap_soft\n");
> + exit(1);
> + }
> +
> + libxl_bitmap_set_none(&b_info->cpumap_soft);
> + if (vcpupin_parse(buf2, &b_info->cpumap_soft))
> + exit(1);
> + free(buf2);
> +
> + libxl_defbool_set(&b_info->numa_placement, false);
> + }
> +
> if (!xlu_cfg_get_long (config, "memory", &l, 0)) {
> b_info->max_memkb = l * 1024;
> b_info->target_memkb = b_info->max_memkb;
> @@ -2183,6 +2229,35 @@ start:
> free(vcpu_to_pcpu); vcpu_to_pcpu = NULL;
> }
>
> + /* And do the same for single vcpu to soft-affinity mapping */
Another opportunity to refactor common code then?
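Maybe along these lines (again only a sketch, the helper name is
invented; the setter parameter is there because I'm not sure what the
per-vcpu soft-affinity call looks like with the new libxl API):

/* Apply a stashed vcpu->pcpu array: each entry != -1 becomes a
 * singleton bitmap, everything else means "any pcpu".  Error handling
 * is left to the caller. */
static int apply_vcpu_to_pcpu(libxl_ctx *ctx, uint32_t domid,
                              const int *vcpu_to_pcpu, int max_vcpus,
                              int (*setter)(libxl_ctx *, uint32_t,
                                            uint32_t, libxl_bitmap *))
{
    libxl_bitmap cpumap;
    int i, rc = 0;

    if (libxl_cpu_bitmap_alloc(ctx, &cpumap, 0))
        return ERROR_FAIL;

    for (i = 0; i < max_vcpus; i++) {
        if (vcpu_to_pcpu[i] != -1) {
            /* This vcpu was given a specific pcpu: singleton bitmap */
            libxl_bitmap_set_none(&cpumap);
            libxl_bitmap_set(&cpumap, vcpu_to_pcpu[i]);
        } else {
            /* No specific pcpu: allow/prefer all of them */
            libxl_bitmap_set_any(&cpumap);
        }
        if (setter(ctx, domid, i, &cpumap)) {
            fprintf(stderr, "setting affinity failed on vcpu `%d'.\n", i);
            rc = ERROR_FAIL;
            break;
        }
    }

    libxl_bitmap_dispose(&cpumap);
    return rc;
}

The existing hard-affinity loop would then become a single call passing
libxl_set_vcpuaffinity as the setter, and the soft case would pass
whatever the new soft-affinity setter ends up being.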
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel