
Re: [Xen-devel] [PATCH 1/5] x86: allow specifying the NUMA nodes Dom0 should run on



On Fri, 2015-02-27 at 08:46 +0000, Jan Beulich wrote:
> >>> On 26.02.15 at 18:14, <dario.faggioli@xxxxxxxxxx> wrote:
> > On Thu, 2015-02-26 at 13:52 +0000, Jan Beulich wrote:
> >> +### dom0\_nodes
> >> +
> >> +> `= <integer>[,...]`
> >> +
> >> +Specify the NUMA nodes to place Dom0 on. Defaults for vCPU-s created
> >> +and memory assigned to Dom0 will be adjusted to match the node
> >> +restrictions set up here. Note that the values to be specified here are
> >> +ACPI PXM ones, not Xen internal node numbers.
> >> +
> > Why use PXM ids? It might be me being much more used to work with NUMA
> > node ids, but wouldn't the other way round be more consistent (almost
> > everything the user interacts with after boot speak node ids) and easier
> > for the user to figure things out (e.g., with tools like numactl on
> > baremetal)?
> 
> This way behavior doesn't change if internally in the hypervisor we
> need to change the mapping from PXMs to node IDs.
> 
Ok, I see the value of this. I'm still a bit concerned about the fact
that everything else "speaks" NUMA node IDs, but it's probably just me
being much more used to those than to PXMs. :-)
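
Just to make sure I understand the intended flow, here is a minimal sketch
of how I picture the translation (dom0_pxms[] / dom0_nr_pxms are made-up
names for whatever ends up holding the parsed values; pxm_to_node() is the
SRAT lookup):

static nodemask_t __initdata dom0_nodemask;

static void __init dom0_resolve_nodes(void)
{
    unsigned int i;

    for ( i = 0; i < dom0_nr_pxms; i++ )
    {
        /* Translate the user supplied PXM into a Xen node ID. */
        nodeid_t node = pxm_to_node(dom0_pxms[i]);

        if ( node == NUMA_NO_NODE )
            printk(XENLOG_WARNING "dom0_nodes: PXM %u not in SRAT\n",
                   dom0_pxms[i]);
        else
            node_set(node, dom0_nodemask);
    }
}

That way the user always speaks PXMs, and any internal renumbering of node
IDs is absorbed by pxm_to_node().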

> >> +static struct vcpu *__init setup_vcpu(struct domain *d, unsigned int vcpu_id,
> >> +                                      unsigned int cpu)
> >> +{
> >> +    struct vcpu *v = alloc_vcpu(d, vcpu_id, cpu);
> >> +
> >> +    if ( v )
> >> +    {
> >> +        if ( !d->is_pinned )
> >> +            cpumask_copy(v->cpu_hard_affinity, &dom0_cpus);
> >> +        cpumask_copy(v->cpu_soft_affinity, &dom0_cpus);
> >> +    }
> >> +
> > About this, for DomUs, now that we have soft affinity available, what we
> > do is set only soft affinity to match the NUMA placement. I think I see
> > and agree why we want to be 'more strict' in Dom0, but I felt like it
> > was worth to point out the difference in behaviour (should it be
> > documented somewhere?).
> 
> I'm simply adjusting what sched_init_vcpu() did, which is alter
> hard affinity conditionally on is_pinned and soft affinity
> unconditionally.
> 
Ok, I understand the idea behind this better now, thanks.

So, with the following boot command line (i.e., with 'dom0_vcpus_pin'):
com1=115200,8n1 dom0_vcpus_pin dom0_nodes=1 sched=credit noreboot 
dom0_mem=512M,max:512M dom0_max_vcpus=4 console=com1

I get this:
Name                                ID  VCPU   CPU State   Time(s) Affinity (Hard / Soft)
Domain-0                             0     0    8   -b-       4.4  8 / 8-15
Domain-0                             0     1    9   -b-       4.0  9 / 8-15
Domain-0                             0     2   10   r--       4.1  10 / 8-15
Domain-0                             0     3   11   -b-       3.6  11 / 8-15

With the following boot command line (i.e., without 'dom0_vcpus_pin'):
com1=115200,8n1 dom0_nodes=1 sched=credit noreboot dom0_mem=512M,max:512M 
dom0_max_vcpus=4 console=com1

I get this:
Name                                ID  VCPU   CPU State   Time(s) Affinity (Hard / Soft)
Domain-0                             0     0    8   r--       4.3  8-15 / 8-15
Domain-0                             0     1   12   -b-       2.9  8-15 / 8-15
Domain-0                             0     2   10   r--       3.5  8-15 / 8-15
Domain-0                             0     3   14   -b-       2.9  8-15 / 8-15

Setting soft affinity to a superset of (in the former case) or equal to
(in the latter case) the hard affinity is pure overhead once we are inside
the scheduler.

In fact, if the scheduler sees that a soft affinity is defined, it goes
through the load balancing / vcpu placement logic twice: the first time
using the soft affinity mask, the second time using the hard affinity one.
Strictly speaking, the first pass uses 'soft & hard', which in these cases
is exactly equal to hard, and that is why I call it pure overhead.
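
To make the cost concrete, the shape of that logic is roughly the following
(just a sketch, not the actual credit scheduler code); the per-pass mask is
built like this, and the whole placement/balancing step is then repeated
once per pass:

static void balance_cpumask(const struct vcpu *v, unsigned int pass,
                            cpumask_t *mask)
{
    if ( pass == 0 )
        /* Soft pass: soft & hard, which degenerates to hard when soft
         * is a superset of (or equal to) hard. */
        cpumask_and(mask, v->cpu_soft_affinity, v->cpu_hard_affinity);
    else
        /* Hard pass: hard affinity only. */
        cpumask_copy(mask, v->cpu_hard_affinity);
}

When the two masks come out identical, the first pass is wasted work.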

I should probably add checks in the scheduler to identify such
situations as "no need to consider soft affinity". I thought about this
before, but didn't do it because it means more cpumask_foo() fiddling in
a few hot paths... but of course I can check the relationship between
the hard and soft affinity masks upfront, cache the result in a
bool_t, and use _that_ in the hot paths... what do you think?
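
Something along these lines is what I have in mind (a sketch;
soft_aff_effective is a made-up field name), recomputed whenever either
affinity mask changes:

/* Cache whether soft affinity actually restricts anything beyond hard
 * affinity, so hot paths test a bool_t instead of comparing cpumasks. */
static inline void update_soft_aff_effective(struct vcpu *v)
{
    /*
     * Soft affinity matters only if some CPU allowed by the hard mask
     * is excluded by the soft one, i.e. if hard is not a subset of soft.
     */
    v->soft_aff_effective = !cpumask_subset(v->cpu_hard_affinity,
                                            v->cpu_soft_affinity);
}

The hot paths would then do the soft affinity pass only when
v->soft_aff_effective is set.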

All this being said, I would still avoid putting the system in a
configuration where soft affinity is a superset of, or equal to, hard
affinity, at least not automatically, as I think it can be confusing for
the user (the user can, of course, do that after boot, for Dom0 or DomUs,
but that's another story, I think). So I'm now wondering whether it
wouldn't be better to leave soft affinity alone completely in this patch.

Then, if we want to make it possible to tweak soft affinity, we can
allow for something like "dom0_nodes=soft:1,3" and, in that case, alter
soft affinity only.
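
Just to illustrate (names and exact syntax are hypothetical), the parser
could look for a "soft:" prefix and record whether the node list should
affect soft affinity only:

static unsigned int __initdata dom0_pxms[8], dom0_nr_pxms;
static bool_t __initdata dom0_affinity_soft;

static void __init parse_dom0_nodes(const char *s)
{
    if ( !strncmp(s, "soft:", 5) )
    {
        dom0_affinity_soft = 1;   /* only alter soft affinity */
        s += 5;
    }

    do {
        if ( dom0_nr_pxms >= ARRAY_SIZE(dom0_pxms) )
            break;
        dom0_pxms[dom0_nr_pxms++] = simple_strtoul(s, &s, 0);
    } while ( *s++ == ',' );
}
custom_param("dom0_nodes", parse_dom0_nodes);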

> > BTW, mostly out of curiosity, I've had a few strange issues/conflicts in
> > applying this on top of staging, in order to test it... Was it me doing
> > something very stupid, or was this based on something different?
> 
> Apart from the one patch named in the cover letter there shouldn't
> be any other dependencies. Without you naming the issues you
> encountered, I can't tell.
> 
I see. Never mind then, maybe I messed things up in my various branches...
Sorry for bothering you with this. :-)

Regards,
Dario
