
Re: [Xen-devel] [PATCH 1/5] x86: allow specifying the NUMA nodes Dom0 should run on

>>> On 27.02.15 at 15:54, <dario.faggioli@xxxxxxxxxx> wrote:
> On Fri, 2015-02-27 at 10:50 +0000, Jan Beulich wrote:
>> >>> On 27.02.15 at 11:04, <dario.faggioli@xxxxxxxxxx> wrote:
>> > On Fri, 2015-02-27 at 08:46 +0000, Jan Beulich wrote:
>> >> I'm simply adjusting what sched_init_vcpu() did, which is alter
>> >> hard affinity conditionally on is_pinned and soft affinity
>> >> unconditionally.
>> >> 
>> > Ok, I understand the idea behind this better now, thanks.
>> > [...]
>> > Setting soft affinity as a superset of (in the former case) or equal to
>> > (in the latter) hard affinity is just pure overhead, when in the
>> > scheduler.
>> Then why does sched_init_vcpu() do what it does? If you want to
>> alter that, I'm fine with altering it here.
> It does that, but, in there, soft affinity is unconditionally set to
> 'all bits set'. Then, in the scheduler, if we find out that the soft
> affinity mask is fully set, we just skip the soft affinity balancing
> step.
> The idea is that, whether the mask is full because no one touched this
> default, or because it has been manually set like that, there is nothing
> to do at the soft affinity balancing level.
> So, you actually are right: rather than not touching soft affinity, as I
> said in the previous email, I think we should set hard affinity
> conditionally to is_pinned, as in the patch, and then unconditionally
> set soft affinity to all, as in sched_init_vcpu().

I.e. effectively not touching it anyway (because just before it
got set to "all" by sched_init_vcpu()). I guess instead of
removing the line, I'll put it in a comment.
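To make the agreed-upon behaviour concrete, here is a small self-contained sketch of the logic being discussed. It is only illustrative: cpumask_t is simplified to a 64-bit mask, and the names (init_affinity, soft_balancing_needed, pin_mask) are hypothetical, not Xen's actual API.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Simplified stand-in for Xen's cpumask_t. */
typedef uint64_t cpumask_t;
#define CPUMASK_ALL UINT64_MAX

struct vcpu_affinity {
    cpumask_t hard;   /* CPUs the vCPU may run on */
    cpumask_t soft;   /* CPUs the vCPU prefers to run on */
};

/*
 * Mirrors what sched_init_vcpu() is described as doing in the thread:
 * hard affinity is set conditionally on is_pinned, while soft affinity
 * is unconditionally set to "all bits set".
 */
static void init_affinity(struct vcpu_affinity *a, bool is_pinned,
                          cpumask_t pin_mask)
{
    a->hard = is_pinned ? pin_mask : CPUMASK_ALL;
    a->soft = CPUMASK_ALL;  /* unconditional default */
}

/*
 * The scheduler can skip the soft-affinity balancing step whenever the
 * soft mask is full, whether that is the untouched default or a mask
 * manually set to "all": either way there is nothing to balance.
 */
static bool soft_balancing_needed(const struct vcpu_affinity *a)
{
    return a->soft != CPUMASK_ALL;
}
```

A full soft mask thus costs nothing at balancing time, which is why leaving the unconditional "all" assignment in place (or reducing it to a comment) is effectively a no-op.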


Xen-devel mailing list


