
Re: [Xen-devel] [PATCH] xen/sched: fix onlining cpu with core scheduling active



On 10.03.2020 09:16, Jürgen Groß wrote:
> On 03.03.20 17:04, Jürgen Groß wrote:
>> On 03.03.20 14:31, Jan Beulich wrote:
>>> On 03.03.2020 13:27, Juergen Gross wrote:
>>>> --- a/xen/common/sched/cpupool.c
>>>> +++ b/xen/common/sched/cpupool.c
>>>> @@ -616,7 +616,8 @@ static int cpupool_cpu_add(unsigned int cpu)
>>>>       get_sched_res(cpu)->cpupool = NULL;
>>>>       cpus = sched_get_opt_cpumask(cpupool0->gran, cpu);
>>>> -    if ( cpumask_subset(cpus, &cpupool_free_cpus) )
>>>> +    if ( cpumask_subset(cpus, &cpupool_free_cpus) &&
>>>> +         cpumask_weight(cpus) >= cpupool_get_granularity(cpupool0) )
>>>
>>> Why >=, not ==? And is the other part of the condition needed?
>>
>> I can switch to ==.
>>
>>> Isn't this rather a condition that could be ASSERT()ed, as CPUs
>>> shouldn't move out of the "free" set before reaching the
>>> granularity?
>>
>> Probably, yes. I'll give it some testing and make the change if it
>> succeeds as expected.
> 
> Thinking more about it, I'm inclined to keep testing both conditions.
> If we support cpupools with different granularities, we'll need to
> test that all cpus are free, in case the other sibling has already
> been moved to a cpupool with gran=1.

Ah, yes, makes sense.

Jan
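
For readers following the thread: the agreed-upon check keeps both halves
because each blocks a different scenario. Below is a minimal stand-alone
sketch of that logic, not actual Xen code; the mask helpers, the
may_add_to_pool0() wrapper and the granularity of 2 (a 2-thread core) are
simplified assumptions for illustration:

#include <stdbool.h>
#include <stdio.h>

typedef unsigned int cpumask_t;              /* one bit per CPU */

/* Simplified stand-ins for Xen's cpumask_subset()/cpumask_weight(). */
static bool mask_subset(cpumask_t a, cpumask_t b)
{
    return (a & ~b) == 0;
}

static unsigned int mask_weight(cpumask_t m)
{
    return (unsigned int)__builtin_popcount(m);
}

/* Mirrors the patched condition in cpupool_cpu_add(). */
static bool may_add_to_pool0(cpumask_t cpus, cpumask_t free_cpus,
                             unsigned int gran)
{
    return mask_subset(cpus, free_cpus) && mask_weight(cpus) == gran;
}

int main(void)
{
    unsigned int gran = 2;  /* cpupool0 with core granularity */

    /* Sibling CPU 1 not online yet: the subset test alone would pass,
     * only the weight test blocks a premature add. */
    printf("%d\n", may_add_to_pool0(0x1, 0x1, gran));   /* prints 0 */

    /* Sibling CPU 1 already moved to a gran=1 pool: the weight test
     * alone would pass, only the subset test blocks the add. */
    printf("%d\n", may_add_to_pool0(0x3, 0x1, gran));   /* prints 0 */

    /* Both siblings online and still free: the CPU pair may join. */
    printf("%d\n", may_add_to_pool0(0x3, 0x3, gran));   /* prints 1 */

    return 0;
}

With the == comparison Jan asked for, the check also documents the
invariant that a free sibling mask never exceeds the pool's granularity.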

