
Re: [Xen-devel] [PATCH] xen/sched: fix cpu offlining with core scheduling



On 03.03.2020 17:20, Jürgen Groß wrote:
> On 03.03.20 17:05, Jürgen Groß wrote:
>> On 03.03.20 14:45, Jan Beulich wrote:
>>> On 03.03.2020 13:30, Juergen Gross wrote:
>>>> @@ -2538,7 +2552,10 @@ static void sched_slave(void)
>>>>       next = sched_wait_rendezvous_in(prev, &lock, cpu, now);
>>>>       if ( !next )
>>>> +    {
>>>> +        rcu_read_unlock(&sched_res_rculock);
>>>>           return;
>>>> +    }
>>>
>>> This and ...
>>>
>>>> @@ -2599,7 +2616,10 @@ static void schedule(void)
>>>>           cpumask_raise_softirq(mask, SCHED_SLAVE_SOFTIRQ);
>>>>           next = sched_wait_rendezvous_in(prev, &lock, cpu, now);
>>>>           if ( !next )
>>>> +        {
>>>> +            rcu_read_unlock(&sched_res_rculock);
>>>>               return;
>>>> +        }
>>>
>>> ... this look like independent fixes, as on Arm,
>>> sched_wait_rendezvous_in() can already return NULL. If they get
>>> folded into here, I think the description should clarify that
>>> these are orthogonal to the rest.
>>
>> Yeah, probably better to split the patch.
> 
> Oh, this patch was wrong: up to now sched_wait_rendezvous_in() has always
> been responsible for dropping sched_res_rculock, so I should do that in
> the new NULL return case, too.

Oh, through its call to sched_context_switch(). I guess both functions
should gain a comment documenting this aspect of their behavior.
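
Something along these lines perhaps (just a rough, untested sketch to
illustrate what I mean; the signature is as I recall it from schedule.c,
and "abandon" is merely a made-up placeholder for the new early-exit
condition):

    /*
     * Rendezvous of the cpus of a scheduling resource before picking the
     * next unit to run.
     *
     * Returns the vcpu to switch to, or NULL if the caller is to simply
     * return.  In the NULL case sched_res_rculock has already been
     * dropped, either directly or via sched_context_switch(), so callers
     * must not unlock it again.
     */
    static struct vcpu *sched_wait_rendezvous_in(struct sched_unit *prev,
                                                 spinlock_t **lock, int cpu,
                                                 s_time_t now)
    {
        /* ... */

        if ( abandon )  /* placeholder for the new early-exit condition */
        {
            /* Keep the locking contract: drop the lock before returning. */
            rcu_read_unlock(&sched_res_rculock);
            return NULL;
        }

        /* ... */
    }

plus a matching sentence ahead of sched_context_switch(), stating that it
drops sched_res_rculock before returning.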

Jan
