
Re: [Xen-devel] [Patch] Call sched_destroy_domain before cpupool_rm_domain.



On 11/4/2013 4:58 AM, Juergen Gross wrote:
> On 04.11.2013 10:26, Dario Faggioli wrote:
>> On lun, 2013-11-04 at 07:30 +0100, Juergen Gross wrote:
>>> On 04.11.2013 04:03, Nathan Studer wrote:
>>>> From: Nathan Studer <nate.studer@xxxxxxxxxxxxxxx>
>>>>
>>>> The domain destruction code removes a domain from its cpupool
>>>> before attempting to destroy its scheduler information.  Since
>>>> the scheduler framework uses the domain's cpupool information
>>>> to decide which scheduler ops to use, this results in the
>>>> wrong scheduler's destroy domain function being called
>>>> when the cpupool scheduler and the initial scheduler are
>>>> different.
>>>>
>>>> Correct this by destroying the domain's scheduling information
>>>> before removing it from the pool.
>>>>
>>>> Signed-off-by: Nathan Studer <nate.studer@xxxxxxxxxxxxxxx>
>>>
>>> Reviewed-by: Juergen Gross <juergen.gross@xxxxxxxxxxxxxx>
>>>
>> I think this is a candidate for backports too, isn't it?
>>
>> Nathan, what was happening without this patch? Are you able to quickly
>> figure out which previous Xen versions suffer from the same bug?

Various things:

If I used the credit scheduler in Pool-0 and the arinc653 scheduler in another
cpupool, it would:
1.  Hit a BUG_ON in the arinc653 scheduler.
2.  Hit an assert in the scheduling framework code.
3.  Or crash in the credit scheduler's csched_free_domdata function.

The latter clued me in that the wrong scheduler's destroy function was somehow
being called.

If I used the credit2 scheduler in the other pool, I would only ever see the 
latter.

Similarly, if I used the sedf scheduler in the other pool, I would only ever see
the latter.  However, when using the sedf scheduler I would have to create and
destroy the domain twice instead of just once.
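
For anyone else tracing this: the scheduler framework picks the ops from
d->cpupool, so once the domain has been dropped from its pool the lookup falls
back to the boot (Pool-0) scheduler.  A rough sketch of the relevant logic in
xen/common/schedule.c (paraphrased from memory, not verbatim):

    /* Paraphrased sketch, not the exact source: the ops used for a domain
     * come from its cpupool; a NULL cpupool falls back to the boot
     * scheduler's ops. */
    #define DOM2OP(_d) (((_d)->cpupool == NULL) ? &ops : ((_d)->cpupool->sched))

    void sched_destroy_domain(struct domain *d)
    {
        /* If cpupool_rm_domain() has already run, d->cpupool is NULL here
         * and this invokes the boot scheduler's destroy hook on domain
         * data that was allocated by the pool's scheduler. */
        SCHED_OP(DOM2OP(d), destroy_domain, d);
    }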

> 
> In theory this bug is present since 4.1.
> 
> OTOH it will be hit only with arinc653 scheduler in a cpupool other than
> Pool-0. And I don't see how this is being supported by arinc653 today 
> (pick_cpu
> will always return 0).

Correct, the arinc653 scheduler currently does not work with cpupools.  We are
working on remedying that though, which is how I ran into this.  I would have
just wrapped this patch in with the upcoming arinc653 ones, if I had not run
into the same issue with the other schedulers.

> 
> All other schedulers will just call xfree() for the domain-specific data (and
> maybe update some statistic data, which is not critical).

The credit and credit2 schedulers do a bit more than that in their free_domdata
functions.

The credit scheduler frees the node_affinity_cpumask contained in the domain
data and the credit2 scheduler deletes a list element contained in the domain
data.  Since with this bug they are accessing structures that do not belong to
them, bad things happen.
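
To make that concrete, credit's free hook is roughly the following (paraphrased
sketch, not the exact source); when it is handed domain data allocated by
another scheduler, the cpumask free operates on memory that was never a cpumask
allocation:

    /* Paraphrased sketch of csched_free_domdata() in
     * xen/common/sched_credit.c (4.3-era), not the exact source. */
    static void
    csched_free_domdata(const struct scheduler *ops, void *data)
    {
        struct csched_dom *sdom = data;

        /* Only valid if 'data' really is a csched_dom; with this bug it
         * can be another scheduler's per-domain structure. */
        free_cpumask_var(sdom->node_affinity_cpumask);
        xfree(data);
    }

Credit2's hook similarly does a list_del() on a list element embedded in its
own per-domain structure before freeing, which is where the invalid pointer
dereference comes from.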

With the credit scheduler in Pool-0, the result should be an invalid free and an
eventual crash.

With the credit2 scheduler in Pool-0, the effects might be a bit more
unpredictable.  At best it should result in an invalid pointer dereference.

Likewise, since the other schedulers do not do this additional work, there would
probably be other issues if the sedf or arinc653 scheduler were running in Pool-0
and one of the credit schedulers were running in the other pool.  I do not know
enough about the credit schedulers to make any predictions about what would
happen in that case, though.
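
For reference, the fix is just a reordering in the domain destruction path in
xen/common/domain.c, along these lines (sketch, surrounding code elided):

    /* Destroy the scheduler-private data while d->cpupool still points at
     * the pool the domain belonged to, and only then drop the domain from
     * the pool.  Previously the two calls were the other way around. */
    sched_destroy_domain(d);
    cpupool_rm_domain(d);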

> 
> 
> Juergen
> 




 

