
Re: [Xen-devel] Shutdown panic in disable_nonboot_cpus after cpupool-numa-split



On 07.07.2014 14:38, Jürgen Groß wrote:
> On 07/07/2014 02:00 PM, Andrew Cooper wrote:
>> On 07/07/14 12:33, Stefan Bader wrote:
>>> I recently noticed that I get a panic (rebooting the system) on
>>> shutdown in some cases. This happened only on my AMD system and also
>>> not all the time. Finally realized that it is related to the use of
>>> cpupool-numa-split (libxl with xen-4.4; maybe 4.3 as well, but not
>>> 100% sure).
>>>
>>> What happens is that on shutdown the hypervisor runs
>>> disable_nonboot_cpus, which calls cpu_down for each online cpu. There
>>> is a BUG_ON in the code for the case of cpu_down returning -EBUSY.
>>> This happens in my case as soon as the first cpu that has been moved
>>> to pool-1 by cpupool-numa-split is attempted. The error is returned
>>> by running the notifier_call_chain, and I suspect that ends up
>>> calling cpupool_cpu_remove, which always returns -EBUSY for cpus not
>>> in pool0.
>>>
>>> I am not sure which end needs to be fixed, but looping over all
>>> online cpus in disable_nonboot_cpus sounds plausible. So maybe the
>>> check for pool-0 in cpupool_cpu_remove is wrong...?
>>>
>>> -Stefan
>>
>> Hmm yes - this looks completely broken.
>>
>> cpupool_cpu_remove() has only a single caller, from cpu_down(), and
>> will unconditionally fail for cpus outside of the default pool.
>>
>> It is not obvious at all how this is supposed to work, and the comment
>> beside cpupool_cpu_remove() doesn't help.
>>
>> Can you try the following (compile-tested only) patch, which plausibly
>> looks like it might DTRT?  The for_each_cpupool() is a little nasty,
>> but there appears to be no cpu_to_cpupool mapping available.
> 
> Your patch has the disadvantage to support hot-unplug of the last cpu in
> a cpupool. The following should work, however:

"Disadvantage" and "support" together sounded a bit confusing, but I think it
means that hot-unplugging the last cpu of a pool is bad and should not work.
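
Just to make the failing path concrete, here is a simplified sketch of what I
believe happens on shutdown, paraphrased from my reading of the code rather
than quoted verbatim:

    /*
     * Simplified sketch (not the literal hypervisor code): on shutdown,
     * disable_nonboot_cpus() takes every non-boot cpu down and treats
     * -EBUSY as fatal.
     */
    void disable_nonboot_cpus(void)
    {
        unsigned int cpu;
        int error;

        for_each_online_cpu ( cpu )
        {
            if ( cpu == 0 )
                continue;

            /* cpu_down() runs the notifier_call_chain, which ends up in
             * cpupool_cpu_remove(); that returns -EBUSY for any cpu that
             * is not in pool-0 ... */
            error = cpu_down(cpu);

            /* ... and this BUG_ON is what reboots the machine. */
            BUG_ON(error == -EBUSY);
        }
    }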

> 
> diff --git a/xen/common/cpupool.c b/xen/common/cpupool.c
> index 4a0e569..73249d3 100644
> --- a/xen/common/cpupool.c
> +++ b/xen/common/cpupool.c
> @@ -471,12 +471,24 @@ static void cpupool_cpu_add(unsigned int cpu)
>   */
>  static int cpupool_cpu_remove(unsigned int cpu)
>  {
> -    int ret = 0;
> +    int ret = -EBUSY;
> +    struct cpupool **c;
> 
>      spin_lock(&cpupool_lock);
> -    if ( !cpumask_test_cpu(cpu, cpupool0->cpu_valid))
> -        ret = -EBUSY;
> +    if ( cpumask_test_cpu(cpu, cpupool0->cpu_valid) )
> +        ret = 0;
>      else
> +    {
> +        for_each_cpupool(c)
> +        {
> +            if ( cpumask_test_cpu(cpu, (*c)->cpu_suspended ) )

The rest seems to keep the semantics the same as before (though does that mean
unplugging the last cpu of pool-0 is ok?). But why test for cpu_suspended here
to succeed, and not cpu_valid? (See the sketch below the quoted patch for what
I would have expected.)


> +            {
> +                ret = 0;
> +                break;
> +            }
> +        }
> +    }
> +    if ( !ret )
>          cpumask_set_cpu(cpu, &cpupool_locked_cpus);
>      spin_unlock(&cpupool_lock);
> 
> 
> 
> Juergen
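
For what it's worth, the check I would have expected in that branch is against
cpu_valid, roughly like this (an untested sketch only, just to illustrate the
question above):

    else
    {
        for_each_cpupool(c)
        {
            /* untested sketch: test cpu_valid rather than cpu_suspended,
             * i.e. "the cpu currently belongs to some pool" */
            if ( cpumask_test_cpu(cpu, (*c)->cpu_valid) )
            {
                ret = 0;
                break;
            }
        }
    }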

