
Re: [Xen-devel] Xen4.2 S3 regression?



I went back to an old patch thread, since it was in this same function that you made reference to:
http://markmail.org/message/qpnmiqzt5bngeejk

I noticed that I was not seeing the "Breaking vcpu affinity" printk, so I tried applying the change proposed in that thread.

It does seem to work around this pinning problem. However, I'm not sure that it is the "right thing" to be doing.

Do you have any thoughts on this?
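
For reference, the path I was expecting to hit is roughly the following (paraphrased from memory of xen/common/schedule.c, so the exact code may differ):

/* cpu_disable_scheduler(): if taking this CPU offline leaves a vcpu
 * with no online CPU left in its affinity mask, widen the mask and
 * log the "Breaking vcpu affinity" message I was looking for. */
cpumask_and(&online_affinity, v->cpu_affinity, c->cpu_valid);
if ( cpumask_empty(&online_affinity) &&
     cpumask_test_cpu(cpu, v->cpu_affinity) )
{
    printk("Breaking vcpu affinity for domain %d vcpu %d\n",
           v->domain->domain_id, v->vcpu_id);
    cpumask_setall(v->cpu_affinity);
}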


On Tue, Sep 25, 2012 at 7:56 AM, Ben Guthro <ben@xxxxxxxxxx> wrote:

On Tue, Sep 25, 2012 at 3:00 AM, Jan Beulich <JBeulich@xxxxxxxx> wrote:
>>> On 24.09.12 at 23:12, Ben Guthro <ben@xxxxxxxxxx> wrote:
> Here's my "Big hammer" debugging patch.
>
> If I force the cpu to be scheduled on CPU0 when the appropriate cpu is not
> online, I can resume properly.
>
> Clearly this is not the proper solution, and I'm sure the fix is subtle.
> I'm not seeing it right now though. Perhaps tomorrow morning.
> If you have any ideas, I'm happy to run tests then.

I can't see how the printk() you add in the patch would ever get
reached with the other adjustment you do there.

Apologies. I failed to separate prior debugging in this patch from the "big hammer" fix.
 
A debug build,
as Keir suggested, would not only get the stack trace right, but
would also cause the ASSERT() right after your first modification
to _csched_cpu_pick() to actually do something (and likely trigger).

Indeed. I was using non-debug builds for two reasons that, in hindsight, may not be the best of reasons:
1. It was the default.
2. Mukesh's kdb debugger, which I had been using previously and had not disabled, requires debug to be off.
 
The stack from a debug build can be found below.
It did, indeed, trigger the ASSERT, as you predicted.
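
For context, the failing check sits roughly here (paraphrased from _csched_cpu_pick() in sched_credit.c; the exact code may differ, but line 477 is the ASSERT):

/* Restrict the vcpu's affinity to the online CPUs of its cpupool.
 * If the vcpu is pinned to a CPU that is still offline at this point
 * of resume, 'cpus' comes up empty and the ASSERT fires. */
online = cpupool_scheduler_cpumask(vc->domain->cpupool);
cpumask_and(&cpus, online, vc->cpu_affinity);
cpu = cpumask_test_cpu(vc->processor, &cpus)
        ? vc->processor
        : cpumask_cycle(vc->processor, &cpus);
ASSERT(!cpumask_empty(&cpus) && cpumask_test_cpu(cpu, &cpus));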


(XEN) Finishing wakeup from ACPI S3 state.
(XEN) Enabling non-boot CPUs  ...
(XEN) Booting processor 1/1 eip 8a000
(XEN) Initializing CPU#1
(XEN) CPU: L1 I cache: 32K, L1 D cache: 32K
(XEN) CPU: L2 cache: 3072K
(XEN) CPU: Physical Processor ID: 0
(XEN) CPU: Processor Core ID: 1
(XEN) CMCI: CPU1 has no CMCI support
(XEN) CPU1: Thermal monitoring enabled (TM2)
(XEN) CPU1: Intel(R) Core(TM)2 Duo CPU     P8400  @ 2.26GHz stepping 06
(XEN) microcode: CPU1 updated from revision 0x60c to 0x60f, date = 2010-09-29 
[   82.310025] ACPI: Low-level resume complete
[   82.310025] PM: Restoring platform NVS memory
[   82.310025] Enabling non-boot CPUs ...
[   82.310025] installing Xen timer for CPU 1
[   82.310025] cpu 1 spinlock event irq 279
(XEN) Assertion '!cpumask_empty(&cpus) && cpumask_test_cpu(cpu, &cpus)' failed at sched_credit.c:477
(XEN) ----[ Xen-4.2.1-pre  x86_64  debug=y  Tainted:    C ]----
(XEN) CPU:    1
(XEN) RIP:    e008:[<ffff82c48011a35a>] _csched_cpu_pick+0x135/0x552
(XEN) RFLAGS: 0000000000010002   CONTEXT: hypervisor
(XEN) rax: 0000000000000001   rbx: 0000000000000004   rcx: 0000000000000004
(XEN) rdx: 000000000000000f   rsi: 0000000000000004   rdi: 0000000000000000
(XEN) rbp: ffff8301355d7dd8   rsp: ffff8301355d7d08   r8:  0000000000000000
(XEN) r9:  000000000000003e   r10: ffff82c480231700   r11: 0000000000000246
(XEN) r12: ffff82c480261b20   r13: 0000000000000001   r14: ffff82c480301a60
(XEN) r15: ffff83013a542068   cr0: 000000008005003b   cr4: 00000000000026f0
(XEN) cr3: 0000000131a05000   cr2: 0000000000000000
(XEN) ds: 002b   es: 002b   fs: 0000   gs: 0000   ss: e010   cs: e008
(XEN) Xen stack trace from rsp=ffff8301355d7d08:
(XEN)    0100000131a05000 ffff8301355d7d40 0000000000000082 0000000000000002
(XEN)    ffff8300bd503000 0000000000000001 0000000000000297 ffff8301355d7d58
(XEN)    ffff82c480125499 ffff830138216000 ffff8301355d7d98 5400000000000002
(XEN)    0000000000000286 ffff8301355d7d88 ffff82c480125499 ffff830138216000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    ffff830134ca6a50 ffff83013a542068 ffff83013a542068 0000000000000001
(XEN)    ffff82c480301a60 ffff83013a542068 ffff8301355d7de8 ffff82c48011a785
(XEN)    ffff8301355d7e58 ffff82c480123519 ffff82c480301a60 ffff82c480301a60
(XEN)    ffff82c480301a60 ffff8300bd503000 0000000000503060 0000000000000246
(XEN)    ffff82c480127c31 ffff8300bd503000 ffff82c480301a60 ffff82c4802ebd40
(XEN)    ffff83013a542068 ffff88003fc8e820 ffff8301355d7e88 ffff82c4801237d3
(XEN)    fffffffffffffffe ffff8301355ca000 ffff8300bd503000 0000000000000000
(XEN)    ffff8301355d7ef8 ffff82c480106335 ffff8301355d7f18 ffffffff810030e1
(XEN)    ffff8300bd503000 0000000000000000 ffff8301355d7f08 ffff82c480185390
(XEN)    ffffffff81aafd32 ffff8300bd503000 0000000000000001 0000000000000000
(XEN)    0000000000000000 ffff88003fc8e820 00007cfecaa280c7 ffff82c480227348
(XEN)    ffffffff8100130a 0000000000000018 ffff88003fc8e820 0000000000000000
(XEN)    0000000000000000 0000000000000001 ffff88003976fda0 ffff88003fc8bdc0
(XEN)    0000000000000246 ffff88003976fe60 00000000ffffffff 0000000000000000
(XEN)    0000000000000018 ffffffff8100130a 0000000000000000 0000000000000001
(XEN) Xen call trace:
(XEN)    [<ffff82c48011a35a>] _csched_cpu_pick+0x135/0x552
(XEN)    [<ffff82c48011a785>] csched_cpu_pick+0xe/0x10
(XEN)    [<ffff82c480123519>] vcpu_migrate+0x19f/0x346
(XEN)    [<ffff82c4801237d3>] vcpu_force_reschedule+0xa4/0xb6
(XEN)    [<ffff82c480106335>] do_vcpu_op+0x2c9/0x452
(XEN)    [<ffff82c480227348>] syscall_enter+0xc8/0x122
(XEN)    
(XEN) 
(XEN) ****************************************
(XEN) Panic on CPU 1:
(XEN) Assertion '!cpumask_empty(&cpus) && cpumask_test_cpu(cpu, &cpus)' failed at sched_credit.c:477
(XEN) ****************************************
(XEN) 
(XEN) Reboot in five seconds...


Anyway, this might be connected to cpu_disable_scheduler() not
having a counterpart to restore the affinity it broke for pinned
domains (for non-pinned ones I believe this behavior is intentional,
albeit not ideal).

Jan
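
A counterpart along the lines Jan describes might, purely as a sketch, remember the mask at the point it gets broken and put it back once the pinned CPU is online again. The saved-mask field, flag, and restore hook below are hypothetical, not existing code:

/* Hypothetical sketch only -- neither the field nor the hook exist
 * in the tree; shown just to illustrate the "counterpart" idea. */

/* In cpu_disable_scheduler(), when breaking affinity for a pinned vcpu: */
cpumask_copy(v->cpu_affinity_saved, v->cpu_affinity);  /* hypothetical field */
v->affinity_broken = 1;                                /* hypothetical flag */
cpumask_setall(v->cpu_affinity);

/* In a CPU_ONLINE notifier (e.g. when the CPU comes back on resume): */
if ( v->affinity_broken &&
     cpumask_test_cpu(cpu, v->cpu_affinity_saved) )
{
    cpumask_copy(v->cpu_affinity, v->cpu_affinity_saved);
    v->affinity_broken = 0;
}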




 

