
Re: [Xen-devel] Xen crash after S3 suspend - Xen 4.13 and newer



On Tue, Sep 29, 2020 at 05:27:48PM +0200, Jürgen Groß wrote:
> On 29.09.20 17:16, Marek Marczykowski-Górecki wrote:
> > On Tue, Sep 29, 2020 at 05:07:11PM +0200, Jürgen Groß wrote:
> > > On 29.09.20 16:27, Marek Marczykowski-Górecki wrote:
> > > > On Mon, Mar 23, 2020 at 01:09:49AM +0100, Marek Marczykowski-Górecki wrote:
> > > > > On Thu, Mar 19, 2020 at 01:28:10AM +0100, Dario Faggioli wrote:
> > > > > > [Adding Juergen]
> > > > > > 
> > > > > > On Wed, 2020-03-18 at 23:10 +0100, Marek Marczykowski-Górecki wrote:
> > > > > > > On Wed, Mar 18, 2020 at 02:50:52PM +0000, Andrew Cooper wrote:
> > > > > > > > On 18/03/2020 14:16, Marek Marczykowski-Górecki wrote:
> > > > > > > > > Hi,
> > > > > > > > > 
> > > > > > > > > In my test setup (inside KVM with nested virt enabled), I rather
> > > > > > > > > frequently get a Xen crash on resume from S3. Full message below.
> > > > > > > > > 
> > > > > > > > > This is Xen 4.13.0, with some patches, including "sched: fix
> > > > > > > > > resuming from S3 with smt=0".
> > > > > > > > > 
> > > > > > > > > Contrary to the previous issue, this one does not always happen -
> > > > > > > > > I would say in about 40% of cases on this setup, but very rarely
> > > > > > > > > on a physical setup.
> > > > > > > > > 
> > > > > > > > > This is _without_ core scheduling enabled, and also with smt=off.
> > > > > > > > > 
> > > > > > > > > Do you think it would be any different on xen-unstable? I can try,
> > > > > > > > > but it isn't trivial in this setup, so I'd ask first.
> > > > > > > > > 
> > > > > > Well, Juergen has fixed quite a few issues.
> > > > > > 
> > > > > > Most of them were triggering with core-scheduling enabled, and I
> > > > > > don't recall any of them that looked similar or related to this.
> > > > > > 
> > > > > > Still, it's possible that the same issue causes different symptoms,
> > > > > > and hence that one of the patches might fix this too.
> > > > > 
> > > > > I've tested on master (d094e95fb7c), and reproduced exactly the same
> > > > > crash (pasted below for completeness).
> > > > > But there is more: additionally, in most (all?) cases after resume I've
> > > > > got a soft lockup in Linux dom0 in smp_call_function_single() - see
> > > > > below. It didn't happen before, and the only change was Xen 4.13 ->
> > > > > master.
> > > > > 
> > > > > Xen crash:
> > > > > 
> > > > > (XEN) Assertion 'c2rqd(sched_unit_master(unit)) == svc->rqd' failed at credit2.c:2133
> > > > 
> > > > Juergen, any idea about this one? This is also happening on the current
> > > > stable-4.14 (28855ebcdbfa).
> > > > 
> > > 
> > > Oh, sorry I didn't come back to this issue.
> > > 
> > > I suspect this is related to stop_machine_run() being called during
> > > suspend(), as I'm seeing very sporadic issues when offlining and then
> > > onlining cpus with core scheduling being active (it seems as if the
> > > dom0 vcpu doing the cpu online activity is sometimes using an old
> > > vcpu state).
> > 
> > Note this is a default Xen 4.14 start, so core scheduling is _not_ active:
> 
> The similarity in the two failure cases is that multiple cpus are
> affected by the operations during stop_machine_run().
> 
> > 
> >      (XEN) Brought up 2 CPUs
> >      (XEN) Scheduling granularity: cpu, 1 CPU per sched-resource
> >      (XEN) Adding cpu 0 to runqueue 0
> >      (XEN)  First cpu on runqueue, activating
> >      (XEN) Adding cpu 1 to runqueue 1
> >      (XEN)  First cpu on runqueue, activating
> > 
> > > I wasn't able to catch the real problem despite having tried lots
> > > of approaches using debug patches.
> > > 
> > > Recently I suspected the whole problem could be somehow related to
> > > RCU handling, as stop_machine_run() relies on tasklets which execute
> > > in idle context, and RCU handling is done in idle context, too. So
> > > there might be some kind of use-after-free scenario in case some
> > > memory is freed via RCU while it is still being used by a tasklet.
> > 
> > That sounds plausible, even though I don't really know this area of Xen.
> > 
> > > I "just" need to find some time to verify this suspicion. Any help doing
> > > this would be appreciated. :-)
> > 
> > I do have a setup where I can easily-ish reproduce the issue. If there
> > is some debug patch you'd like me to try, I can do that.
> 
> Thanks. I might come back to that offer, as you are seeing a crash which
> will be much easier to analyze. Catching my error case is much harder, as
> it surfaces some time after the real problem in a non-destructive way
> (usually I see a failure to load a library in a program which has just
> done its job using exactly that library, which is now claimed not to be
> loadable).

Hi,

I'm resurrecting this thread as it was recently mentioned elsewhere. I
can still reproduce the issue on the recent staging branch (9dc687f155).

It fails after the first resume (not always, but frequently enough to
debug it). At least one guest needs to be running - with just (PV) dom0
the crash doesn't happen (at least for the ~8 times in a row I tried).
If the first resume works, the second will (almost?) always fail, but
with different symptoms - dom0 kernel lockups (of at least some of its
vcpus). I haven't debugged this one at all yet.

Any help will be appreciated; I can apply debug patches, change the
configuration, etc.
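
To make the RCU suspicion quoted above a bit more concrete, here is a
minimal, generic sketch of the suspected pattern - a deferred (RCU-style)
free racing with a tasklet that still uses the memory. All names below
are invented for the illustration; none of them are Xen APIs:

/*
 * Generic illustration of the suspected use-after-free: memory handed to
 * a deferred free while a tasklet running from idle context still holds
 * a pointer to it.
 */
#include <stdio.h>
#include <stdlib.h>

struct work_data {
    int cpu;                     /* state the tasklet still needs */
};

static struct work_data *shared;

/* stand-in for an RCU callback freeing the object after a grace period */
static void deferred_free_cb(void)
{
    free(shared);
    shared = NULL;
}

/* stand-in for the stop_machine_run() tasklet running in idle context */
static void tasklet_fn(void)
{
    printf("tasklet working on cpu %d\n", shared->cpu);
}

int main(void)
{
    shared = malloc(sizeof(*shared));
    shared->cpu = 1;

    /* Safe ordering: the tasklet finishes before the deferred free runs. */
    tasklet_fn();
    deferred_free_cb();

    /*
     * The suspected bad interleaving on suspend/resume is the opposite
     * order: the deferred free runs first (grace period deemed over),
     * then the still-pending tasklet dereferences freed memory.  Both
     * pieces of work run from the idle vCPU, which is what makes the
     * ordering fragile.
     */
    return 0;
}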

In the meantime I tried to collect more info with the patch below, but I
can't make much of it. Here is what I've got:

(XEN) c2r(sched_unit_master(unit)) = 1, c2rqd(sched_unit_master(unit)) = ffff8301ba6d0230, sched_unit_master(unit) = 1, svc->rqd->id = 0, svc->rqd = ffff8301ba6d0050,
(XEN) Assertion 'c2rqd(sched_unit_master(unit)) == svc->rqd' failed at credit2.c:2294
(XEN) ----[ Xen-4.15-unstable  x86_64  debug=y  Not tainted ]----
(XEN) CPU:    0
(XEN) RIP:    e008:[<ffff82d040246642>] credit2.c#csched2_unit_wake+0x205/0x207
(XEN) RFLAGS: 0000000000010083   CONTEXT: hypervisor
(XEN) rax: ffff8301ba6d0230   rbx: ffff8301ba6b7b50   rcx: 0000000000000001
(XEN) rdx: 000000317a159000   rsi: 000000000000000a   rdi: ffff82d0404686b8
(XEN) rbp: ffff8300be877df0   rsp: ffff8300be877dd0   r8:  0000000000000000
(XEN) r9:  0000000000000004   r10: 0000000000000001   r11: 0000000000000001
(XEN) r12: ffff8301ba6b7be0   r13: ffff82d040453d00   r14: 0000000000000001
(XEN) r15: 0000013446313336   cr0: 000000008005003b   cr4: 00000000000006e0
(XEN) cr3: 00000000be866000   cr2: 0000000000000000
(XEN) fsb: 00000000000b8000   gsb: 0000000000000000   gss: 0000000000000000
(XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: 0000   cs: e008
(XEN) Xen code around <ffff82d040246642> (credit2.c#csched2_unit_wake+0x205/0x207):
(XEN)  f3 00 00 e9 7a ff ff ff <0f> 0b 55 48 89 e5 41 57 41 56 41 55 41 54 53 48
(XEN) Xen stack trace from rsp=ffff8300be877dd0:
(XEN)    ffff8301ba6a2000 ffff8301ba6b7b50 ffff8301ba6b7b50 ffff8301ba6d0230
(XEN)    ffff8300be877e40 ffff82d04024f968 0000000000000202 ffff8301ba6d0230
(XEN)    ffff82d0405a05c0 ffff83010a6f1fd0 ffff8301ba6a2000 0000000000000000
(XEN)    0000000000000000 ffff82d0405a05c0 ffff8300be877e50 ffff82d04020548a
(XEN)    ffff8300be877e70 ffff82d04020552b ffff8301ba6a21b8 ffff82d0405801b0
(XEN)    ffff8300be877e88 ffff82d04022ba4e ffff82d0405801a0 ffff8300be877eb8
(XEN)    ffff82d04022bd2f 0000000000000000 0000000000007fff ffff82d040586f00
(XEN)    ffff82d0405801b0 ffff8300be877ef0 ffff82d0402f0412 ffff82d0402f039e
(XEN)    ffff8301ba6a2000 ffff8301ba708000 0000000000000000 ffff8301ba6bb000
(XEN)    ffff8300be877e18 ffff8881072a76e0 ffffc90003f3ff08 0000000000000003
(XEN)    0000000000000000 0000000000002401 ffffffff827594c8 0000000000000246
(XEN)    0000000000000003 0000000000002401 0000000000002401 0000000000000000
(XEN)    ffffffff810010ea 0000000000002401 0000000000000010 deadbeefdeadf00d
(XEN)    0000010000000000 ffffffff810010ea 000000000000e033 0000000000000246
(XEN)    ffffc90003f3fcb8 000000000000e02b 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000e01000000000 ffff8301ba6fb000
(XEN)    0000000000000000 00000000000006e0 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000
(XEN) Xen call trace:
(XEN)    [<ffff82d040246642>] R credit2.c#csched2_unit_wake+0x205/0x207
(XEN)    [<ffff82d04024f968>] F vcpu_wake+0x105/0x52c
(XEN)    [<ffff82d04020548a>] F vcpu_unpause+0x13/0x15
(XEN)    [<ffff82d04020552b>] F domain.c#continue_hypercall_tasklet_handler+0x9f/0xb9
(XEN)    [<ffff82d04022ba4e>] F tasklet.c#do_tasklet_work+0x76/0xa9
(XEN)    [<ffff82d04022bd2f>] F do_tasklet+0x58/0x8a
(XEN)    [<ffff82d0402f0412>] F domain.c#idle_loop+0x74/0xdd
(XEN) 
(XEN) 
(XEN) ****************************************
(XEN) Panic on CPU 0:
(XEN) Assertion 'c2rqd(sched_unit_master(unit)) == svc->rqd' failed at credit2.c:2294
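
For context, a rough sketch of what the failing check compares - heavily
simplified, with approximate names rather than the actual credit2.c
definitions:

/*
 * The assertion compares the runqueue the CPU currently maps to with the
 * runqueue cached in the unit's scheduler data (svc->rqd, set by
 * runq_assign()).  Sketch only - not the real credit2 code.
 */
#include <assert.h>

#define NR_CPUS 2

struct runqueue_data {
    int id;                          /* runqueue id */
    /* ... per-runqueue scheduler state ... */
};

static struct runqueue_data rqd[NR_CPUS];  /* runqueues 0 and 1, as in the boot log */
static int runq_map[NR_CPUS];              /* CPU -> runqueue id */

/* roughly what c2rqd() computes: the runqueue data for a given CPU */
static struct runqueue_data *cpu_to_rqd(unsigned int cpu)
{
    return &rqd[runq_map[cpu]];
}

int main(void)
{
    rqd[0].id = 0; rqd[1].id = 1;
    runq_map[0] = 0; runq_map[1] = 1;

    /* runq_assign() recorded runqueue 0 for the unit earlier ... */
    struct runqueue_data *svc_rqd = &rqd[0];   /* stands in for svc->rqd */

    /*
     * ... but the unit is later woken on CPU 1, whose runqueue is 1.
     * That is the kind of mismatch the debug output above reports
     * (c2r() = 1 while svc->rqd->id = 0), so the assertion fires.
     */
    unsigned int waking_cpu = 1;
    assert(cpu_to_rqd(waking_cpu) == svc_rqd);  /* fails, like in the crash */

    return 0;
}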

And the patch:

diff --git a/xen/common/sched/credit2.c b/xen/common/sched/credit2.c
index eb5e5a78c5e7..475f0acf2dc5 100644
--- a/xen/common/sched/credit2.c
+++ b/xen/common/sched/credit2.c
@@ -2268,10 +2268,28 @@ csched2_unit_wake(const struct scheduler *ops, struct sched_unit *unit)
     }
 
     /* Add into the new runqueue if necessary */
-    if ( svc->rqd == NULL )
+    if ( svc->rqd == NULL ) {
+        printk(XENLOG_DEBUG "assigning cpu %d to runqueue\n", cpu);
         runq_assign(unit);
-    else
+        printk(XENLOG_DEBUG "assigned cpu %d to runqueue %d\n",
+                cpu,
+                svc->rqd ? svc->rqd->id : -1);
+    } else {
+        if (c2rqd(sched_unit_master(unit)) != svc->rqd) {
+            printk(XENLOG_DEBUG "c2r(sched_unit_master(unit)) = %d, "
+                                "c2rqd(sched_unit_master(unit)) = %p, "
+                                "sched_unit_master(unit) = %d, "
+                                "svc->rqd->id = %d, "
+                                "svc->rqd = %p, "
+                                "\n",
+                c2r(sched_unit_master(unit)),
+                c2rqd(sched_unit_master(unit)),
+                sched_unit_master(unit),
+                svc->rqd->id,
+                svc->rqd);
+        }
         ASSERT(c2rqd(sched_unit_master(unit)) == svc->rqd );
+    }
 
     now = NOW();
 



-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab
