
Re: [Xen-devel] Back trace from rcutree.c after resuming from S3


  • To: Xen Devel Mailing list <xen-devel@xxxxxxxxxxxxx>
  • From: Javier Marcet <jmarcet@xxxxxxxxx>
  • Date: Wed, 26 Sep 2012 22:46:38 +0200
  • Delivery-date: Wed, 26 Sep 2012 20:47:33 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xen.org>

On Tue, Sep 18, 2012 at 8:57 PM, Javier Marcet <jmarcet@xxxxxxxxx> wrote:

> After solving the issue I had with a DVB tuner, I tested suspending
> and resuming from S3 again.
>
> I am now running Xen 4.2.0 and have applied patches [1] and [2] (any
> idea why they are still not in mainline?) on top of a 3.5.3 kernel. I
> have only tried once since upgrading Xen, but after resuming I got
> the following back trace:
>
> [  142.267299] pcieport 0000:00:1c.5: wake-up capability enabled by ACPI
> [  142.278312] ehci_hcd 0000:00:1d.0: wake-up capability enabled by ACPI
> [  142.289169] ehci_hcd 0000:00:1a.0: wake-up capability enabled by ACPI
> [  142.311116] xhci_hcd 0000:00:14.0: wake-up capability enabled by ACPI
> [  142.322209] PM: noirq suspend of devices complete after 55.650 msecs
> [  142.322778] ACPI: Preparing to enter system sleep state S3
> [  142.723286] PM: Saving platform NVS memory
> [  142.736453] Disabling non-boot CPUs ...
> [  142.851387] ------------[ cut here ]------------
> [  142.851397] WARNING: at
> /home/storage/src/ubuntu-precise/kernel/rcutree.c:1550
> rcu_do_batch.isra.41+0x5f/0x213()
> [  142.851401] Hardware name: To Be Filled By O.E.M.
> [  142.851404] Modules linked in: blktap(O) ip6table_filter ip6_tables
> ipt_MASQUERADE iptable_nat nf_nat nf_conntrack_ipv4 nf_defrag_ipv4
> xt_state nf_conntrack ipt_REJECT xt_CHECKSUM iptable_mangle xt_tcpudp
> iptable_filter ip_tables x_tables [last unloaded: cx25840]
> [  142.851466] Pid: 32, comm: migration/7 Tainted: G           O
> 3.5.0-14-i7 #18~precise1
> [  142.851469] Call Trace:
> [  142.851473]  <IRQ>  [<ffffffff8106a9bd>] warn_slowpath_common+0x7e/0x96
> [  142.851488]  [<ffffffff8106a9ea>] warn_slowpath_null+0x15/0x17
> [  142.851496]  [<ffffffff810c6e87>] rcu_do_batch.isra.41+0x5f/0x213
> [  142.851504]  [<ffffffff810c687f>] ?
> check_for_new_grace_period.isra.35+0x38/0x44
> [  142.851512]  [<ffffffff810c7101>] __rcu_process_callbacks+0xc6/0xee
> [  142.851520]  [<ffffffff810c7149>] rcu_process_callbacks+0x20/0x3e
> [  142.851527]  [<ffffffff81070c27>] __do_softirq+0x87/0x113
> [  142.851536]  [<ffffffff812e7b22>] ? __xen_evtchn_do_upcall+0x1a0/0x1dd
> [  142.851546]  [<ffffffff8162f05c>] call_softirq+0x1c/0x30
> [  142.851557]  [<ffffffff810343a3>] do_softirq+0x41/0x7e
> [  142.851566]  [<ffffffff81070e66>] irq_exit+0x3f/0x9a
> [  142.851573]  [<ffffffff812e95c9>] xen_evtchn_do_upcall+0x2c/0x39
> [  142.851580]  [<ffffffff8162f0ae>] xen_do_hypervisor_callback+0x1e/0x30
> [  142.851588]  <EOI>  [<ffffffff8100122a>] ? hypercall_page+0x22a/0x1000
> [  142.851601]  [<ffffffff8100122a>] ? hypercall_page+0x22a/0x1000
> [  142.851609]  [<ffffffff8102eee9>] ? xen_force_evtchn_callback+0xd/0xf
> [  142.851618]  [<ffffffff8102f552>] ? check_events+0x12/0x20
> [  142.851627]  [<ffffffff8102f53f>] ? xen_restore_fl_direct_reloc+0x4/0x4
> [  142.851635]  [<ffffffff810b6ff4>] ? arch_local_irq_restore+0xb/0xd
> [  142.851644]  [<ffffffff810b73f4>] ? stop_machine_cpu_stop+0xc1/0xd3
> [  142.851652]  [<ffffffff810b7333>] ? queue_stop_cpus_work+0xb5/0xb5
> [  142.851660]  [<ffffffff810b7152>] ? cpu_stopper_thread+0xf7/0x187
> [  142.851667]  [<ffffffff8108a927>] ? finish_task_switch+0x82/0xc1
> [  142.851676]  [<ffffffff8162c50b>] ? __schedule+0x428/0x454
> [  142.851685]  [<ffffffff8162cff0>] ? _raw_spin_unlock_irqrestore+0x15/0x18
> [  142.851693]  [<ffffffff810b705b>] ? cpu_stop_signal_done+0x30/0x30
> [  142.851701]  [<ffffffff81082627>] ? kthread+0x86/0x8e
> [  142.851710]  [<ffffffff8162ef64>] ? kernel_thread_helper+0x4/0x10
> [  142.851718]  [<ffffffff8162d338>] ? retint_restore_args+0x5/0x6
> [  142.851726]  [<ffffffff8162ef60>] ? gs_change+0x13/0x13
> [  142.851736] ---[ end trace 3722a99bcc5ae37a ]---
> [  144.257229] ACPI: Low-level resume complete
> [  144.257322] PM: Restoring platform NVS memory
> [  144.257866] Enabling non-boot CPUs ...
> [  144.258042] installing Xen timer for CPU 1
> [  144.258080] cpu 1 spinlock event irq 48
> [  144.322077] CPU1 is up
> [  144.322352] installing Xen timer for CPU 2
> [  144.322379] cpu 2 spinlock event irq 55
> [  144.340819] CPU2 is up
> [  144.340940] installing Xen timer for CPU 3
> [  144.340968] cpu 3 spinlock event irq 62
> [  144.404393] CPU3 is up
> [  144.404522] installing Xen timer for CPU 4
> [  144.404548] cpu 4 spinlock event irq 69
> [  144.468656] CPU4 is up
> [  144.468780] installing Xen timer for CPU 5
> [  144.468807] cpu 5 spinlock event irq 76
> [  144.532834] CPU5 is up
> [  144.532959] installing Xen timer for CPU 6
> [  144.532986] cpu 6 spinlock event irq 83
> [  144.597255] CPU6 is up
> [  144.597481] installing Xen timer for CPU 7
> [  144.597509] cpu 7 spinlock event irq 90
> [  144.661253] CPU7 is up
> [  144.662834] ACPI: Waking up from system sleep state S3
>
> [1] https://patchwork.kernel.org/patch/1120842/
> [2] https://patchwork.kernel.org/patch/1120872/

With Jen's patch still applied to my Xen, besides the two patches
quoted above, on a 3.5.4 kernel, I tried taking 4 of my 8 cores
offline and bringing them back online, and didn't hit a single issue.
Resuming from S3, on the other hand, is completely random: most of the
time I get the above backtrace, sometimes it works perfectly, and a
few other times the machine doesn't even resume but reboots directly.
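For reference, the offline/online cycling described above can be
driven through the standard sysfs CPU-hotplug interface. This is a
minimal sketch, not the exact commands I ran; the `SYSFS` override and
the `cycle_cpus` helper name are just for illustration, and the real
path requires root:

```shell
#!/bin/sh
# Sketch of the CPU offline/online stress test described above.
# SYSFS defaults to the kernel's CPU-hotplug directory; it can be
# overridden to point at a scratch tree for dry runs.
SYSFS=${SYSFS:-/sys/devices/system/cpu}

cycle_cpus() {
    # Take each listed core offline...
    for cpu in "$@"; do
        echo 0 > "$SYSFS/cpu$cpu/online"
    done
    # ...then bring each one back online.
    for cpu in "$@"; do
        echo 1 > "$SYSFS/cpu$cpu/online"
    done
}

# e.g. cycle the upper four cores of an 8-core box (cpu0 usually
# cannot be offlined, so it is left alone):
# cycle_cpus 4 5 6 7
```

An automated suspend/resume cycle for comparison can be triggered with
something like `rtcwake -m mem -s 30`, which programs the RTC to wake
the machine 30 seconds after entering S3.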


-- 
Javier Marcet <jmarcet@xxxxxxxxx>

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
