
[Xen-devel] 3.16-rc3: PV guest INFO: possible recursive locking detected netback_changed+0xecb/0x1100



Hi,

In one of my PV guests I got the lockdep warning below on boot.

--
Sander

[    0.839799] =============================================
[    0.839803] [ INFO: possible recursive locking detected ]
[    0.839808] 3.16.0-rc3-20140630+ #1 Not tainted
[    0.839812] ---------------------------------------------
[    0.839817] xenwatch/24 is trying to acquire lock:
[    0.839821]  (&(&queue->rx_lock)->rlock){+.....}, at: [<ffffffff8173c72b>] netback_changed+0xecb/0x1100
[    0.839835]
[    0.839835] but task is already holding lock:
[    0.839841]  (&(&queue->rx_lock)->rlock){+.....}, at: [<ffffffff8173c72b>] netback_changed+0xecb/0x1100
[    0.839851]
[    0.839851] other info that might help us debug this:
[    0.839856]  Possible unsafe locking scenario:
[    0.839856]
[    0.839861]        CPU0
[    0.839864]        ----
[    0.839866]   lock(&(&queue->rx_lock)->rlock);
[    0.839871]   lock(&(&queue->rx_lock)->rlock);
[    0.839876]
[    0.839876]  *** DEADLOCK ***
[    0.839876]
[    0.839881]  May be due to missing lock nesting notation
[    0.839881]
[    0.839886] 3 locks held by xenwatch/24:
[    0.839889]  #0:  (xenwatch_mutex){+.+.+.}, at: [<ffffffff8151726b>] xenwatch_thread+0x3b/0x160
[    0.839901]  #1:  (&(&queue->rx_lock)->rlock){+.....}, at: [<ffffffff8173c72b>] netback_changed+0xecb/0x1100
[    0.839912]  #2:  (&(&queue->tx_lock)->rlock){......}, at: [<ffffffff8173c738>] netback_changed+0xed8/0x1100
[    0.839923]
[    0.839923] stack backtrace:
[    0.839928] CPU: 0 PID: 24 Comm: xenwatch Not tainted 3.16.0-rc3-20140630+ #1
[    0.839934]  ffffffff8292a4f0 ffff88002e713b78 ffffffff81b59387 ffff88002e7090c0
[    0.839942]  ffffffff8292a4f0 ffff88002e713c58 ffffffff8110284e ffff88002e7090c0
[    0.839950]  0000000000023700 000000000000014f 0000000000051990 ffffffff8306f7e0
[    0.839959] Call Trace:
[    0.843068]  [<ffffffff81b59387>] dump_stack+0x46/0x58
[    0.843068]  [<ffffffff8110284e>] __lock_acquire+0x84e/0x2220
[    0.843068]  [<ffffffff8110055a>] ? mark_held_locks+0x6a/0x90
[    0.843068]  [<ffffffff81b61895>] ? __mutex_unlock_slowpath+0x1b5/0x270
[    0.843068]  [<ffffffff8173002b>] ? rtl_apply_firmware+0x16b/0x2b0
[    0.843068]  [<ffffffff8110483d>] lock_acquire+0xcd/0x110
[    0.843068]  [<ffffffff8173c72b>] ? netback_changed+0xecb/0x1100
[    0.843068]  [<ffffffff81b61a09>] ? mutex_unlock+0x9/0xb
[    0.843068]  [<ffffffff81b62f1a>] _raw_spin_lock_bh+0x3a/0x50
[    0.843068]  [<ffffffff8173c72b>] ? netback_changed+0xecb/0x1100
[    0.843068]  [<ffffffff8173c72b>] netback_changed+0xecb/0x1100
[    0.843068]  [<ffffffff81518710>] xenbus_otherend_changed+0xb0/0xc0
[    0.843068]  [<ffffffff81517230>] ? xenbus_thread+0x290/0x290
[    0.843068]  [<ffffffff8151a34e>] backend_changed+0xe/0x10
[    0.843068]  [<ffffffff815172d1>] xenwatch_thread+0xa1/0x160
[    0.843068]  [<ffffffff810fadc0>] ? __init_waitqueue_head+0x60/0x60
[    0.843068]  [<ffffffff810dfeef>] kthread+0xdf/0x100
[    0.843068]  [<ffffffff810dfe10>] ? __init_kthread_worker+0x70/0x70
[    0.843068]  [<ffffffff81b63afc>] ret_from_fork+0x7c/0xb0
[    0.843068]  [<ffffffff810dfe10>] ? __init_kthread_worker+0x70/0x70
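
For context (not part of the report itself): every per-queue rx_lock is initialised at the same call site and therefore shares a single lockdep class, so a reconnect path that takes a second queue's rx_lock while the first queue's rx_lock and tx_lock are still held is reported as "possible recursive locking" even though the spinlock instances differ. The "missing lock nesting notation" hint refers to the spin_lock_nested() family of annotations. The fragment below is only a generic sketch of that pattern with made-up names (demo_queue, demo_lock_*); it is not the xen-netfront code.

#include <linux/spinlock.h>

#define DEMO_NUM_QUEUES 2

/* Hypothetical per-queue state, mirroring only the rx_lock named in the
 * lockdep report; this is not the real xen-netfront structure. */
struct demo_queue {
	spinlock_t rx_lock;
};

static struct demo_queue demo_queues[DEMO_NUM_QUEUES];

/* All demo_queue.rx_lock instances get their lock class from this single
 * spin_lock_init() call site, which is why lockdep conflates them. */
static void demo_init(void)
{
	unsigned int i;

	for (i = 0; i < DEMO_NUM_QUEUES; i++)
		spin_lock_init(&demo_queues[i].rx_lock);
}

/* Pattern lockdep flags: on the second loop iteration the rx_lock class
 * is acquired again while an instance of it is already held, so lockdep
 * reports possible recursive locking even though the locks differ. */
static void demo_lock_all_queues(void)
{
	unsigned int i;

	for (i = 0; i < DEMO_NUM_QUEUES; i++)
		spin_lock_bh(&demo_queues[i].rx_lock);

	/* ... work that touches every queue at once ... */

	for (i = DEMO_NUM_QUEUES; i-- > 0; )
		spin_unlock_bh(&demo_queues[i].rx_lock);
}

/* One way to keep lockdep quiet without annotations: never hold more than
 * one queue's rx_lock at a time.  The alternative, if the locks really
 * must nest, is a per-level subclass via spin_lock_nested(). */
static void demo_lock_one_queue_at_a_time(void)
{
	unsigned int i;

	for (i = 0; i < DEMO_NUM_QUEUES; i++) {
		spin_lock_bh(&demo_queues[i].rx_lock);
		/* ... per-queue work ... */
		spin_unlock_bh(&demo_queues[i].rx_lock);
	}
}

Whether netfront should avoid the nesting or annotate it is for the maintainers to decide; the sketch is only meant to show why lockdep trips here.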




 

