
Re: [Xen-devel] rcu_sched self-detect stall when disable vif device



On 27/01/15 16:45, Wei Liu wrote:
> On Tue, Jan 27, 2015 at 04:03:52PM +0000, Julien Grall wrote:
>> Hi,
>>
>> While I'm working on 64K page support in netfront, I got
>> an rcu_sched self-detected stall message. It happens when netback is
>> disabling the vif device due to an error.
>>
>> I'm using Linux 3.19-rc5 on Seattle (ARM64). Any idea why
>> the processor is stuck in xenvif_rx_queue_purge?
>>
> 
> When you try to release an SKB, the core network code needs to enter an
> RCU critical section to clean up. dst_release, for one, calls call_rcu.
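
For reference, dst_release() in this era's net/core/dst.c looks roughly like
the following (a paraphrased sketch, not the verbatim source): once the
refcount drops to zero, the entry is handed to an RCU callback instead of
being freed inline, which is why SKB teardown ends up depending on RCU making
progress.

/* Paraphrased sketch of dst_release() (net/core/dst.c, ~3.19 era);
 * details may differ from the exact kernel source. */
void dst_release(struct dst_entry *dst)
{
	if (dst) {
		int newrefcnt = atomic_dec_return(&dst->__refcnt);

		/* Freeing is deferred to an RCU callback rather than done
		 * inline; the actual free only happens after a grace
		 * period, i.e. once RCU can run its callbacks. */
		if (!newrefcnt && unlikely(dst->flags & DST_NOCACHE))
			call_rcu(&dst->rcu_head, dst_destroy_rcu);
	}
}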

This is RCU detecting a soft-lockup.  You're either spinning on the
spinlock or the guest rx queue is corrupt and cannot be drained.
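
For reference, the loop being interrupted here is roughly the following (a
simplified sketch of xenvif_rx_queue_purge() from drivers/net/xen-netback in
this kernel, not necessarily verbatim):

static void xenvif_rx_queue_purge(struct xenvif_queue *queue)
{
	struct sk_buff *skb;

	/* skb_dequeue() takes the rx_queue lock internally; if the list
	 * is corrupt this loop may never see NULL and keeps the CPU busy
	 * until the RCU stall detector complains. */
	while ((skb = skb_dequeue(&queue->rx_queue)) != NULL)
		kfree_skb(skb);
}

So either the lock is held elsewhere or the list never drains, which matches
the stall being reported from inside this function.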

David

>> Here the log:
>>
>> vif vif-20-0 vif20.0: txreq.offset: 3410, size: 342, end: 1382
>> vif vif-20-0 vif20.0: fatal error; disabling device
>> INFO: rcu_sched self-detected stall on CPU { 1}  (t=2101 jiffies g=37266 c=37265 q=2649)
>> Task dump for CPU 1:
>> vif20.0-q0-gues R  running task        0 12617      2 0x00000002
>> Call trace:
>> [<ffff800000089038>] dump_backtrace+0x0/0x124
>> [<ffff80000008916c>] show_stack+0x10/0x1c
>> [<ffff8000000bb1b4>] sched_show_task+0x98/0xf8
>> [<ffff8000000bdca0>] dump_cpu_task+0x3c/0x4c
>> [<ffff8000000d9fd4>] rcu_dump_cpu_stacks+0xa4/0xf8
>> [<ffff8000000dd1a8>] rcu_check_callbacks+0x478/0x748
>> [<ffff8000000e0b20>] update_process_times+0x38/0x6c
>> [<ffff8000000eedd0>] tick_sched_timer+0x64/0x1b4
>> [<ffff8000000e10a8>] __run_hrtimer+0x88/0x234
>> [<ffff8000000e19c0>] hrtimer_interrupt+0x108/0x2b0
>> [<ffff8000005934c4>] arch_timer_handler_virt+0x28/0x38
>> [<ffff8000000d5e88>] handle_percpu_devid_irq+0x88/0x11c
>> [<ffff8000000d1ec0>] generic_handle_irq+0x30/0x4c
>> [<ffff8000000d21dc>] __handle_domain_irq+0x5c/0xac
>> [<ffff8000000823b8>] gic_handle_irq+0x30/0x80
>> Exception stack(0xffff800013a07c20 to 0xffff800013a07d40)
>> 7c20: 058ed000 ffff0000 058ed9d8 ffff0000 13a07d60 ffff8000 0053c418 ffff8000
>> 7c40: 00000000 00000000 0000ecf2 00000000 058ed9ec ffff0000 00000000 00000000
>> 7c60: 00000001 00000000 00000000 00000000 00001800 00000000 feacbe9d 0000060d
>> 7c80: 1ce5d6e0 ffff8000 13a07a90 ffff8000 00000400 00000000 ffffffff ffffffff
>> 7ca0: 0013d000 00000000 00000090 00000000 00000040 00000000 9a272028 0000ffff
>> 7cc0: 00099e64 ffff8000 00411010 00000000 df8fbb70 0000ffff 058ed000 ffff0000
>> 7ce0: 058ed9d8 ffff0000 058ed000 ffff0000 058ed988 ffff0000 00956000 ffff8000
>> 7d00: 19204840 ffff8000 000c75f8 ffff8000 13a04000 ffff8000 008a0598 ffff8000
>> 7d20: 00000000 00000000 13a07d60 ffff8000 0053c3bc ffff8000 13a07d60 ffff8000
>> [<ffff8000000854e4>] el1_irq+0x64/0xc0
>> [<ffff80000053c448>] xenvif_rx_queue_purge+0x1c/0x30
>> [<ffff80000053ea34>] xenvif_kthread_guest_rx+0x210/0x29c
>> [<ffff8000000b1060>] kthread+0xd8/0xf0
>>
>>
>> Regards,
>>
>> -- 
>> Julien Grall
> 


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
