[Xen-devel] Is: xen-netfront on 3.2.x crashes on TCP traffic. Was Re: kernel panic on Xen
On Sun, Nov 25, 2012 at 10:40:10PM -0500, David Xu wrote:
> Hi all,
>
> When I run the iperf benchmark to measure the TCP throughput between a
> physical machine and a VM, the TCP server which is a Xen VM crashed. Do you
> know what's the problem of this bug? Thanks.

No idea. Is this easy to reproduce? Do you see it only with 3.2.x kernels?

>
> [  100.973027] BUG: unable to handle kernel NULL pointer dereference at 0000000000000008
> [  100.973040] IP: [<ffffffff81455f16>] xennet_alloc_rx_buffers+0x166/0x350
> [  100.973050] PGD 1cc98067 PUD 1d74c067 PMD 0
> [  100.973051] Oops: 0002 [#1] SMP
> [  100.973051] CPU 1
> [  100.973051] Modules linked in:
> [  100.973051]
> [  100.973051] Pid: 9, comm: ksoftirqd/1 Not tainted 3.2.23 #131
> [  100.973051] RIP: e030:[<ffffffff81455f16>]  [<ffffffff81455f16>] xennet_alloc_rx_buffers+0x166/0x350
> [  100.973051] RSP: e02b:ffff88001e8f1c10  EFLAGS: 00010206
> [  100.973051] RAX: 0000000000000000 RBX: ffff88001da98000 RCX: 00000000000012b0
> [  100.973051] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000256
> [  100.973051] RBP: ffff88001e8f1c60 R08: ffffc90000000000 R09: 0000000000017a41
> [  100.973051] R10: 0000000000000002 R11: 0000000000017298 R12: ffff880019a7b700
> [  100.973051] R13: 0000000000000256 R14: 0000000000012092 R15: 0000000000000092
> [  100.973051] FS:  00007f7ace91f700(0000) GS:ffff88001fd00000(0000) knlGS:0000000000000000
> [  100.973051] CS:  e033 DS: 0000 ES: 0000 CR0: 000000008005003b
> [  100.973051] CR2: 0000000000000008 CR3: 000000001da64000 CR4: 0000000000002660
> [  100.973051] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
> [  100.973051] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
> [  100.973051] Process ksoftirqd/1 (pid: 9, threadinfo ffff88001e8f0000, task ffff88001e8d7000)
> [  100.973051] Stack:
> [  100.973051]  0000000000000091 ffff88001da99c80 ffff88001da99400 0000000100012091
> [  100.973051]  ffff88001e8f1c60 000000000000002d ffff880019a8ac4e ffff88001fd1a590
> [  100.973051]  0000160000000000 ffff880000000000 ffff88001e8f1db0 ffffffff8145699a
> [  100.973051] Call Trace:
> [  100.973051]  [<ffffffff8145699a>] xennet_poll+0x7ca/0xe80
> [  100.973051]  [<ffffffff814e3e51>] net_rx_action+0x151/0x2b0
> [  100.973051]  [<ffffffff8106090d>] __do_softirq+0xbd/0x250
> [  100.973051]  [<ffffffff81060b67>] run_ksoftirqd+0xc7/0x170
> [  100.973051]  [<ffffffff81060aa0>] ? __do_softirq+0x250/0x250
> [  100.973051]  [<ffffffff8107b0ac>] kthread+0x8c/0xa0
> [  100.973051]  [<ffffffff8167ca04>] kernel_thread_helper+0x4/0x10
> [  100.973051]  [<ffffffff81672d21>] ? retint_restore_args+0x13/0x13
> [  100.973051]  [<ffffffff8167ca00>] ? gs_change+0x13/0x13
> [  100.973051] Code: 0f 84 19 01 00 00 83 ab 10 14 00 00 01 45 0f b6 fe 49 8b 14 24 49 8b 44 24 08 49 c7 04 24 00 00 00 00 49 c7 44 24 08 00 00 00 00 <48> 89 42 08 48 89 10 41 0f b6 d7 49 89 5c 24 20 48 8d 82 b8 01
> [  100.973051] RIP  [<ffffffff81455f16>] xennet_alloc_rx_buffers+0x166/0x350
> [  100.973051]  RSP <ffff88001e8f1c10>
> [  100.973051] CR2: 0000000000000008
> [  100.973259] ---[ end trace b0530821c3527d70 ]---
> [  100.973263] Kernel panic - not syncing: Fatal exception in interrupt
> [  100.973267] Pid: 9, comm: ksoftirqd/1 Tainted: G      D      3.2.23 #131
> [  100.973270] Call Trace:
> [  100.973273]  [<ffffffff816674ae>] panic+0x91/0x1a2
> [  100.973278]  [<ffffffff8100adb2>] ? check_events+0x12/0x20
> [  100.973282]  [<ffffffff81673b0a>] oops_end+0xea/0xf0
> [  100.973286]  [<ffffffff81666e6b>] no_context+0x214/0x223
> [  100.973291]  [<ffffffff8113cf94>] ? kmem_cache_free+0x104/0x110
> [  100.973295]  [<ffffffff8166704b>] __bad_area_nosemaphore+0x1d1/0x1f0
> [  100.973299]  [<ffffffff8166707d>] bad_area_nosemaphore+0x13/0x15
> [  100.973304]  [<ffffffff816763fb>] do_page_fault+0x35b/0x4f0
> [  100.973308]  [<ffffffff814d6044>] ? __netdev_alloc_skb+0x24/0x50
> [  100.973313]  [<ffffffff8129f75a>] ? trace_hardirqs_off_thunk+0x3a/0x6c
> [  100.973318]  [<ffffffff81672fa5>] page_fault+0x25/0x30
> [  100.973322]  [<ffffffff81455f16>] ? xennet_alloc_rx_buffers+0x166/0x350
> [  100.973326]  [<ffffffff8145699a>] xennet_poll+0x7ca/0xe80
> [  100.973330]  [<ffffffff814e3e51>] net_rx_action+0x151/0x2b0
> [  100.973334]  [<ffffffff8106090d>] __do_softirq+0xbd/0x250
> [  100.973338]  [<ffffffff81060b67>] run_ksoftirqd+0xc7/0x170
> [  100.973342]  [<ffffffff81060aa0>] ? __do_softirq+0x250/0x250
> [  100.973346]  [<ffffffff8107b0ac>] kthread+0x8c/0xa0
> [  100.973350]  [<ffffffff8167ca04>] kernel_thread_helper+0x4/0x10
> [  100.973354]  [<ffffffff81672d21>] ? retint_restore_args+0x13/0x13
> [  100.973358]  [<ffffffff8167ca00>] ? gs_change+0x13/0x13
>
> Regards,
> Cong
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@xxxxxxxxxxxxx
> http://lists.xen.org/xen-devel
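
For anyone trying to reproduce this: the setup described in the report amounts to running an iperf TCP server inside the PV guest (the side using xen-netfront) and driving sustained TCP traffic at it from the physical machine. A minimal command sketch, where the guest IP, stream count, and duration are assumptions rather than values taken from the report:

```shell
# Inside the Xen PV guest (xen-netfront, 3.2.x kernel):
# start an iperf TCP server on the default port (5001).
iperf -s

# On the physical machine, push sustained TCP traffic at the guest.
# 192.168.1.100 is a placeholder for the guest's IP; -P 4 opens four
# parallel streams and -t 60 runs for 60 seconds to stress the guest's
# RX buffer allocation path (xennet_alloc_rx_buffers).
iperf -c 192.168.1.100 -P 4 -t 60
```

The crash fired in ksoftirqd during RX buffer refill, so sustained inbound traffic to the guest is the relevant direction; whether a single stream is enough, or whether parallel streams are needed, is exactly the kind of detail the "is this easy to reproduce?" question above is asking about.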