[Xen-users] network packet storm e1000_clean_tx_irq: Detected Tx Unit Hang
I've got a multi-bridge setup here, xenbr0 through xenbr5; in fact, we have several identical setups. One of them gives us a packet storm that looks a lot like what you get when you bridge two switches twice without STP. The bridges look okay, though. I also notice that the TX packet count is massively high on peth2, but it is reasonable on both eth2 and vif0.2 (and on eth0 within the DomU attached to xenbr2).

I get the following errors in dmesg when the switch port for peth2 is enabled (shortly thereafter it eats the network):

    e1000: peth2: e1000_watchdog_task: NIC Link is Up 100 Mbps Full Duplex
    xenbr2: port 2(peth2) entering learning state
    xenbr2: topology change detected, propagating
    xenbr2: port 2(peth2) entering forwarding state
    e1000: peth2: e1000_clean_tx_irq: Detected Tx Unit Hang
      Tx Queue             <0>
      TDH                  <a0>
      TDT                  <a2>
      next_to_use          <a2>
      next_to_clean        <a0>
    buffer_info[next_to_clean]
      time_stamp           <1007bc816>
      next_to_watch        <a0>
      jiffies              <1007bc93e>
      next_to_watch.status <0>
    e1000: peth2: e1000_clean_tx_irq: Detected Tx Unit Hang
      Tx Queue             <0>
      TDH                  <a0>
      TDT                  <a2>
      next_to_use          <a2>
      next_to_clean        <a0>
    buffer_info[next_to_clean]
      time_stamp           <1007bc816>
      next_to_watch        <a0>
      jiffies              <1007bca06>
      next_to_watch.status <0>

Ideas? Do you think the 'tx unit hang' is causing the network spew, or that the network spew is causing the 'tx unit hang'? I was guessing the former, simply because the network spew is only seen on peth2; it's not seen on any other interface.

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users
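The "TX massively high on peth2 but reasonable on eth2 and vif0.2" comparison above can be automated by reading the per-interface counters that the Linux kernel exposes under /sys/class/net/<iface>/statistics/. The sketch below is only an illustration of that idea, not part of the original post: the 100x-median threshold is an arbitrary heuristic I chose, and the sysfs root is parameterized so it can be pointed at a test tree.

```python
#!/usr/bin/env python3
"""Sketch: flag interfaces whose TX packet count dwarfs the others,
to spot a one-sided packet storm like the one on peth2 above.
Assumes the standard Linux sysfs network statistics layout."""
import os


def read_tx_counts(sysfs_net="/sys/class/net"):
    """Return {interface: tx_packets} for every readable interface."""
    counts = {}
    for iface in sorted(os.listdir(sysfs_net)):
        stat = os.path.join(sysfs_net, iface, "statistics", "tx_packets")
        try:
            with open(stat) as f:
                counts[iface] = int(f.read().strip())
        except (OSError, ValueError):
            continue  # not a network interface, or counter unreadable
    return counts


def suspects(counts, ratio=100):
    """Flag interfaces whose TX count exceeds the median by `ratio`x.
    The ratio is a made-up heuristic for 'massively high'."""
    if not counts:
        return []
    values = sorted(counts.values())
    median = values[len(values) // 2]
    return [i for i, c in counts.items() if median > 0 and c >= ratio * median]


if __name__ == "__main__":
    counts = read_tx_counts()
    for iface, c in counts.items():
        print(f"{iface:12} tx_packets={c}")
    print("suspects:", suspects(counts))
```

Run on the machine described above, this would be expected to single out peth2 while eth2, vif0.2, and the other bridge interfaces stay near the median.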