Re: [Xen-devel] NAPI rescheduling and the delay caused by it
Hi,

I tried that, but it didn't help in my case. I think xenvif_notify_tx_completion is only a shortcut to wake the queue earlier; otherwise we would have to wait for the interrupt from the guest to arrive (see rx_interrupt), even though we already know the work can be done. Or does this call from the kernel thread make things worse than a later call from interrupt context?

I found another suspect, however: my grant mapping patches do the unmapping from the NAPI instance, which is otherwise where we receive the packets from the guest. But this means we call napi_schedule from the zerocopy callback, which can be run by anyone who frees up that skb, including another VIF's RX thread (which actually does the transmit TO the guest). I guess that might be bad.

Zoli

-----Original Message-----
From: netdev-owner@xxxxxxxxxxxxxxx [mailto:netdev-owner@xxxxxxxxxxxxxxx] On Behalf Of Eric Dumazet
Sent: 04 December 2013 23:32
To: Zoltan Kiss
Cc: netdev@xxxxxxxxxxxxxxx; xen-devel@xxxxxxxxxxxxxxxxxxxx; Malcolm Crossley; Jonathan Davies; Paul Durrant; Wei Liu; Ian Campbell
Subject: Re: NAPI rescheduling and the delay caused by it

On Wed, 2013-12-04 at 21:23 +0000, Zoltan Kiss wrote:
> I see netif_rx_ni makes sure the softirq is executed, but I'm not sure
> I get how it is related to wake_queue. Can you explain a bit more?

Calling netif_wake_queue() from process context makes no guarantee the TX softirq is processed in the following cycles. This interface is meant to be used from softirq context.

Try to enclose it in:

void xenvif_notify_tx_completion(struct xenvif *vif)
{
	if (netif_queue_stopped(vif->dev) && xenvif_rx_schedulable(vif)) {
		local_bh_disable();
		netif_wake_queue(vif->dev);
		local_bh_enable();
	}
}

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
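[Editor's note: the same caveat Eric raises for netif_wake_queue() also applies to napi_schedule() called from the zerocopy callback, since both only raise a softirq bit. A minimal sketch of what the guarded callback might look like follows; xenvif_zerocopy_callback, ubuf_to_vif and the vif->napi field are assumptions based on this thread, not the actual xen-netback code:]

	/* Sketch only: if the zerocopy destructor can run in process
	 * context (e.g. from another VIF's RX kernel thread freeing
	 * the skb), napi_schedule() merely sets the NET_RX_SOFTIRQ
	 * bit there. Wrapping it in local_bh_disable()/local_bh_enable()
	 * ensures the pending softirq is run when BHs are re-enabled,
	 * instead of waiting for the next interrupt or ksoftirqd pass. */
	static void xenvif_zerocopy_callback(struct ubuf_info *ubuf,
					     bool zerocopy_success)
	{
		struct xenvif *vif = ubuf_to_vif(ubuf);	/* hypothetical helper */

		/* ... release the grant mappings for this skb ... */

		local_bh_disable();
		napi_schedule(&vif->napi);
		local_bh_enable();
	}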