Re: [Xen-devel] xen_evtchn_do_upcall
Hi Ian,

I am trying to improve TCP/UDP performance in a VM. Due to vCPU scheduling on a shared pCPU, the TCP receive delay can be significant, which hurts TCP/UDP throughput. So I want to offload the softirq context to another idle pCPU, which I call the fast-tick CPU. The packet receiving process works like this: the IRQ routine keeps picking packets from the ring buffer and putting them into the kernel's TCP receive buffers (receive_queue, prequeue, backlog_queue), regardless of whether the user process, which runs on another CPU shared with other VMs, is currently scheduled. Once the vCPU holding the user process gets scheduled, the user process fetches all packets from the kernel receive buffers, which improves throughput. This works well for UDP but unfortunately does not currently work for TCP.
I found that those xen_evtchn_do_upcall routines appear when the IRQ context tries to take the spinlock on the socket (of course, they may also happen on other paths). If this spinlock is held by the process context, the IRQ context has to spin on it and cannot put any packets into the receive buffer in time. So I suspect these xen_evtchn_do_upcall routines are due to synchronization between the process context and the IRQ context: since I run the process context and the IRQ context on two different vCPUs, when they both try to take the spinlock on the same socket, there will be interrupts between the two vCPUs, which are implemented as events in Xen.
If there is any error in my description, please correct me. Thanks.

Regards,
Cong
2012/10/24 Ian Campbell <Ian.Campbell@xxxxxxxxxx>
_______________________________________________ Xen-devel mailing list Xen-devel@xxxxxxxxxxxxx http://lists.xen.org/xen-devel