RE: [Xen-users] Lots of udp (multicast) packet loss in domU
Hello again James,

> These days, each DomU network interface should be making Dom0 aware of
> what multicast traffic it should be receiving, so unless your domU
> kernels are old that shouldn't be your problem, but maybe someone else
> can confirm that multicast is definitely in place?

Hmmm, how would I verify that? As far as I can see, the dom0 is in constant promiscuous mode so that it can pass all bridged traffic. This doesn't really matter, though; I actually do need all the traffic I am receiving. The problem is that the load between dom0 and domU is exorbitant. I mean, with 600 Mbps of network IO, dom0 consumes an entire 5310 core (2.33 GHz Penryn), whereas if I pass that interface into the domU via PCI passthrough, we only see about a 5% CPU load to ingest the same traffic. I don't know if it's important or not, but in the dom0, if I use "top" the CPU is 99% idle, while "xm top" is where I see the 100% utilization on dom0.

> I think that bridging is definitely much lighter on CPU usage than
> routing, so I don't think that going down that path would help you in
> any way.

I agree in principle; I just didn't know what the Xen internals looked like, so I thought I would ask.

> What size are your packets? If they are VoIP then you are probably stuck
> with tiny packets, but if they are something else that can use big
> packets then you may be able to improve things by increasing your MTU up
> to 9000, although that is a road less travelled and may not be
> supported.

These are video packets; each one has a 1316-byte UDP payload. Changing the MTU upstream is not possible for me.

> But maybe Xen isn't the right solution to this problem?

No, I still think it is. We are having great success with Xen in our application, except for this passing of traffic. Until we find the answer, we'll just have to use PCI passthrough and dedicate some NICs to the domU that needs the high bandwidth.

Thanks again, I look forward to any more insights or ideas from the community.

--Mike
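
P.S. On the "how would I verify that?" question: the checks below are the sort of thing I know how to run on a plain Linux bridge, so if anyone can confirm they are the right ones for Xen's netback multicast handling I would appreciate it. This is only a rough sketch, and I am assuming the Xen bridge is named xenbr0 and the physical NIC is eth0; adjust the names for your own box:

    # is the NIC (or the bridge itself) flagged promiscuous?
    ip link show eth0 | grep -i promisc
    ip link show xenbr0 | grep -i promisc

    # which multicast groups have actually been joined on the bridge / NIC?
    ip maddr show dev xenbr0
    netstat -g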
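
Also, to put a number on why the per-packet cost hurts: at 600 Mbps of 1316-byte payloads, the stream works out to roughly 55,000 packets per second, every one of which has to cross the dom0/domU path. Back-of-the-envelope only, assuming 8 bytes UDP + 20 bytes IP + 14 bytes Ethernet on top of each 1316-byte payload, i.e. about 1358 bytes per frame:

    # rough packets-per-second at 600 Mbit/s with ~1358-byte frames
    echo $(( 600 * 1000 * 1000 / 8 / 1358 ))    # prints ~55228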