
Re: [Xen-devel] Xen dom0 network I/O scalability



On Thu, 2011-05-12 at 21:10 +0100, Kaushik Kumar Ram wrote:
> On May 12, 2011, at 3:21 AM, Ian Campbell wrote:
> 
> > A long time ago the backend->frontend path (guest receive) operated
> > using a page flipping mode. At some point a copying mode was added to
> > this path, which became the default some time in 2006. You would have
> > to go out of your way to find a guest which uses flipping mode these
> > days. I think this is the copying you are referring to; it's so long
> > ago that there was a distinction on this path that I'd forgotten all
> > about it until now.
> 
> I was not referring to page flipping at all. I was only talking about the 
> copies
> in the receive path. 

Sure, I was just explaining the background as to why everyone got
confused when you mentioned copying mode, since most people's minds went
to the other copying vs. !copying distinction.

> > As far as I can tell you are running with the zero-copy path. Only
> > mainline 2.6.39+ has anything different.
> 
> Again, I was only referring to the receive path! I assumed you were talking 
> about
> re-introducing zero-copy in the receive path (aka page flipping). 
> To be clear: 
> - xen.git#xen/stable-2.6.32.x uses copying in the RX path and 
> mapping (zero-copy) in the RX path. 
                             ^TX
> - Copying is used in both RX and TX path in 2.6.39+ for upstreaming.

Correct (as amended).

For completeness linux-2.6.18-xen.hg has both copying and flipping RX
paths but defaults to copying (you'd be hard pressed to find a guest
which uses flipping mode). The TX path is mapping (zero-copy).

In xen.git#xen/stable-2.6.32.x the code to support flipping RX path has
been completely removed.
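For quick reference, the mode matrix discussed above can be captured as
data. This is just a summary sketch of the thread's conclusions; the tree
labels are the names used in this message, not canonical identifiers:

```python
# RX/TX data-transfer modes of netback per dom0 kernel tree,
# as summarized in this thread.
netback_modes = {
    "linux-2.6.18-xen.hg": {
        "rx": "copying (flipping supported but non-default)",
        "tx": "mapping (zero-copy)",
    },
    "xen.git#xen/stable-2.6.32.x": {
        "rx": "copying (flipping code removed)",
        "tx": "mapping (zero-copy)",
    },
    "mainline 2.6.39+": {
        "rx": "copying",
        "tx": "copying",
    },
}

for tree, modes in netback_modes.items():
    print(f"{tree}: RX={modes['rx']}, TX={modes['tx']}")
```

In other words, the only zero-copy path left anywhere is TX mapping, and
mainline 2.6.39+ dropped even that in favour of copying on both paths.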

> > I think you need to go into detail about your test setup so we can all
> > get on the same page and stop confusing ourselves by guessing which
> > modes netback has available and is running in. Please can you describe
> > precisely which kernels you are running (tree URL and changeset as well
> > as the .config you are using). Please also describe your guest
> > configuration (kernels, cfg file, distro etc) and benchmark methodology
> > (e.g. netperf options).
> > 
> > I'd also be interested in seeing the actual numbers you are seeing,
> > alongside specifics of the test scenario which produced them.
> > 
> > I'm especially interested in the details of the experiment(s) where you
> > saw a 50% drop in throughput.
> 
> I agree. I plan to run the experiments again next week. I will get back
> to you with all the details.

Thanks.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
