
Re: [Xen-devel] Dom0 physical networking/swiotlb/something issue in 3.7-rc1

On Fri, 2012-10-12 at 12:59 +0100, Konrad Rzeszutek Wilk wrote:
> > I suspect that the issue is that the compound pages allocated in this
> > way are not backed by contiguous mfns and so things fall apart when the
> > driver tries to do DMA.
> So this should also be easily reproduced on bare metal with 'iommu=soft' then.

Does that cause the frames backing the page to become non-contiguous in
"machine" memory?

> > However I don't understand why the swiotlb is not fixing this up
> > successfully? The tg3 driver seems to use pci_map_single on this data.
> > Any thoughts? Perhaps the swiotlb (either generically or in the Xen
> > backend) doesn't correctly handle compound pages?
> The assumption is that it is just a page. I am surprised that the other
> IOMMUs aren't hitting this as well - ah, that is because they do handle
> a virtual address of more than one PAGE_SIZE..
> > 
> > Ideally we would also fix this at the point of allocation to avoid the
> > bouncing -- I suppose that would involve using the DMA API in
> > netdev_alloc_frag?
> Using pci_alloc_coherent would do it.. but
> > 
> > We have a, sort of, similar situation in the block layer which is solved
> > via BIOVEC_PHYS_MERGEABLE. Sadly I don't think anything similar can
> > easily be retrofitted to the net drivers without changing every single
> > one.
> .. I think the right way would be to fix the SWIOTLB. And since I am now
> officially the maintainer of said subsystem you have come to the right
> person!
> What is the easiest way of reproducing this? Just doing large amount
> of netperf/netserver traffic both ways?

I'm seeing ~75% packet loss from ping, and an scp from the dom0 to another
host was measuring hundreds of KB/s instead of a few MB/s.

Fixing this only at the swiotlb layer is going to cause lots of bouncing.


Xen-devel mailing list