
Re: [Xen-devel] Load increase after memory upgrade (part2)



> The thing I was wondering about is whether my AMD IOMMU is actually doing
> something for PV guests.
> When booting with iommu=off (the machine has 8GB of memory, dom0 limited to
> 1024M) and just starting one domU with iommu=soft, with PCI passthrough of
> the USB PCI cards that have the USB video grabbers attached, I would expect
> to find some bounce buffering going on.
> 
>                 (HV_START_LOW 18446603336221196288)
>                 (FEATURES '!writable_page_tables|pae_pgdir_above_4gb')
>                 (VIRT_BASE 18446744071562067968)
>                 (GUEST_VERSION 2.6)
>                 (PADDR_OFFSET 0)
>                 (GUEST_OS linux)
>                 (HYPERCALL_PAGE 18446744071578849280)
>                 (LOADER generic)
>                 (SUSPEND_CANCEL 1)
>                 (PAE_MODE yes)
>                 (ENTRY 18446744071594476032)
>                 (XEN_VERSION xen-3.0)
> 
> Still i only see:
> 
> [   47.449072] Starting SWIOTLB debug thread.
> [   47.449090] swiotlb_start_thread: Go!
> [   47.449262] xen_swiotlb_start_thread: Go!
> [   52.449158] 0 [ehci_hcd 0000:0a:00.3] bounce: from:432(slow:0)to:1329 
> map:1756 unmap:1781 sync:0

There is bouncing there.
..
> [  172.449102] 0 [ehci_hcd 0000:0a:00.7] bounce: from:3839(slow:0)to:664 
> map:4486 unmap:4617 sync:0

And there.. 3839 of them.
> [  172.449123] 1 [ehci_hcd 0000:0a:00.7] bounce: from:0(slow:0)to:82 map:111 
> unmap:0 sync:0
> [  172.449130] 2 [ehci_hcd 0000:0a:00.7] bounce: from:0(slow:0)to:32 map:36 
> unmap:0 sync:0
> [  172.449170] SWIOTLB is 0% full
> [  177.449109] 0 [ehci_hcd 0000:0a:00.7] bounce: from:5348(slow:0)to:524 
> map:5834 unmap:5952 sync:0

And 5348 here!

So bounce-buffering is definitely happening with this guest.
.. snip..
> 
> And the devices do work ... so how does that work ...

Most (all?) drivers are written to work with bounce-buffering.
That has never been a problem.
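To illustrate (a minimal sketch, not code from any particular driver, with
'dev', 'buf' and 'len' as placeholder names): a driver that uses the
streaming DMA API does not even notice when swiotlb bounces for it, because
the copy happens inside dma_map_single()/dma_unmap_single():

    #include <linux/dma-mapping.h>

    static int example_submit(struct device *dev, void *buf, size_t len)
    {
            dma_addr_t handle;

            /* If 'buf' is not reachable by the device (e.g. above 4GB for a
             * 32-bit device, or not machine-contiguous under Xen), the DMA
             * layer transparently copies it into a swiotlb bounce buffer. */
            handle = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
            if (dma_mapping_error(dev, handle))
                    return -ENOMEM;

            /* ... hand 'handle' to the hardware and wait for completion ... */

            /* Unmap; for DMA_FROM_DEVICE this is where the data would be
             * copied back out of the bounce buffer. */
            dma_unmap_single(dev, handle, len, DMA_TO_DEVICE);
            return 0;
    }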

The issue, as I understand it, is that the DVB drivers allocate their buffers
in the 0->4GB range most (all?) of the time, so they never have to do
bounce-buffering.

The pv-ops kernel, on the other hand, ends up doing the bounce-buffering quite
frequently, which implies that there the DVB drivers end up allocating their
buffers above 4GB. That means we spend some CPU time (in the guest) copying
the memory from the >4GB region to the 0-4GB region (and vice versa).
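(For comparison, a rough sketch with placeholder names: a buffer obtained
through the coherent DMA API is guaranteed up front to be reachable by the
device, so it never needs bouncing:

    /* Hypothetical fragment: allocate a DMA buffer the device can reach. */
    if (dma_set_coherent_mask(dev, DMA_BIT_MASK(32)))
            return -EIO;

    buf = dma_alloc_coherent(dev, len, &handle, GFP_KERNEL);
    if (!buf)
            return -ENOMEM;
    /* ... use 'buf'/'handle' for DMA; no swiotlb copying is involved ... */
    dma_free_coherent(dev, len, buf, handle);

Drivers that instead kmalloc() their buffers and map them with the streaming
API are the ones that go through swiotlb when the pages sit above 4GB.)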

And I am not clear on why this is happening. Hence my thought was to run a
Xen-O-Linux kernel v2.6.3X and a pv-ops v2.6.3X (where X is the same) with the
same PCI device (the test would entail rebooting the box between the launches)
to confirm that the Xen-O-Linux kernel is doing something that the pv-ops one
is not.

So far, I haven't had much luck compiling a Xen-O-Linux v2.6.38 kernel,
so :-(

> 
> Thx for your explanation so far !

Sure thing.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
