
Re: [Xen-devel] Signed GPLPV drivers available for download



Hi James,

Just FYI: since I updated my nearly one-year-old xen-unstable checkout, it 
turns out that my strange GPLPV performance issue has gone away: 
with gplpv_Vista2008x64_0.11.0.357.msi I get much better performance with 
GPLPV than without - regardless of PCI passthrough.

What I observed, though: I get almost exactly half the disk I/O performance 
in my PV-on-HVM guest compared to my Dom0. In Dom0 I read/write at about 
300 MB/s; in the DomU it is about 150 MB/s.
Also (not precisely measured) I noticed roughly double the RTT when pinging 
the DomU compared to the RTT when pinging Dom0.

I wonder what makes it perform at more or less exactly half of what Dom0 does?
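
In case anyone wants to reproduce the raw comparison outside of a Windows 
benchmark tool, something like the following gives comparable sequential 
read numbers from Dom0 (or any Linux guest). This is only a rough sketch: 
the device path is made up - point it at whatever LVM volume backs the 
guest - and the page cache must be dropped between runs:

    #!/usr/bin/env python
    # Rough sketch: time sequential reads from a block device so the same
    # number can be compared in Dom0 and in a DomU.  The device path is an
    # example only.  Drop the page cache first
    # (echo 3 > /proc/sys/vm/drop_caches) or a second run is served from RAM.
    import os, time

    DEV = "/dev/vg0/guestdisk"   # example path, adjust as needed
    CHUNK = 1 << 20              # 1 MB per read
    TOTAL = 1 << 30              # stop after 1 GB

    fd = os.open(DEV, os.O_RDONLY)
    done = 0
    start = time.time()
    while done < TOTAL:
        buf = os.read(fd, CHUNK)
        if not buf:
            break
        done += len(buf)
    elapsed = time.time() - start
    os.close(fd)
    print("%d MB in %.2f s -> %.1f MB/s" % (done >> 20, elapsed,
                                            (done >> 20) / elapsed))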

Greetings
Tobias

On Sunday, 8 April 2012, 15:44:37, Tobias Geiger wrote:
> Hi James,
> 
> I used "http://www.heise.de/download/as-ssd-benchmark.html" (sorry, a
> German website), but I remember similar results with the well-known
> "HD Tune Pro" tool...
> 
> DRBD is not involved here at all - sorry; the disk line refers to a phy LVM
> disk which sits directly on a block device (an SSD).
> 
> But I can test with the debug build if you still think this (whatever it
> is) is an issue here - tell me if you want to see that!
> 
> Greetings - and happy Easter!
> Tobias
> 
> On 06.04.2012 12:56, James Harper wrote:
> >> Glad you ask - just yesterday I re-tested the 357 version of the
> >> drivers under a Windows 7 64-bit HVM guest - here are the results:
> >> 
> >> (everything in MB/s)
> >> 
> >> Without GPLPV drivers, with PCI passthrough:
> >> 
> >> Seq. read:          311.8
> >> Seq. write:         106.3
> >> 4K read:             10.2
> >> 4K write:             9.3
> >> 4K-64thread read:    11.1
> >> 4K-64thread write:   10.9
> >> 
> >> 
> >> With GPLPV drivers, with PCI passthrough:
> >> 
> >> Seq. read:           98.8
> >> Seq. write:          31.6
> >> 4K read:             12.0
> >> 4K write:            19.3
> >> 4K-64thread read:    15.4
> >> 4K-64thread write:   29.6
> >> 
> >> 
> >> With GPLPV drivers, without PCI passthrough:
> >> 
> >> Seq. read:          104.4
> >> Seq. write:          85.2
> >> 4K read:             12.7
> >> 4K write:            20.4
> >> 4K-64thread read:    16.6
> >> 4K-64thread write:   32.6
> >> 
> >> 
> >> The strange thing is that even without PCI passthrough, the performance
> >> with GPLPV isn't much better than without GPLPV but with PCI passthrough
> >> - it is a bit better overall, but only a bit, and the sequential read
> >> performance is much worse...
> >> I haven't run a test without GPLPV and without passthrough at the same
> >> time - tell me if you want to see how that performs.
> >> 
> >> Greetings!
> >> Tobias
> >> 
> >> P.S.: I'm using a phy backend pointing to an LVM device on an SSD for
> >> these tests.
> > 
> > What are you testing this with? I just ran some tests with iometer under
> > 2008R2 and it reminded me of something... DRBD *hates* having
> > outstanding writes to the same block, and complains about it, so gplpv
> > serialises such requests (stalling the queue until the first request is
> > complete). This almost never happens in production but iometer sends
> > such requests frequently, which will significantly impact performance.
> > 
> > You can test with the debug build to see if this is happening with
> > whatever you are testing with - you'll see messages like "Concurrent
> > outstanding write detected". Be aware though that qemu will rate limit
> > writes to the debug log so if it happens a lot (like under iometer) it
> > will slow to a crawl.
> > 
> > James
> 
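
For anyone reading along: the serialisation James describes above boils down 
to roughly the following (a sketch of the idea only, written in Python for 
readability - the real gplpv code is a Windows driver and looks nothing 
like this):

    from collections import deque

    class WriteSerialisingQueue:
        """Sketch: hold back the queue while another write to the same
        block is still outstanding, as gplpv reportedly does for DRBD's
        benefit.  Illustrative only - not the actual driver logic."""

        def __init__(self, backend):
            self.backend = backend   # callable that issues a request
            self.in_flight = set()   # block numbers with a write outstanding
            self.stalled = deque()   # requests held back behind a conflict

        def submit(self, op, block):
            if self.stalled or (op == "write" and block in self.in_flight):
                # "Concurrent outstanding write detected": stall the queue
                # until the first write to that block has completed, so the
                # requests reach the backend serialised and in order.
                self.stalled.append((op, block))
                return
            self._issue(op, block)

        def _issue(self, op, block):
            if op == "write":
                self.in_flight.add(block)
            self.backend(op, block)

        def completed(self, op, block):
            # called when the backend finishes a request
            if op == "write":
                self.in_flight.discard(block)
            # drain the stalled queue in order; stop at the next conflict
            while self.stalled:
                op2, block2 = self.stalled[0]
                if op2 == "write" and block2 in self.in_flight:
                    break
                self.stalled.popleft()
                self._issue(op2, block2)

With logic like that, two back-to-back submit("write", 7) calls leave the 
second one (and everything behind it) stalled until completed("write", 7) 
fires - which would explain why iometer-style workloads that repeatedly hit 
the same block collapse to one write at a time while production workloads 
are unaffected.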


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel