[Xen-users] Unexpected memory bandwidth overhead with Xen
- To: xen-users@xxxxxxxxxxxxxxxxxxx
- From: rohan nigam <locaterohan@xxxxxxxxx>
- Date: Mon, 1 Nov 2010 08:59:13 -0700 (PDT)
- Delivery-date: Mon, 01 Nov 2010 09:01:38 -0700
- List-id: Xen user discussion <xen-users.lists.xensource.com>
Hello Everyone,
While benchmarking a Dell node (dual-socket quad-core AMD Opteron 2354, 8 GB of memory, CentOS 5.4), I am seeing roughly 25-30% memory bandwidth overhead under Xen with the STREAM benchmark, run with 8 threads and built with three different compilers (gcc, pgi, and icc).
Below are the results with and without the Xen kernel:
  bare metal: 2.6.18-194.17.1.el5
  Xen:        2.6.18-194.17.1.el5xen
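In case the build details matter, each binary followed the usual STREAM recipe, roughly like this for gcc (pgcc and icc take their own OpenMP flags, -mp and -openmp; my exact optimization flags may have differed slightly):

    # build stream.c from the official STREAM distribution with OpenMP enabled
    gcc -O3 -fopenmp stream.c -o stream_gcc

    # one thread per core on the dual-socket quad-core node, then run
    export OMP_NUM_THREADS=8
    ./stream_gcc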
All figures are bandwidth in MB/s as reported by STREAM; Variation (%) = (native - xen) / native * 100.

Function | pgcc       | pgcc-xen   | Variation (%)
---------+------------+------------+--------------
Copy     | 17108.0186 | 11827.4346 |         30.87
Scale    | 16179.1128 | 11692.2545 |         27.73
Add      | 16706.0970 | 12212.2696 |         26.90
Triad    | 17211.5520 | 12936.0666 |         24.84

Function | icc        | icc-xen    | Variation (%)
---------+------------+------------+--------------
Copy     | 16731.7459 | 11743.8163 |         29.81
Scale    | 16026.4231 | 11560.9261 |         27.86
Add      | 16656.4325 | 12077.4699 |         27.49
Triad    | 16701.5193 | 12021.5076 |         28.02

Function | gcc        | gcc-xen    | Variation (%)
---------+------------+------------+--------------
Copy     | 11762.6266 |  8810.7558 |         25.10
Scale    | 11499.5329 |  8611.5352 |         25.11
Add      | 12399.1248 |  9388.1446 |         24.28
Triad    | 12607.9727 |  9531.4749 |         24.40
I know newer kernels are available, but I have never heard of Xen imposing such a large memory bandwidth penalty. Does anyone know the reason?
HPL, on the other hand, showed a reasonable overhead on the same setup: the Gflops count was only about 3.9-4% lower under Xen.
Does Xen generally behave like this with memory-intensive applications?
Any comments or suggestions would really help.
Thanks, Rohan
_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users