
Re: [Xen-API] Poor xapi Performance Exporting/Importing VM's on XCP 1.6



Ok,

here is an update on the problem, which I evaluated in a nightly
session with the kind help of Trixboxer from the #xen-api IRC channel on
freenode. I think it's a duplicate of the problem another user posted
here:

http://www.gossamer-threads.com/lists/xen/api/258377

There are two factors causing the bad performance when importing, and even
more so when exporting, VMs via the CLI on a shared NFS SR. Here is our test setup:

2 pools:
one XCP 1.1, one XCP 1.6 - both installed from the vanilla XCP ISO,
both sharing the same iSCSI and management network (1 GbE)

SANs used:
1.1 pool SAN: HP Lefthand P4000 VSA
1.6 pool SAN: Easyraid Q16+

NAS used:
the same NFS NAS for 1.1 and 1.6 (dual Xeon E5310 @ 1.60GHz w/ EasyRaid
U320 SCSI RAID), GbE connected

The nodes used for xe vm-export/import had no domUs running, and the
monitoring showed very low network utilization on both the SAN and the
management network.

>>>>>>>>

1. From XCP 1.1 to XCP 1.6 the operating system default values for NFS
mounts might have changed. We found out that in 1.1 the default NFS
mount option is "async", while in 1.6 the default is "sync".
That causes very poor performance when writing to NFS NASes with
moderate IOPS performance.

The difference: ~10 MB/s on 1.1 compared to ~1.5 MB/s on 1.6.

To test it I unmounted the xapi-mounted NFS SR on a node of the 1.6
pool and remounted it with "async". Voila, I got write speeds
comparable to 1.1, nearly 10 MB/s.

Still, this is far from the performance I achieve when doing a dd with
bs=1M to the NFS NAS, where I get about 45 MB/s.

But using the async option is not a perfect solution to the problem,
because it increases the risk of losing data in case of a power loss or
a storage network problem, since the NFS server caches much more data
than with a sync NFS mount.

That leads us to Point 2...

2. I think that besides the NFS sync/async option, xapi could do much
better if the block size were increased to a much higher value.

I don't have enough insight into the internals of xapi, so I can't tell
whether anything special is passed to fwrite(), but if it uses the
default it's 4k, and that would match the performance values we got
when dd'ing with bs=4k.
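For comparison, these are the kind of dd runs I used against a file on
the NFS SR mountpoint (the path is a placeholder; conv=fsync just makes
sure the final flush is included in the timing):

# 4k blocks, roughly what a default-buffered fwrite() would produce
dd if=/dev/zero of=/var/run/sr-mount/<sr-uuid>/ddtest.img bs=4k count=1000000 conv=fsync

# 1M blocks, which is where I see ~45 MB/s
dd if=/dev/zero of=/var/run/sr-mount/<sr-uuid>/ddtest.img bs=1M count=4000 conv=fsync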

What can we do about it?

greetings,
Bastian


On 18.12.2012 16:13, Bastian Mäuser wrote:
> Hi Guys,
>
> I just put a little cluster into testing with XCP 1.6... The strangest
> thing I noticed is the really, really slow imports and exports via the
> xen-api...
>
> When migrating VMs from my 1.1 to my 1.6 cluster, I export them onto
> an NFS share from one pool and re-import them on the 1.6 pool.
>
> Export on the 1.1 cluster is really OK, I think about 200-300 Mbit/s
> on the network; maybe the NFS server is the limiting component here.
>
> Import/export to/from the 1.6 cluster is hell.
>
> Yesterday it took an astonishing 487 minutes to export a VM of roughly 40 GB:
>
> [root@xcp2 615824d1-1aeb-7054-cc07-417ba25c78f0]# time xe vm-export
> vm=rsa2 filename=rsa2-test-unc.xva compress=false
> Export succeeded
>
> real    487m33.284s
> user    0m25.810s
> sys     6m26.960s
>
> [root@xcp2 615824d1-1aeb-7054-cc07-417ba25c78f0]# ls -lah|grep rsa2
> -rw------- 1 nfsnobody nfsnobody  38G 18. Dez 04:50 rsa2-test-unc.xva
>
> Exporting that VM from the other cluster takes about 35 minutes.
>
> Network and hardware are fine... Inside a VM on the 1.6 cluster I get
> about 55 MB/s disk performance, and about 45 MB/s to the NFS server.
>
> I've spent quite some time now trying to find out where the bottleneck is...
>
> Any Ideas on that?
>
> thanks,
> Bastian
>


_______________________________________________
Xen-api mailing list
Xen-api@xxxxxxxxxxxxx
http://lists.xen.org/cgi-bin/mailman/listinfo/xen-api


 

