
Re: [Xen-API] Alternative to Vastsky?



Hi guys,

Having worked with both InfiniBand and GlusterFS, I would probably advise against using GlusterFS as a VM backing store unless you have it set up to serve, say, an HA NFS share. The native GlusterFS protocol + FUSE + Xen combination has created nightmares in the past.

On that note, however, InfiniBand is definitely the technology to build any storage architecture on these days. IPoIB is generally fast enough, and with SDP you can get very good throughput. More interesting still is SRP, the SCSI RDMA Protocol, which delivers some very nice speeds; the only issue is the lack of configuration options in the current mainline Linux initiator.
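To illustrate the IPoIB point: because IPoIB presents the fabric as an ordinary IP interface (usually ib0), anything that speaks TCP runs over it unchanged, and SDP could historically be slipped underneath unmodified applications with an LD_PRELOAD of libsdp. A minimal sketch, with made-up addresses standing in for whatever you assign to ib0 on each node:

# IPoIB just looks like another IP interface, so ordinary TCP sockets
# (and therefore iSCSI, NFS, etc.) work over it unchanged.
import socket

IPOIB_LOCAL = "192.168.100.1"   # placeholder: address assigned to ib0 on this node
IPOIB_REMOTE = "192.168.100.2"  # placeholder: address assigned to ib0 on the peer

# Bind to the local IPoIB address so the traffic goes out over the IB fabric,
# then talk to any ordinary TCP service listening on the far side.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.bind((IPOIB_LOCAL, 0))
sock.connect((IPOIB_REMOTE, 5001))
sock.sendall(b"hello over IPoIB\n")
sock.close()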

A really good, cheap and easy-to-build solution is to use InfiniBand as your storage backbone: set up, say, a Solaris-based distribution (Nexenta, OpenIndiana) with ZFS support and then deliver your storage over SRP, or iSCSI over IPoIB. InfiniBand's lower latency paired with ZFS caching (better yet, SSDs in the ZFS pool) is definitely good value for money, easily outstripping any Ethernet- or FC-based network.
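To give a concrete idea of the ZFS side, here is a rough sketch (run as root on the Solaris-based box; the disk/SSD device names and the zvol are placeholders I made up, and the actual SRP or iSCSI export would be configured separately through COMSTAR):

# Sketch of a pool layout with SSDs for the intent log and read cache,
# plus a zvol to export as a VM backing store. Device names are placeholders.
import subprocess

DATA_DISKS = ["c0t0d0", "c0t1d0", "c0t2d0", "c0t3d0"]  # assumed spinning disks
SLOG_SSD = "c0t4d0"    # assumed SSD for the ZFS intent log (sync write latency)
CACHE_SSD = "c0t5d0"   # assumed SSD for L2ARC (read caching)

# Create a pool of two mirrors with an SSD log device and an SSD cache device.
subprocess.check_call(
    ["zpool", "create", "tank",
     "mirror", DATA_DISKS[0], DATA_DISKS[1],
     "mirror", DATA_DISKS[2], DATA_DISKS[3],
     "log", SLOG_SSD,
     "cache", CACHE_SSD])

# Carve out a zvol to present over SRP or iSCSI (over IPoIB) to the Xen hosts.
subprocess.check_call(["zfs", "create", "-V", "200G", "tank/xen-sr1"])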

On 20 April 2011 23:26, Tim Titley <tim@xxxxxxxxxx> wrote:
On 20/04/11 12:33, Henrik Andersson wrote:
Since there is no official support for InfiniBand in XCP, and some people have managed to get it working while others have not, I wanted to describe how far I got with it and hopefully, in time, turn this into some sort of howto. All those RDMA possibilities could come in handy, but I would be more than pleased if I could even use IPoIB.

The hardware and setup I have for this project:

A SuperMicro SuperServer (2026TT-H6IBQRF); each of the four nodes comes with an integrated Mellanox ConnectX QDR InfiniBand 40 Gbps controller with a QSFP connector. I have a cable between two nodes, and while testing I had two Ubuntu Server 10.04 installations; according to netperf I got unidirectional throughput of about 15000 Mbit/s over IPoIB between the two nodes.
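For comparison, a rough single-stream TCP check can be done with plain sockets; this is only a minimal sketch (the port number and the idea of pointing it at the receiver's ib0 address are my own choices), not a replacement for netperf or iperf:

# Rough single-stream TCP throughput check over IPoIB. Run "recv" on one node
# and "send <receiver ib0 address>" on the other.
import socket, sys, time

PORT = 5201
CHUNK = b"\0" * (1 << 20)  # 1 MiB send buffer

def receiver():
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("", PORT))
    srv.listen(1)
    conn, _ = srv.accept()
    total, start = 0, time.time()
    while True:
        data = conn.recv(1 << 20)
        if not data:
            break
        total += len(data)
    secs = time.time() - start
    print("%.0f Mbit/s" % (total * 8 / secs / 1e6))

def sender(host, seconds=10):
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.connect((host, PORT))
    end = time.time() + seconds
    while time.time() < end:
        sock.sendall(CHUNK)
    sock.close()

if __name__ == "__main__":
    # usage: python tput.py recv | python tput.py send <receiver ib0 address>
    sender(sys.argv[2]) if sys.argv[1] == "send" else receiver()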

Hmm, I wonder where the problem is. I would have thought that performance would be better than this. I can get near wire speed on 1 Gbit Ethernet using iperf, so 15 Gbit/s out of a theoretical data throughput of 32 Gbit/s is by no means perfect.
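For reference, the 32 Gbit figure comes from the link encoding; a quick back-of-the-envelope calculation (assuming a single 4X QDR link and ignoring IPoIB/TCP overhead):

# 4X QDR signals at 40 Gb/s, but 8b/10b encoding leaves 32 Gb/s of usable data rate.
signal_rate_gbps = 40.0
data_rate_gbps = signal_rate_gbps * 8 / 10      # 32.0
measured_gbps = 15.0                            # the netperf result above
print(measured_gbps / data_rate_gbps)           # ~0.47, i.e. under half of line rate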


Where I got with IB support is pretty sad. I had OFED compile and install all the drivers and such on the DDK, after I added some repositories and installed a bunch of the required packages. The thing is that after a reboot, the DDK VM just hangs on boot if I have PCI passthrough of the IB card to the DDK enabled. If I disable it, everything is fine, since there is then no IB card present for the DDK to try to use the drivers with.

I would imagine you would enable the same repos on dom0 to install the dependencies (only the libraries would be required, not the dev packages). Build the drivers on the DDK with DESTDIR changed so they install into their own tree (/opt/ibdrivers, for example), and rsync the tree over to the dom0 filesystem once the build has completed.
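Something along these lines, I would imagine; a rough sketch of the build-and-copy step (the staging path, the dom0 address, and the assumption that the driver tree's makefiles honour DESTDIR are all guesses):

# Build on the DDK into a staging tree, then merge that tree into dom0's
# root filesystem. Run from inside an already-configured driver source tree;
# assumes root ssh access to dom0.
import subprocess

STAGING = "/opt/ibdrivers"          # placeholder staging tree on the DDK
DOM0 = "root@192.168.0.10"          # placeholder dom0 address

# Install into the staging tree instead of / on the DDK.
subprocess.check_call(["make", "install", "DESTDIR=" + STAGING])

# Copy the staged files onto dom0, preserving permissions and layout.
subprocess.check_call(["rsync", "-a", STAGING + "/", DOM0 + ":/"])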

I have just ordered a couple of InfiniBand HBAs for my own testing purposes, so I will know more soon. Let us know how you get on - I can lend a hand (for whatever it's worth) but I have zero experience when it comes to building RPMs or managing yum repos.


Regards,

Tim




--
Kind regards,
Joseph.

Founder | Director
Orion Virtualisation Solutions
www.orionvm.com.au | Phone: 1300 56 99 52 | Mobile: 0428 754 846

_______________________________________________
xen-api mailing list
xen-api@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/mailman/listinfo/xen-api

 

