
Re: [Xen-users] Linux Fiber or iSCSI SAN

On Tue, 06/11/2013 10:27 AM, Nick Khamis <symack@xxxxxxxxx> wrote:
> Hello Everyone,
> Was wondering what people are running these days, and how they compare
> to the 10,000 dollar SAN boxes. We are looking to build a fiber SAN using
> IET and GlusterFS, and was wondering what kind of luck people were having
> using this approach, or any for that matter.
> Kind Regards,
> Nick.

I've built a number of white box SANs using everything from OpenSolaris and 
COMSTAR, Open-E, OpenFiler, SCST, IET, etc., over both iSCSI and FC. 
I've settled on Ubuntu boxes running SCST on top of DRBD, or ESOS. 
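For anyone curious what an SCST setup like that looks like, here is a minimal sketch of an /etc/scst.conf exporting a DRBD device over an FC target. The device name, DRBD path, and target WWN are placeholders, and I'm assuming the QLogic target driver (qla2x00t); adjust for your HBA.

```
# /etc/scst.conf - minimal sketch (hypothetical names/WWN)
HANDLER vdisk_blockio {
    DEVICE disk0 {
        filename /dev/drbd0    # DRBD-replicated backing device
    }
}

TARGET_DRIVER qla2x00t {
    TARGET 21:00:00:24:ff:00:00:01 {   # your HBA port WWN here
        LUN 0 disk0
        enabled 1
    }
}
```

Load the config with scstadmin -config /etc/scst.conf once the scst and qla2x00t modules are loaded.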
From a performance perspective, I have a pretty large customer with two XCP 
pools running off a Dell MD3200F over 4Gb FC. To compare, I took a Dell 2970 
or something like that, stuck 8 Seagate 2.5" Constellation drives in it and a 
4Gb HBA, and installed ESOS on it. 
I never got around to finishing my testing, but the ESOS box can definitely 
keep up, and something like LSI CacheCade would really help bring it to a more 
enterprise level of performance with respect to random reads and writes. 
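If you want to run the same comparison yourself, a mixed random read/write fio job is the usual way to exercise a target. This is just a sketch; /dev/sdX is a placeholder for the LUN under test, and the mix/queue-depth numbers are arbitrary starting points.

```shell
#!/bin/sh
# Write a hypothetical fio job file for a 70/30 random read/write test.
cat > randrw.fio <<'EOF'
[global]
ioengine=libaio
direct=1
runtime=60
time_based

[randrw-test]
filename=/dev/sdX   ; placeholder - the LUN under test
rw=randrw
rwmixread=70
bs=4k
iodepth=32
EOF
# Then run it against the target:  fio randrw.fio
```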
Lastly, there is such an abundance of DIRT CHEAP, lightly used 4Gb FC 
equipment on the market today that I find it interesting that people still 
prefer iSCSI. iSCSI is good if you have 10GbE, which is still far too expensive 
per port IMO. However, you can get 2- and 4-port 4Gb FC HBAs on eBay for under 
100 bucks, and I am generally able to purchase fully loaded switches (Brocade 
200E) for somewhere in the neighborhood of 300 bucks each! 
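The back-of-envelope math on bandwidth helps explain the trade-off. Assuming 4Gb FC's 8b/10b encoding (roughly 400 MB/s usable per link) and 10GbE's 64b/66b encoding (roughly 1250 MB/s line rate):

```shell
#!/bin/sh
# Rough usable-throughput comparison (assumptions, not measurements):
# 4Gb FC: 4.25 Gbaud * 8/10 encoding ~= 400 MB/s per link.
per_link_mb=400
mpio_links=2
echo "Dual 4Gb FC via MPIO: $((per_link_mb * mpio_links)) MB/s"
# 10GbE: 64b/66b encoding, ~1250 MB/s raw line rate per port.
echo "Single 10GbE port:    1250 MB/s"
```

So two cheap 4Gb FC ports get you within shouting distance of one 10GbE port for a fraction of the per-port cost.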
MPIO with 2 FC ports from an initiator to a decent target can easily saturate 
the links on basic sequential read/write tests, not to mention the improved 
latency and access times for random I/O. 
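On the initiator side, spreading I/O across both ports takes a dm-multipath policy that uses all paths at once. A minimal sketch of /etc/multipath.conf, assuming an SCST blockio target (the SCST_BIO vendor string is what SCST reports for vdisk_blockio; check your target's actual inquiry strings with multipath -ll):

```
# /etc/multipath.conf - sketch for round-robin across two FC paths
defaults {
    user_friendly_names yes
}
devices {
    device {
        vendor  "SCST_BIO"
        product ".*"
        path_grouping_policy multibus        # all paths in one group
        path_selector "round-robin 0"        # alternate I/O across ports
    }
}
```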

Xen-users mailing list


