
Re: [Xen-users] Linux Fiber or iSCSI SAN

On Tue, Jun 11, 2013 at 11:30 AM, Nick Khamis <symack@xxxxxxxxx> wrote:
Hello Everyone,

I think I speak for everyone when I say that we are really interested in knowing what people are
using in deployment. This would be active/active, replicated, block-level storage solutions at the:

NAS Level: FreeNAS, OpenFiler (I know it's not Linux), IET
FS Level: ZFS, OCFS/2, GFS/2, GlusterFS
Replication Level: DRBD vs GlusterFS
Cluster Level: OpenAIS with Pacemaker etc...
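To make the replication level concrete: at the block layer, DRBD mirrors a local disk between two nodes. A minimal resource definition might look something like the sketch below (node names, disk paths, and addresses are all made up for illustration; an active/active setup additionally needs allow-two-primaries, paired with a cluster filesystem such as OCFS2 or GFS2 on top):

```
resource r0 {
    protocol C;                # synchronous replication
    net {
        allow-two-primaries;   # needed for active/active; requires a cluster FS on top
    }
    on node1 {
        device    /dev/drbd0;
        disk      /dev/sdb1;   # hypothetical backing disk
        address   10.0.0.1:7789;
        meta-disk internal;
    }
    on node2 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.0.0.2:7789;
        meta-disk internal;
    }
}
```

With a resource like this, Pacemaker/OpenAIS at the cluster level would then manage promotion of the DRBD resource and mounting of the cluster filesystem.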

Our hope is for an educated breakdown (i.e., comparisons, benefits, limitations) of different setups, as opposed to
a war of words over which NAS solution is better than the others. Comparing black boxes at a performance level would
also be interesting. Pricing, not so much, since we already know they cost an arm and a leg.

Kind Regards,


There was actually one more level I left out:

Hardware Level: PCIe bus (x8/x16, Gen2, etc.), interface cards (FC and RJ45), SAS drives (Seagate vs. WD)

I hope this thread takes off, and individuals interested in the same topic can get some really valuable info.

On a side note, an interesting comment I received was on the risks associated with such a custom build, as
well as the lack of flexibility in some sense. We would not build a whitebox for this setup, and would advise against it
as well. Our approach will be to purchase an IBM, SuperMicro, or similar box with sufficient bays, processing power, and
PCIe lanes. It would also be good to discuss what has not worked in the past, and how some of the replication-level
technologies have flopped. For example, how FreeNAS has limited support for clustering, or how highly available
OpenFiler instances scale with very large storage volumes, etc.

There is also SCST, which I've heard about before but have not dug into very much. Anyone know how it can
fit into a SAN?
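For what it's worth, SCST is an in-kernel SCSI target framework, so in a SAN it would sit at the same layer as IET: exporting local (or DRBD-replicated) block devices to initiators over iSCSI or FC. As a rough sketch only, an scst.conf fragment exporting a block device over iSCSI might look like this (the device path and IQN are placeholders, not a tested config):

```
HANDLER vdisk_blockio {
    DEVICE disk0 {
        filename /dev/drbd0    # hypothetical backing device
    }
}

TARGET_DRIVER iscsi {
    enabled 1
    TARGET iqn.2013-06.org.example:san.disk0 {
        LUN 0 disk0
        enabled 1
    }
}
```

In other words, SCST would replace IET at the NAS/target level of the stack above, not the replication or cluster levels.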


Xen-users mailing list


