
Re: [Xen-users] Linux Fiber or iSCSI SAN

On Tue, 11 Jun 2013 14:49:24 +0000, "mitch@xxxxxxxxxxxx" <mitch@xxxxxxxxxxxx> wrote:
Hey Nick - just curious and not trying to split hairs - but what $10000 san?

I've seen and built what I'd call a "NAS" in that price range, but in
my mind (maybe not by the formal definition, though), a SAN is more -
like a management GUI, the ability to manage snapshots, backups, etc.,
often managed across multiple chassis - it's more of a multi-device
network, isn't it?


NAS works on FS level (e.g. NFS, CIFS, GlusterFS).

SAN works on block device level (iSCSI, AoE, at a push DRBD, NBD).
Think network-attached disk as opposed to network-attached file.
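To make the distinction concrete, here is a sketch of what the client
side looks like in each case. The server address, export path, and
iSCSI target name (IQN) are made-up placeholders, not anything from
this thread:

```shell
# NAS: the server exports a *filesystem*; the client mounts it and
# only ever sees files, never the underlying block device.
mount -t nfs 192.168.1.10:/export/vmstore /mnt/vmstore

# SAN: the server exports a *block device*; after logging in, the
# client sees what looks like a local disk (e.g. /dev/sdb), which it
# can partition and format with whatever filesystem it likes.
iscsiadm -m discovery -t sendtargets -p 192.168.1.10
iscsiadm -m node -T iqn.2013-06.example:vmstore -p 192.168.1.10 --login
mkfs.ext4 /dev/sdb        # format the newly appeared disk
mount /dev/sdb /mnt/vmstore
```

Note that in the SAN case the choice and layout of the filesystem is
entirely the client's business - which is exactly why the alignment
issues discussed below become the client admin's problem.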

None of the features you mentioned are specific to a NAS vs. SAN - you
can get either with them (or without them).

I guess the line is blurring though... We use some Linux boxes
running DRBD - anecdotally, lots of people seem to be going that
route. But lots of small details seem to affect performance.

No more so than on any storage system, DAS included. Most admins,
including experienced and competent ones, have never thought about
the implications of alignment of structures throughout the storage
stack. The issue's profile has been raised slightly in recent years
with the introduction of disks with 4KB sectors, but even that only
covers one particular layer of the stack, whereas similar issues
apply throughout all the layers (e.g. RAID below the file system,
the application above, and sometimes other factors as well).
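The arithmetic behind alignment is simple modular division. The sketch
below uses a 4 KiB physical sector and a 64 KiB RAID chunk as example
granularities (assumed values for illustration, not from this thread),
and compares the old MS-DOS partition start at LBA 63 with the modern
1 MiB default:

```python
def is_aligned(offset_bytes: int, granularity_bytes: int) -> bool:
    """True if a byte offset falls on a granularity boundary."""
    return offset_bytes % granularity_bytes == 0

SECTOR = 4096        # 4 KiB physical sector ("Advanced Format" disk)
STRIPE = 64 * 1024   # 64 KiB RAID chunk size (example value)

legacy_start = 63 * 512     # old MS-DOS default: LBA 63, 512-byte units
modern_start = 2048 * 512   # modern default: 1 MiB (LBA 2048)

# LBA 63 is misaligned for both layers, so every filesystem block
# written there straddles two physical sectors (and RAID chunks),
# turning one logical write into read-modify-write cycles below.
print(is_aligned(legacy_start, SECTOR), is_aligned(legacy_start, STRIPE))  # False False
print(is_aligned(modern_start, SECTOR), is_aligned(modern_start, STRIPE))  # True True
```

The 1 MiB start works because it is a common multiple of every
granularity in sight; the same modular check has to hold at each
layer of the stack, not just one.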

Have a read here to get the basic gist of it:

In different setups (e.g. ZFS), some of this applies
differently - the only way to get it right is to actually
understand what is going on in every layer - i.e. you have
to be a "full stack engineer".


Xen-users mailing list


