
Re: [Xen-users] Storage Systems for Virtual Disk Images



Thanks, Stephan:

In addition to the information on the SCST Project page, Oak Alley IT has similar results:


Eric Pretorious


From: Stephan Seitz <s.seitz@xxxxxxxxxxxxxxxxxxxxxxxxxx>
To: Eric <epretorious@xxxxxxxxx>
Cc: Xen-users lists.xen.org <xen-users@xxxxxxxxxxxxx>
Sent: Friday, October 17, 2014 4:26 AM
Subject: Re: [Xen-users] Storage Systems for Virtual Disk Images

Hi,

>      * The SCSI Target Framework (STGT/TGT),
>      * The LIO target,
>      * The iSCSI Enterprise Target (IET), and
>      * The SCSI Target Subsystem (SCST).


In my experience, SCST is the most performant and feature-complete
target. Sadly, it's not in the upstream kernel, so you'd have to apply
two patches to your kernel to get the best performance.
The SCST website has a table comparing the targets:
http://scst.sourceforge.net/scstvsstgt.html
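
To give you an idea, applying those patches is the usual
patch-and-rebuild routine. A rough sketch, assuming kernel sources in
/usr/src/linux and an SCST checkout in /usr/src/scst; the exact patch
file names depend on your SCST release and kernel version, so check the
kernel/ directories of the SCST tree:

    # The patch names and paths below are assumptions; look up the real
    # ones in your SCST checkout before applying anything.
    cd /usr/src/linux
    patch -p1 < /usr/src/scst/scst/kernel/scst_exec_req_fifo-3.10.patch
    patch -p1 < /usr/src/scst/iscsi-scst/kernel/patches/put_page_callback-3.10.patch
    # Rebuild and boot the patched kernel, then build SCST against it.
    make && make modules_install && make install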

If you don't want to patch your kernel, or if you need native backend
support (e.g. for Ceph), you'd better try TGT. Just note that TGT's
userland operation is far slower than SCST's.
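
For example, a TGT built with RBD support can serve a Ceph image
directly. A minimal /etc/tgt/targets.conf sketch, assuming a pool named
'rbd' and an image named 'vm01' (both names made up here):

    # Requires tgt compiled with the rbd backing store (CEPH_RBD=1).
    # The IQN, pool and image names are placeholders.
    <target iqn.2014-10.org.example:vm01>
        driver iscsi
        bs-type rbd
        backing-store rbd/vm01
    </target>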

IET looks to me like some kind of reference implementation. It doesn't
perform well, and it has no feature that would outweigh the performance
hit.

I've never tried LIO, so I don't know much about it.


You mentioned Gluster, Ceph and Sheepdog. Well, Gluster is a clustered
*filesystem*. It has nice features for clustered CIFS and, if I recall
correctly, also for clustered NFS. But it doesn't really scale
horizontally; just read the papers :)
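
If you want to put images on Gluster anyway, consuming a volume is just
an ordinary mount. A sketch, with a hypothetical host 'gluster1' and
volume 'vmstore':

    # Host, volume and mount point are placeholders.
    mount -t glusterfs gluster1:/vmstore /var/lib/xen/images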

Ceph, on the other hand, is an object store that can also expose
virtual block devices (the RADOS Block Device, or RBD), which are fully
supported by recent qemu upstream. If you're able to switch from
qemu-traditional to qemu-upstream, it *should* work under Xen as well;
with KVM it's definitely working. If you've got at least three storage
nodes, I'd go for it. Ceph scales horizontally, limited ultimately by
your backend network infrastructure: in a real-world scenario, 10GbE or
better.
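
To make that concrete, a minimal sketch of creating an RBD image and
handing it to a qemu-upstream guest via qdisk; the pool name, image
name and size are assumptions, and the disk line follows the xl disk
configuration syntax:

    # Pool/image names and size (in MB) are placeholders.
    rbd create --size 10240 rbd/vm01-disk

    # In the xl domain config; backendtype=qdisk implies qemu-upstream.
    disk = [ 'vdev=xvda,access=rw,backendtype=qdisk,target=rbd:rbd/vm01-disk' ]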


cheers,

- Stephan






_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxx
http://lists.xen.org/xen-users

 

