Re: [Xen-users] iscsi vs nfs for xen VMs
On Wed, Jan 26, 2011 at 11:10:12PM +0100, Christian Zoffoli wrote:
> On 26/01/2011 22:24, James Harper wrote:
> >>
> >> iSCSI typically has quite a big overhead due to the protocol; FC, SAS,
> >> native InfiniBand and AoE have very low overhead.
> >>
> >
> > For iSCSI vs AoE, that isn't as true as you might think. TCP offload can
> > take care of a lot of the overhead. Any server class network adapter
> > these days should allow you to hand 60 KB packets to the network adapter
> > and it will take care of the segmentation, while AoE would be limited to
> > MTU sized packets. With AoE you need to checksum every packet yourself
> > while with iSCSI it is taken care of by the network adapter.
>
> the overhead is 10% on a gigabit link, and when you talk about resource
> overhead you also have to consider the CPU overhead on the storage side.
>
> If you check the datasheets of brands like EMC you can see that the same
> storage platform is sold in iSCSI and FC versions ...with the former you
> can attach less than half the servers you can with the latter.
>
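Just to put a rough number on the wire-level part of that "overhead", here is
a back-of-envelope sketch in Python (illustrative only; it assumes standard
1500-byte frames, IPv4 and TCP without options, and it says nothing about CPU
load on either end):

# Wire-level framing overhead for iSCSI over TCP on gigabit Ethernet,
# assuming standard 1500-byte frames.  Illustrative numbers only.
ETH_OVERHEAD = 8 + 12 + 14 + 4   # preamble+SFD, inter-frame gap, header, FCS
MTU          = 1500              # standard Ethernet payload size
IP_HDR       = 20                # IPv4, no options
TCP_HDR      = 20                # no options (add 12 for timestamps)
ISCSI_BHS    = 48                # iSCSI basic header segment, once per PDU

payload_per_frame = MTU - IP_HDR - TCP_HDR
wire_per_frame    = MTU + ETH_OVERHEAD
efficiency        = payload_per_frame / wire_per_frame

print("per-frame efficiency: %.1f%%" % (efficiency * 100))
print("usable payload on GigE: ~%.0f MB/s" % (1000 / 8 * efficiency))
print("BHS share of a 64 KB PDU: %.2f%%" % (ISCSI_BHS / (64 * 1024) * 100))

A 64 KB SCSI write carried in a single iSCSI PDU adds only one 48-byte basic
header segment on top of that, so the protocol headers themselves are a small
fraction of the total.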
The iSCSI vs FC difference you see in those EMC datasheets is mostly because of:
- EMC's crappy iSCSI implementation.
- EMC wants to sell you legacy FC stuff they've invested a lot in.
See dedicated iSCSI enterprise storage like EqualLogic... the way it's meant
to be.
Microsoft and Intel had some press releases around one year ago
demonstrating over one *million* IOPS using a single 10gbit Intel NIC,
on a *single* x86 box, using *software* iSCSI.
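The TSO and checksum offloads James mentions above are a big part of why
software iSCSI can be that cheap on the CPU. A quick way to see whether your
NIC actually has them enabled (a sketch only; it assumes a Linux dom0 with
ethtool installed and an interface named eth0, so adjust to your setup):

# Print the NIC offload settings relevant to the iSCSI vs AoE discussion.
import subprocess

def offload_status(iface="eth0"):
    out = subprocess.run(["ethtool", "-k", iface],
                         capture_output=True, text=True, check=True).stdout
    wanted = ("tcp-segmentation-offload", "tx-checksumming", "rx-checksumming")
    for line in out.splitlines():
        if line.split(":", 1)[0].strip() in wanted:
            print(line.strip())

if __name__ == "__main__":
    offload_status("eth0")

If any of those report "off", "ethtool -K eth0 tso on" (and the corresponding
tx/rx options for checksumming) turns them on where the driver and hardware
support it.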
> Every new entry-level storage array is based on standard hardware without any
> hardware acceleration ...for example, EMC AX arrays are simply Xeon servers.
>
Many of the high-end enterprise storage boxes are just normal (x86) hardware.
Take NetApp, for example.
The magic is all in *software*.
-- Pasi