
RE: [Xen-users] XEN - networking and performance


  • To: <xen-users@xxxxxxxxxxxxxxxxxxx>
  • From: <admin@xxxxxxxxxxx>
  • Date: Fri, 7 Oct 2011 20:27:41 -0500
  • Delivery-date: Fri, 07 Oct 2011 18:29:12 -0700
  • Importance: Normal
  • List-id: Xen user discussion <xen-users.lists.xensource.com>
  • Thread-index: AQHMhFAN5rOzJb1ggECVvcySiYiOp5Vv8jcA///SucCAAEirAIABIJaQgAB7G7A=

We've used SSD drives as caching drives (L2ARC) in ZFS SAN and NAS
solutions.  It's a cost-effective way to dramatically improve the
performance of the ZFS systems.  We usually toss 300GB of SSD into the
storage systems for caching.  SSD is cheap compared to RAM.

Here is a link:
http://www.zfsbuild.com/2010/07/30/testing-the-l2arc/
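For anyone who hasn't set one up, attaching the cache device is a
one-liner.  A minimal sketch (the pool name "tank" and the device names
are placeholders, not from our actual systems):

    # add two SSDs as L2ARC cache devices to an existing pool
    zpool add tank cache c1t5d0 c1t6d0

    # the SSDs appear under a "cache" section in the pool layout
    zpool status tank

    # once reads start flowing, this shows how much cache is in use
    zpool iostat -v tank

The L2ARC fills gradually as reads come in, so give it time to warm up
before judging the hit rates.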



-----Original Message-----
From: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
[mailto:xen-users-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of Jeff Sturm
Sent: Friday, October 07, 2011 1:13 PM
To: Simon Hobson; xen-users@xxxxxxxxxxxxxxxxxxx
Subject: RE: [Xen-users] XEN - networking and performance

> -----Original Message-----
> From: xen-users-bounces@xxxxxxxxxxxxxxxxxxx [mailto:xen-users-
> bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of Simon Hobson
> Sent: Thursday, October 06, 2011 4:51 PM
>
> Jeff Sturm wrote:
> 
> >One of the traps we've run into when virtualizing moderately I/O-heavy
> >hosts is not sizing our disk arrays right.  Not in terms of capacity
> >(terabytes) but in spindles.  If each physical host normally has 4
> >dedicated disks, for example, virtualizing 8 of these as domUs
> >attached to a disk array with 16 drives effectively cuts that ratio
> >from 4:1 down to 2:1.  Latency goes up, throughput goes down.
> 
> Not only that, but you also guarantee that the I/O is across different
> areas of the disk (different partitions/logical volumes) and so you
> also virtually guarantee a lot more seek activity.

Very true, yes.  In such an environment, sequential disk performance
means very little.  You need good random I/O throughput, and that's hard
to get with mechanical disks beyond a few thousand IOPS.  15k disks
help, and a larger chassis with more disks helps, but that's just
throwing $$$ at the problem and doesn't really break through the IOPS
barrier.
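(Back-of-the-envelope, assuming the commonly cited figure of roughly
180 random IOPS per 15k-RPM drive -- an estimate, not a measurement:

    16 drives x ~180 IOPS = ~2,900 random IOPS for the whole array,
    shared across 8 consolidated hosts = ~360 IOPS per host,
    versus 4 x ~180 = ~720 IOPS per host on dedicated spindles.

A single SSD can deliver tens of thousands of random IOPS, which is why
it breaks through a barrier that extra spindles only chip away at.)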

Anyone tried SSD with good results?  I'm sure capacity requirements can make
it cost-prohibitive for many.

Jeff



_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users



