RE: [Xen-users] RAID10 Array

- To: "Adi Kriegisch" <kriegisch@xxxxxxxx>, <Xen-users@xxxxxxxxxxxxxxxxxxx>
- From: "Jonathan Tripathy" <jonnyt@xxxxxxxxxxx>
- Date: Thu, 17 Jun 2010 09:08:51 +0100
- Cc:
- Delivery-date: Thu, 17 Jun 2010 01:14:57 -0700
- List-id: Xen user discussion <xen-users.lists.xensource.com>
- Thread-index: AcsN76y+VgwRzniDTgyCobsB9OxZ6wABKVV/
- Thread-topic: [Xen-users] RAID10 Array

From: Adi Kriegisch [mailto:kriegisch@xxxxxxxx]
Sent: Thu 17/06/2010 08:32
To: Jonathan Tripathy
Cc: Xen-users@xxxxxxxxxxxxxxxxxxx
Subject: Re: [Xen-users] RAID10 Array

Hi!
> I have 3 RAID ideas, and I'd appreciate some advice on which would be
> better for lots of VMs for customers.
>
> My storage server will be able to hold 16 disks. I am going to export 1
> iSCSI LUN to each Xen node. 6 nodes will connect to one storage server,
> so that's 6 LUNs per server of equal size. The server will connect to a
> switch using quad-port bonded NICs (802.3ad), and each Xen node will
> connect to the switch using dual-port bonded NICs.

hmmm... with one LUN per server you will lose the ability to do live
migration -- or do I miss something? Some people mention problems with
bonding more than two NICs for iSCSI, as the reordering of commands/packets
adds tremendously to latency and load. If you want high performance and
want to avoid latency issues, you might want to choose ATA-over-Ethernet.
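
For reference, a bond that keeps each iSCSI session on a single link (so
packets within a session are not reordered) looks roughly like this on a
Debian-style system -- a minimal sketch only; the interface names, address
and hash policy are assumptions to adjust for your own setup:

  # /etc/network/interfaces fragment (requires the ifenslave package)
  auto bond0
  iface bond0 inet static
      address 192.168.10.20
      netmask 255.255.255.0
      bond-slaves eth0 eth1
      bond-mode 802.3ad
      bond-miimon 100
      bond-xmit-hash-policy layer3+4

With layer3+4 hashing a single iSCSI connection never exceeds the speed of
one physical NIC, which is the usual trade-off with 802.3ad bonding.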

> I'd appreciate any thoughts or ideas on which would be best for
> throughput/IOPS.

Your server is a Linux box exporting the RAIDs to your Xen servers? Then
just take fio and do some benchmarking. If you're using software RAID, then
you might want to add RAID5 to the equation. I'd suggest measuring the
performance of your RAID system with various configurations and then
choosing which level of isolation gives the best performance.
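
A hypothetical fio job for that kind of comparison might look like this
(device name, block size and queue depth are placeholders -- and point it
at a scratch LUN only, since it writes to the device):

  # random 70/30 read/write mix, 4k blocks, direct I/O
  [global]
  ioengine=libaio
  direct=1
  runtime=120
  group_reporting

  [randrw-test]
  filename=/dev/sdX
  rw=randrw
  rwmixread=70
  bs=4k
  iodepth=32
  numjobs=4

Run the same job file against each candidate array layout and compare the
reported IOPS and latency figures.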

I don't think a setup with 6 hot spare disks is necessary -- at least not
when they're connected to the same server. Depending on the quality of your
disks, 1 to 3 should suffice. With e.g. 1 hot spare in the server plus some
cold spares in your office, you should be able to survive a broken hard
disk. You should also "smartctl -t long" your disks frequently (i.e. once
per week) and do a more or less permanent resync of your RAID to be able to
detect disk errors early. (The worst-case scenario is to never check your
disks -- then a disk breaks and is replaced by a hot/cold spare, and the
RAID resync fails other disks in your array, just because the bad blocks
are already there...)
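
As a rough sketch of that routine, assuming Linux software RAID on md0 and
member disks sda through sdp (adjust the names; a hardware controller like
a MegaRAID would instead use its own patrol-read/consistency-check feature):

  # weekly long SMART self-test on every member disk, e.g. from cron
  for d in /dev/sd[a-p]; do smartctl -t long "$d"; done

  # read the results later with: smartctl -l selftest /dev/sda

  # trigger a background verify ("scrub") of the md array
  echo check > /sys/block/md0/md/sync_action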

Hope this helps

--
Adi
------------------------------------------------------------------------------------------------------------------- 
Hi Adi,  
Thanks for the advice! 
  
The RAID controller I'm planning to use is the MegaRAID SAS 9260-4i. The storage server will be built by Broadberry, so 
it will be using Supermicro kit. 
  
As for the OS on the server, I was actually thinking of using Windows
Storage Server, though maybe that's a bad idea? You're correct about live
migration; I may implement some sort of clustered filesystem over iSCSI
later, but the main issue at the minute is the RAID array.
  
I've heard the same things about bonding 2 vs 4 
NICs as well. 
  
Currently, I'm leaning towards the RAID10 array with 14 disks and 2 hot
spares.
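
For comparison, the software-RAID (mdadm) equivalent of that layout -- a
sketch only, with placeholder device names; the MegaRAID 9260-4i itself
would be configured through its own WebBIOS/MegaCLI tools -- would be
something like:

  # 14 active disks in RAID10 plus 2 hot spares, 16 drives total
  mdadm --create /dev/md0 --level=10 --raid-devices=14 \
        --spare-devices=2 /dev/sd[b-q]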
  
Thanks 
  
Jonathan 
  
_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users 
 
    