Re: [Xen-users] XEN HA Cluster with LVM fencing and live migration ? The right way ?
On 2 August 2012 18:19, Herve Roux <vevroux@xxxxxxx> wrote:
> Hi,
>
> I am trying to build a rock-solid Xen high-availability cluster. The
> platform is SLES 11 SP1 running on two HP DL585 servers, both connected
> through fibre-channel HBAs to the SAN (HP EVA).
>
> Xen is running smoothly and I'm even amazed by the live-migration
> performance (this is the first time I have had the chance to try it in
> such a nice environment).
>
> Xen apart, the SLES Heartbeat cluster is running fine as well, and the
> two interact nicely.
>
> Where I have some doubts is the storage layout. I have tried several
> configurations, but each time I have to compromise. And that is the
> problem: I don't like to compromise ;)
>
> First I tried one SAN LUN per guest (using the multipath dm device
> directly as a phy: disk). This works nicely: live migration works fine
> and setup is easy, even if multipath.conf can get a bit fussy as the
> number of LUNs grows. But there is no fencing at all: I can start the
> same VM on both nodes, and this is BAD!
>
> Then I tried cLVM on top of multipath. I managed to get cLVM up and
> running pretty easily in the cluster environment.
>
> From here, two ways of thinking:
>
> 1. One big storage repository on the SAN, split into LVs that I can
> use for my VMs. A huge step forward in flexibility, no need to
> reconfigure the SAN each time... Still, with this solution the SR VG
> is open in shared mode between the nodes and I have no low-level lock
> on the storage. I can start a VM twice, and this is bad, bad, bad...
>
> 2. To provide fencing at the LVM level I can take another approach:
> one VG per volume, opened in exclusive mode. The volume will be active
> on one node at a time and I have no risk of data corruption. The
> cluster will be in charge of moving the volume when migrating a VM
> from one node to the other.
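The exclusive-activation scheme in point 2 above can be sketched with plain cLVM commands. (The device path, VG, and LV names here are made-up examples, not taken from the thread; this assumes clvmd is running on both nodes.)

```shell
# Create one clustered volume group per guest on its multipath device
# (names and paths are hypothetical examples).
pvcreate /dev/mapper/mpath_vm01
vgcreate -c y vg_vm01 /dev/mapper/mpath_vm01   # -c y: clustered VG
lvcreate -n root -l 100%FREE vg_vm01

# On the node that should run the guest: exclusive activation.
# With clvmd running, -aey takes a cluster-wide exclusive lock,
# so an activation attempt on the second node fails.
vgchange -aey vg_vm01

# Deactivate before (or as part of) moving the guest to the other node:
vgchange -an vg_vm01
```

The LV is then used as a `phy:` disk in the domU config, e.g. `disk = [ 'phy:/dev/vg_vm01/root,xvda,w' ]`. As the poster goes on to note, this protects the data but breaks live migration: the destination node cannot activate the VG while the source still holds the exclusive lock.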
> But here the live migration is not working, and this S...
>
> I was wondering what approach others have taken and whether there is
> something I'm missing.
>
> I have looked into the Xen locking system, but from my point of view
> the risk of deadlock is not ideal either. A DLM-based Xen locking
> system would be a good one; I don't know if any work has been done in
> this area?
>
> Thanks in advance
>
> Herve
>
> _______________________________________________
> Xen-users mailing list
> Xen-users@xxxxxxxxxxxxx
> http://lists.xen.org/xen-users

Personally I would steer clear of cLVM. If your SAN provides nice
programmatic control, you can possibly integrate its fencing ability
into the Pacemaker/Linux-HA stack using a custom OCF resource agent
(usually not a huge amount of work). This would give you everything
you want, but it is mainly determined by how "open" your SAN is.

Alternatively, you can run with no storage-layer fencing and make sure
you have proper STONITH in play. This is pretty easy with the
Pacemaker/Corosync stack and can be done in a lot of ways. If you are
pretty sure of your stack's stability (no deadlocks, kernel oopses,
etc.), then you can just use SSH STONITH. However, if you want to be
really damn sure, then you can use an IP PDU, USB power-off, etc. It's
all about how sure you want to be and how much money/time you have.

Joseph.

--
CTO | Orion Virtualisation Solutions | www.orionvm.com.au
Phone: 1300 56 99 52 | Mobile: 0428 754 846

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxx
http://lists.xen.org/xen-users
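The custom-OCF route Joseph suggests usually means wrapping the array's management CLI in a small resource agent so that a LUN is only ever presented to one node. A minimal skeleton follows; the `evamanage` command and its flags are invented placeholders (no such HP EVA tool is named in the thread), so substitute whatever programmatic interface your SAN actually exposes.

```shell
#!/bin/sh
# Sketch of a custom OCF resource agent that presents/unpresents a SAN
# LUN to the local host. "evamanage" is a hypothetical array CLI.

: "${OCF_ROOT:=/usr/lib/ocf}"
. "${OCF_ROOT}/lib/heartbeat/ocf-shellfuncs"

lun_start()   { evamanage present   --lun "$OCF_RESKEY_lun" --host "$(hostname)"; }
lun_stop()    { evamanage unpresent --lun "$OCF_RESKEY_lun" --host "$(hostname)"; }
lun_monitor() {
    # Report "running" only if the LUN is currently mapped to this host.
    evamanage status --lun "$OCF_RESKEY_lun" --host "$(hostname)" \
        && return "$OCF_SUCCESS" || return "$OCF_NOT_RUNNING"
}

case "$1" in
    start)   lun_start   && exit "$OCF_SUCCESS"; exit "$OCF_ERR_GENERIC" ;;
    stop)    lun_stop    && exit "$OCF_SUCCESS"; exit "$OCF_ERR_GENERIC" ;;
    monitor) lun_monitor; exit $? ;;
    meta-data)
        cat <<'EOF'
<?xml version="1.0"?>
<resource-agent name="SANLun">
  <version>0.1</version>
  <parameters>
    <parameter name="lun" required="1"><content type="string"/></parameter>
  </parameters>
  <actions>
    <action name="start" timeout="60s"/>
    <action name="stop" timeout="60s"/>
    <action name="monitor" timeout="30s" interval="30s"/>
    <action name="meta-data" timeout="5s"/>
  </actions>
</resource-agent>
EOF
        exit "$OCF_SUCCESS" ;;
    *) exit "$OCF_ERR_UNIMPLEMENTED" ;;
esac
```

Grouping or colocating this agent with the Xen resource makes the cluster move the LUN presentation along with the guest, which is where the live-migration ordering gets delicate and array-specific.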
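The SSH STONITH option Joseph describes can be configured in the crm shell roughly as follows (node names are placeholders; `external/ssh` is test-grade only, since it cannot shoot a node whose kernel is hung, which is exactly the point of the PDU/IPMI alternatives he mentions):

```shell
# Enable fencing cluster-wide and add one STONITH device per node.
crm configure property stonith-enabled=true
crm configure primitive st-node1 stonith:external/ssh \
    params hostlist="node1" op monitor interval="60s"
crm configure primitive st-node2 stonith:external/ssh \
    params hostlist="node2" op monitor interval="60s"
# Keep each STONITH resource off the node it is meant to shoot.
crm configure location l-st-node1 st-node1 -inf: node1
crm configure location l-st-node2 st-node2 -inf: node2
```

For production, the same layout applies with a power-based plugin such as `external/ipmi` or a PDU agent in place of `external/ssh`.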