RE: [Xen-users] question to DRBD users/experts
> We're looking at using Xen on SLES11 SP1 servers in production at remote
> sites. We've been testing OCFS2 over dual-primary DRBD as one of the
> storage choices. It runs great, and is certainly more cost effective
> than putting SANs at our sites.
>
> Our big concern is split brain and how to handle it when it happens.
> If you have one large shared storage volume over DRBD with VMs running
> on either host, how do you handle a split brain situation from a
> recovery standpoint?
>
> One idea we had is to run multiple OCFS2/DRBD volumes, one for each VM,
> so we can pick and choose which way to recover in a split brain. That
> seems like it makes things a lot more complex, and we're not sure how
> successful it would even be.
>
> Are others using DRBD in production? What has been your experience?
>
> Any suggestions are appreciated. Our company standard is SLES, so we
> have to use tools in that distro.

I'm using DRBD. I was using LVM2 on a multiple-primary DRBD (e.g. one big
DRBD volume cut into slices with LVM), and when it worked it was fine, but
it would split brain occasionally (normally on startup after a crash, not
just spontaneously), and the CLVM daemon would hang from time to time for
no good reason.

Now I'm using DRBD on LVM on RAID0, and multiple-primary only where
necessary. Each DRBD is formed from an LV on each node. It's extra work to
create a new DRBD volume (create the LV on both nodes, then set up the
DRBD), but it's much less likely to go wrong during normal use - it hasn't
gone wrong yet after months of use!

A better setup, though, would be a SAN consisting of iSCSI on DRBD in
single-primary mode (using HA to handle failover if the primary fails),
with all the hosts using iSCSI. Unfortunately I don't have enough hardware
to make that work.
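
For what it's worth, the one-resource-per-VM idea is workable: a split
brain then only affects a single guest, and you can pick the survivor per
volume instead of making an all-or-nothing call for one big OCFS2
filesystem. A rough sketch of one such resource in DRBD 8.3-style syntax
(about what SLES11 SP1 ships) - the hostnames, devices and addresses below
are made up:

    resource vm01 {
      protocol C;
      startup {
        become-primary-on both;
      }
      net {
        allow-two-primaries;
        # conservative automatic split-brain policies; anything more
        # aggressive can silently throw away writes
        after-sb-0pri discard-zero-changes;
        after-sb-1pri discard-secondary;
        after-sb-2pri disconnect;
      }
      on nodeA {
        device    /dev/drbd1;
        disk      /dev/vg_xen/vm01;
        address   10.0.0.1:7789;
        meta-disk internal;
      }
      on nodeB {
        device    /dev/drbd1;
        disk      /dev/vg_xen/vm01;
        address   10.0.0.2:7789;
        meta-disk internal;
      }
    }

When a resource does split brain and DRBD disconnects rather than resolve
it automatically, recovery is per resource: decide whose data to keep,
then (from memory, so check the DRBD user's guide for your version):

    # on the node whose changes you are throwing away
    drbdadm secondary vm01
    drbdadm -- --discard-my-data connect vm01

    # on the node you are keeping (if it is sitting in StandAlone)
    drbdadm connect vm01

That's the "pick and choose" recovery you describe, one VM at a time.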
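
To make the DRBD-on-LVM workflow concrete, creating a new volume goes
roughly like this (volume group name and size are invented; the commands
are the usual DRBD 8.x ones, so verify against your docs):

    # on BOTH nodes: carve out the backing LV
    lvcreate -L 20G -n vm01 vg_xen

    # on BOTH nodes: add a matching resource stanza for vm01 under
    # /etc/drbd.conf (or /etc/drbd.d/), then initialise and bring it up
    drbdadm create-md vm01
    drbdadm up vm01

    # on ONE node only: start the initial sync
    drbdadm -- --overwrite-data-of-peer primary vm01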
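
To sketch the iSCSI-on-DRBD idea: one single-primary resource (no
allow-two-primaries), with Pacemaker promoting DRBD on one storage node
and moving an iSCSI target plus a service IP along with it. Something
along these lines in the SLES11 HAE crm shell - resource names, IQN and IP
are invented, and the exact agent parameters depend on your
resource-agents version:

    primitive p_drbd_san ocf:linbit:drbd \
        params drbd_resource="san" \
        op monitor interval="15s" role="Master" \
        op monitor interval="30s" role="Slave"
    ms ms_drbd_san p_drbd_san \
        meta master-max="1" clone-max="2" notify="true"
    primitive p_target ocf:heartbeat:iSCSITarget \
        params iqn="iqn.2011-05.com.example:san"
    primitive p_lun1 ocf:heartbeat:iSCSILogicalUnit \
        params target_iqn="iqn.2011-05.com.example:san" lun="1" path="/dev/drbd0"
    primitive p_ip_san ocf:heartbeat:IPaddr2 \
        params ip="10.0.0.100" cidr_netmask="24"
    group g_san p_target p_lun1 p_ip_san
    colocation col_san_on_drbd inf: g_san ms_drbd_san:Master
    order ord_drbd_before_san inf: ms_drbd_san:promote g_san:start

The Xen hosts then just log in to the service IP as ordinary iSCSI
initiators; only the two storage nodes run DRBD, and it stays
single-primary, so there is no dual-primary split brain to clean up.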

James

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users