[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

RE: [Xen-API] cross-pool migrate, with any kind of storage (shared or local)



On Mon, 18/07/2011 at 11:25 +0100, Dave Scott wrote:

> 
> It seems that DRBD can operate in 3 different synchronization modes:
> 
> 1. fully synchronous: writes are ACK'ed only when written to both disks
> 2. asynchronous: writes are ACK'ed when written to the primary disk (data is 
> somewhere in-flight to the secondary)
> 3. semi-synchronous: writes are ACK'ed when written to the primary disk and 
> in the memory (not disk) of the secondary
> 
> Apparently most people run it in fully synchronous mode over a fast LAN. 
> Provided we could get DRBD to flush outstanding updates and guarantee that 
> the two block devices are identical during the migration downtime when the 
> domain is shutdown, I guess we could use any of these methods. Although if 
> fully synchronous is the most common option, we may want to stick with that?

I think we can make this an option on the operation. If the user is not
sure about the quality of the network/disks, they can use protocol 'C'.
By default we could use 'B', and in some (you-know-what-you-are-doing)
cases we could allow 'A'.

Protocol 'B' seems fine, because /dev/drbd on the recipient already has
consistent data; it is just unknown whether that data has been flushed
to the underlying block device yet. For migration purposes this should
be fine, because it will be flushed shortly, and we can read/write a
consistent area.
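
For reference, the protocol choice is a single line in the DRBD resource
configuration. A minimal sketch, where the resource name, host names,
devices and addresses are all hypothetical placeholders:

```
resource r0 {
  protocol B;   # A = async, B = memory-synchronous, C = fully synchronous

  on host1 {
    device    /dev/drbd0;
    disk      /dev/sda7;
    address   10.0.0.1:7789;
    meta-disk internal;
  }
  on host2 {
    device    /dev/drbd0;
    disk      /dev/sda7;
    address   10.0.0.2:7789;
    meta-disk internal;
  }
}
```

So exposing the protocol as an option should not complicate the tooling
much: it only changes what gets written into the generated resource file.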


> > Anyway, it's not exactly a rainy weekend project, so if you want
> > consistent mirroring, there doesn't seem to be anything better than
> > DRBD
> > around the corner.
> 
> It did rain this weekend :) So I've half-written a python module for 
> configuring and controlling DRBD:
> 
> https://github.com/djs55/drbd-manager
> 
> It'll be interesting to see how this performs in practice. For some realistic 
> workloads I'd quite like to measure
> 1. total migration time
> 2. total migration downtime
> 3. ... effect on the guest during migration (somehow)
> 
> For (3) I would expect that continuous replication would slow down guest I/O 
> more during the migrate than explicit snapshot/copy (as if every I/O 
> performed a "mini snapshot/copy") but it would probably improve the downtime 
> (2), since there would be no final disk copy.


> What would you recommend for workloads / measurements?

From my experience of using DRBD with protocol 'C' and external
meta-data, in a good network environment the overhead is almost
negligible (I got 110 MB/s linear copy instead of 115 on a single SATA
drive, and 850 MB/s instead of 1.5 GB/s on a huge RAID0 array; both
tests were on an optical 10G network with LRO offload and super-jumbo
frames).

As a good workload for measurement I propose using mkfs. It actually
creates a very nasty load on the storage system (random and sequential
operations, with both small and big chunks being written; we test
writes because reads will happen from the local drive only and should
not slow the process down).

As far as I have tested, mkfs on a given disk takes an almost constant
time on the same hardware, so any difference in performance will be
easily noticeable via the time utility.
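
The timing loop is easy to script. A minimal Python sketch (the device
path, the choice of mkfs.ext4, and the helper names are my assumptions,
not part of drbd-manager):

```python
# Sketch: time repeated mkfs runs on a (hypothetical) DRBD device, so the
# wall-clock cost of replication shows up directly in the numbers.
import subprocess
import time


def timed_run(cmd):
    """Run a command, returning (elapsed_seconds, returncode)."""
    start = time.monotonic()
    rc = subprocess.call(cmd)
    return time.monotonic() - start, rc


def benchmark_mkfs(device, runs=3):
    """Return per-run wall-clock times for mkfs.ext4 on `device`.

    Since mkfs time is roughly constant for a given disk, any slowdown
    caused by DRBD replication is visible as a shift in these times.
    """
    times = []
    for _ in range(runs):
        elapsed, rc = timed_run(["mkfs.ext4", "-F", device])
        if rc != 0:
            raise RuntimeError("mkfs failed on %s" % device)
        times.append(elapsed)
    return times


# Example (DESTRUCTIVE -- only run against a scratch device):
#   print(benchmark_mkfs("/dev/drbd0"))
```

Running it once with replication enabled and once against the bare
backing device gives the comparison directly, without needing any
instrumentation inside the guest.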



_______________________________________________
xen-api mailing list
xen-api@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/mailman/listinfo/xen-api


 

