RE: [Xen-API] cross-pool migrate, with any kind of storage (shared or local)
If there is a plan to integrate DRBD, it would be a good idea to use it for HA, too. DRBD could be the key element for storing data on two repositories at the same time, synchronously or not. A better cross-pool migration process might be to set up a synchronized image across two storage repositories and then do a manual failover to the destination repository. DRBD with asynchronous writes would also be suitable for backups: DRBD tracks changes in a bitmap and transmits only the changed blocks when the secondary reconnects. This way you can take daily "backups" (with no guarantee of a consistent filesystem) by attaching the secondaries at night and disconnecting them once the disks are in sync.

-----Original Message-----
From: xen-api-bounces@xxxxxxxxxxxxxxxxxxx [mailto:xen-api-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of Dave Scott
Sent: Wednesday, 13 July 2011 18:15
To: 'Marco Sinhoreli'
Cc: xen-api@xxxxxxxxxxxxxxxxxxx
Subject: RE: [Xen-API] cross-pool migrate, with any kind of storage (shared or local)

Hi Marco,

> Hello Dave:
>
> Great initiative, this is good news. What about implementing this for
> shared storage protocols like NFS, for example? Is there a plan to do
> that, or will the iSCSI protocol be the only way to implement
> cross-pool migration?

My initial thought was to use iSCSI internally for the storage migration -- it would be invisible to the user and would work even if the disks were actually on NFS (or any other storage type). However it looks like DRBD might be simpler, since it's already capable of tracking writes and building a storage mirror. I'll check that out first.

Cheers,
Dave

>
> Cheers,
>
> On Wed, Jul 13, 2011 at 11:22 AM, Dave Scott <Dave.Scott@xxxxxxxxxxxxx> wrote:
> > And my mail client sent this before I'd written the last line, which is
> >
> > "Any comments are welcome!"
> >
> > Thanks,
> > Dave
> >
> >> -----Original Message-----
> >> From: xen-api-bounces@xxxxxxxxxxxxxxxxxxx [mailto:xen-api-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of Dave Scott
> >> Sent: 13 July 2011 15:22
> >> To: xen-api@xxxxxxxxxxxxxxxxxxx
> >> Subject: [Xen-API] cross-pool migrate, with any kind of storage (shared or local)
> >>
> >> Hi,
> >>
> >> I've created a page on the wiki describing a new migration protocol for xapi. The plan is to make migrate work both within a pool and across pools, and to work with and without shared storage, i.e. to transparently migrate storage if necessary.
> >>
> >> The page is here:
> >>
> >> http://wiki.xensource.com/xenwiki/CrossPoolMigration
> >>
> >> The rough idea is to:
> >> 1. use an iSCSI target to export disks from the receiver to the transmitter
> >> 2. use tapdisk's log-dirty mode to build a continuous disk copy program -- perhaps we should go the full way and use the tapdisk block mirror code to establish a full storage mirror?
> >> 3. use the VM metadata export/import to move the VM metadata between pools
> >>
> >> I'd also like to:
> >> * make the migration code unit-testable (so I can test the failure paths easily)
> >> * make the code more robust to host failures by host heartbeating
> >> * make migrate properly cancellable
> >>
> >> I've started making a prototype -- so far I've written a simple Python wrapper around the iSCSI target daemon:
> >>
> >> https://github.com/djs55/iscsi-target-manager
> >>
> >> _______________________________________________
> >> xen-api mailing list
> >> xen-api@xxxxxxxxxxxxxxxxxxx
> >> http://lists.xensource.com/mailman/listinfo/xen-api
>
> --
> Marco Sinhoreli
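The asynchronous DRBD scheme described at the top of the thread could be expressed as a DRBD resource using protocol A. This is only a sketch of the idea, not a tested configuration; the resource name, hostnames, device and volume paths are all hypothetical:

```text
resource vm_disk {
  protocol A;   # asynchronous: a write completes once it reaches the
                # local disk and the local TCP send buffer
  on xen-host-a {
    device    /dev/drbd0;
    disk      /dev/vg0/vm-disk;
    address   192.168.0.1:7789;
    meta-disk internal;
  }
  on backup-host {
    device    /dev/drbd0;
    disk      /dev/vg0/vm-disk;
    address   192.168.0.2:7789;
    meta-disk internal;
  }
}
```

The nightly "backup" cycle would then amount to `drbdadm connect vm_disk` on the backup host in the evening and `drbdadm disconnect vm_disk` once the resync finishes; DRBD's quick-sync bitmap means only blocks dirtied since the last disconnect are transferred.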
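Step 2 of the plan above (a continuous copy driven by a dirty-block log) can be illustrated in miniature. Everything here is hypothetical stand-in code, not tapdisk's actual interface: the disk is modelled as a list of blocks and the dirty log as a plain set of block indices.

```python
# Miniature illustration of an iterative, log-dirty disk copy, in the
# spirit of tapdisk's log-dirty mode. All names are hypothetical.

class DirtyLog:
    """Stand-in for a dirty-block bitmap."""
    def __init__(self):
        self._dirty = set()

    def mark(self, block):
        """Record that `block` was written during a copy round."""
        self._dirty.add(block)

    def pop_all(self):
        """Return and clear the set of dirtied blocks."""
        dirtied, self._dirty = self._dirty, set()
        return dirtied

def iterative_copy(src, dst, log, max_rounds=5, pause_threshold=2):
    """Copy src to dst, re-sending dirtied blocks each round until the
    dirty set is small enough for a brief pause-and-finish pass."""
    pending = set(range(len(src)))      # round 0: everything is dirty
    for _ in range(max_rounds):
        for i in pending:
            dst[i] = src[i]             # transmit block i
        pending = log.pop_all()         # writes that raced with the copy
        if len(pending) <= pause_threshold:
            break
    # In a real migration the VM would be paused here, so no new
    # writes can occur while the final few blocks are sent.
    for i in pending:
        dst[i] = src[i]

# Example: a clean copy with no concurrent writes.
src = ["a", "b", "c", "d"]
dst = [None] * len(src)
iterative_copy(src, dst, DirtyLog())
# dst now equals src
```

A real implementation would of course stream blocks over the wire (e.g. to the iSCSI-exported destination disk) rather than assign list elements, but the convergence loop has the same shape.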
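The "unit-testable migration" wish-list item lends itself to structuring the migration as a list of (do, undo) steps, so a test can inject a failure at any step and check that the earlier steps were rolled back. This is only a sketch of the design idea; the step names and helpers are invented:

```python
# Sketch: a migration as a sequence of reversible steps, so failure
# paths can be exercised in plain unit tests. Step names are invented.

class MigrationFailed(Exception):
    """Raised when a step fails, after earlier steps are undone."""

def run_migration(steps):
    """Run each (name, do, undo) step in order. On any failure, undo
    the completed steps in reverse order, then raise MigrationFailed."""
    completed = []
    for name, do, undo in steps:
        try:
            do()
        except Exception as exc:
            for _, undo_fn in reversed(completed):
                undo_fn()               # roll back what already ran
            raise MigrationFailed(name) from exc
        completed.append((name, undo))

# Unit-test-style usage: make the mirror step fail, observe rollback.
trace = []

def fail_mirror():
    raise IOError("replication link lost")

steps = [
    ("export_disk", lambda: trace.append("export"),
                    lambda: trace.append("unexport")),
    ("mirror_disk", fail_mirror,
                    lambda: trace.append("unmirror")),
]

try:
    run_migration(steps)
except MigrationFailed as e:
    failed_step = e.args[0]
# trace == ["export", "unexport"]; failed_step == "mirror_disk"
```

Because the steps are plain callables, the failure paths (iSCSI export lost, mirror broken, metadata import rejected) can each be simulated without any real hosts or storage.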