
Re: [Xen-API] XCP 1.6 Beta: upgrade of RAID 1 installation



I have thought of a possibly better procedure (albeit still a workaround) than the one I proposed in my first email.

(a) make a pool-dump-database and store it somewhere safe (see the example commands after this list)
(b) extract one hard disk and keep it aside as a fallback (exist_device2), leaving the other in place (exist_device1)
(c) create 2 new partitions on a separate temporary device (temp_device), the same size as the live ones
(d) copy the contents of the existing two partitions (currently /dev/md0 and /dev/md1) onto them.  (Incidentally, I think there is no point in mirroring the 2nd partition, as it is only used during installation as a backup target, and the installer can only use raw partitions; but that is a separate story)
(e) reboot
(f) enter the BIOS screen and configure temp_device as the default boot device
(g) continue the boot and confirm that the XCP host is working as normal
(h) insert the upgrade ISO media (CD/USB)
(i) reboot
(j) enter the BIOS screen and configure device containing the ISO media as the default boot device
(k) during the installer stage, select temp_device
(l) the installer should now recognise the existing installation and any backups.  Proceed with the upgrade as normal
(m) reboot at end, removing installer media
(n) enter the BIOS screen and configure temp_device as the default boot device
(o) on bootup, xsconsole will show that the local SR is unavailable.  Some more steps are needed just to re-establish the md device backing the local storage; the SR configuration itself should still be there.  If no local SR exists, skip to (t)
(p) mdadm --examine --brief --scan --config=partitions >> /etc/mdadm.conf (this restores the mdadm configuration from the md metadata on the partitions of exist_device1)
(q) mdadm --assemble /dev/md2 (restart the md device containing the LVM volumes used by the local SR)
(r) xe pbd-plug (re-attach the storage to the SR; see the example commands after this list)
(s) at this stage you should be able to test that any VMs needing VDIs on the local SR can be started
(t) copy the contents of the filesystems on temp_device back onto /dev/md0 and /dev/md1
(u) reboot while removing temp_device (or rather: shut down, remove temp_device, start the host)
(v) in the BIOS screen, configure exist_device1 as the boot device
(w) verify that the XCP host is running normally
(x) insert the 2nd disk, exist_device2
(y) mdadm /dev/md<x> --re-add /dev/<exist_device2><partition>, repeating for each md device (see the example after this list)
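
For reference, here is a minimal sketch of steps (a) and (c)/(d), assuming temp_device appears as /dev/sdc and that /dev/sdc1 and /dev/sdc2 have already been created at the same sizes as the live partitions; the device names and the dump file path are only placeholders, not the definitive commands:

    # (a) dump the pool database and keep a copy off the host as well
    xe pool-dump-database file-name=/root/pool-db.dump

    # (c)/(d) raw-copy the contents of the two existing md devices onto the
    # matching partitions of temp_device (placeholder device names)
    dd if=/dev/md0 of=/dev/sdc1 bs=1M
    dd if=/dev/md1 of=/dev/sdc2 bs=1M

    # step (t) is the same idea in the opposite direction afterwards
    # (temp_device partitions copied back onto /dev/md0 and /dev/md1)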
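Steps (p)-(r) in slightly more detail, again only as a sketch: "Local storage" is just the usual default SR name-label and may differ on your host, and the uuids obviously have to be filled in from the list output:

    # (p) regenerate /etc/mdadm.conf from the superblocks still present on exist_device1
    mdadm --examine --brief --scan --config=partitions >> /etc/mdadm.conf

    # (q) reassemble the array that backs the local SR
    mdadm --assemble /dev/md2

    # (r) find the local SR's PBD and plug it back in
    xe sr-list name-label="Local storage" params=uuid
    xe pbd-list sr-uuid=<sr-uuid> params=uuid,currently-attached
    xe pbd-plug uuid=<pbd-uuid>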
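And a sketch of the final re-add in step (y), assuming exist_device2 comes back as /dev/sdb with partitions sdb1/sdb2/sdb3 matching md0/md1/md2; adjust to your actual layout:

    # (x)/(y) re-add the partitions of the re-inserted disk to their arrays
    mdadm /dev/md0 --re-add /dev/sdb1
    mdadm /dev/md1 --re-add /dev/sdb2
    mdadm /dev/md2 --re-add /dev/sdb3

    # watch the resync progress
    cat /proc/mdstat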

This procedure avoids a completely new installation, while retaining a fallback.  It should also work for a slave.

So far I've tested parts of the above, but not as a complete procedure.  That's my next step.  I'll keep you posted.



On Fri, Nov 30, 2012 at 10:13 AM, George Shuklin <george.shuklin@xxxxxxxxx> wrote:

I'd also love to see at least some procedure that takes such installations into consideration, as we're also using XCP/XS on software RAID1, and every upgrade is in fact a reinstallation, which is a very suboptimal procedure. Perhaps, given that XCP doesn't have to be tied to a 'supported configuration' as XS does, we could have mdraid support in XCP for installation/reinstallation, since so many people use it?

Well, I'd gladly do this, but the main problem is the open-source part. xen-api is fully open source and its source is available on GitHub.

The XCP/XenServer installer is not. I mean, there is no published way to do something like a 'make xcp-iso' command. The installer's internals are partly Python, but there is no information about what xen-api expects in terms of file placement in an older installation, or about the proper way to do this. We internally simply hack the original installer ISO to help us with the installation procedure over md RAID1. It looks rather ugly and is definitely not for 'public' use. And I really want to do it properly...



_______________________________________________
Xen-api mailing list
Xen-api@xxxxxxxxxxxxx
http://lists.xen.org/cgi-bin/mailman/listinfo/xen-api


