RE: [Xen-users] Domain-0 freezes when mounting a cloned domU with dd
Hello,

Because I use LVM to create the logical volumes for my DomUs, I make the backup like this: I shut down the DomU, create a snapshot volume, and then start the DomU again. Then I can dd the snapshot volume, which holds a consistent state. At least I hope this is the safe and clean way to do it, because the LVM subsystem has to handle by itself the problematic situations you describe and should "somehow" flush all caches before the snapshot is created; and even if not everything had been written to the disk yet, LVM should make it transparent to you without all the data really having been written to the "iron" device. At least I know of nobody who says anything different :-)

If you do not use LVM, I do not know how to flush the caches, but I am sure there must be some utility somewhere that can do it. Maybe (?) mounting the device from Dom0 after the DomU has been shut down could also help; I cannot imagine the volume being mountable in an inconsistent state. But that is only my feeling and I am no Linux guru :-)

With regards
Archie

-----Original Message-----
From: xen-users-bounces@xxxxxxxxxxxxxxxxxxx [mailto:xen-users-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of Franck ELIE
Sent: Monday, August 20, 2007 5:25 PM
To: xen-users@xxxxxxxxxxxxxxxxxxx
Subject: Re: [Xen-users] Domain-0 freezes when mounting a cloned domU with dd

Franck ELIE wrote:
> Hi,
>
> When cloning a domU with the 'dd' command (after a domU shutdown), it seems
> to be very important to run 'fsck' on the cloned partitions before mounting
> them, because if we don't, Domain-0 and the next domU freeze.
>
> Just before the kernel froze, the logs were full of messages like this:
> kernel: >ext3_orp>ext3_orph>ext3_orpha>ext3_orphan>ext3_orphan_> ...
> (the same truncated "ext3_orphan" message, repeated many times)
>
> Note that when running fsck on such a partition I obtain this output:
> # fsck.ext3 /dev/mapper/CMS1
> e2fsck 1.40-WIP (14-Nov-2006)
> /dev/mapper/CMS1: recovering journal
> Truncating orphaned inode 32474 (uid=101, gid=104, mode=0140777, size=0)
> /dev/mapper/CMS1: clean, 37928/221312 files, 250275/441779 blocks
>
> By the way, fsck on the template domU seems to be OK.
>
> So, here are my questions:
> - Do you know why it is necessary to run fsck? Should I use specific
>   options with dd to avoid this effect on the cloned FS?
> - Concerning the kernel freeze (which does not seem to be specific to my
>   distribution or to Xen), is it a kernel bug?
>
> Regards,
>
> Franck
>
> P.S.: Xen 3.0.3, kernel 2.6.18-4-xen-686, Debian Etch, 1 domU per 2 GB LUN,
> no LVM

Here are the results of the tests I just performed to understand what is wrong:
- there is an MD5 difference between the sum computed from the LUN backing the domU storage and the image file generated with 'dd' WHEN the backup is performed just after the domU shutdown (less than 3 min);
- Domain-0 has enough memory (8 GB) to copy the domU (2 GB).

So my FS trouble seems to be related to a partial copy of the domU: 'dd' appears to run before the domU's data has been completely flushed from memory to disk. So, do you know how to flush the memory related to a domU in order to back up its disk correctly?
Any suggestion would be appreciated.

Regards,

Franck
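Below is a minimal sketch of the two approaches discussed in this thread: the shutdown/snapshot/restart backup Archie describes, and a guess at Franck's question about flushing before dd when there is no LVM. All the names (volume group vg0, logical volume and domain domu1, config /etc/xen/domu1.cfg, LUN /dev/sdb, backup path /backup) are made-up examples, and nothing here has been tested against the exact setup described, so treat it as a starting point rather than a verified recipe.

# --- LVM case: shutdown -> snapshot -> restart -> dd (Archie's procedure) ---

# Shut the guest down and wait until it has actually stopped, so nothing
# is writing to the logical volume any more.
xm shutdown -w domu1

# Flush Dom0's own dirty buffers for good measure before snapshotting.
sync

# Take a copy-on-write snapshot of the idle volume, then restart the guest.
# 1G of snapshot space is a guess; it only has to absorb what the guest
# writes while the backup is running.
lvcreate -s -L 1G -n domu1-snap /dev/vg0/domu1
xm create /etc/xen/domu1.cfg

# Check the snapshot read-only before trusting it, then copy and verify it.
fsck.ext3 -n /dev/vg0/domu1-snap
dd if=/dev/vg0/domu1-snap of=/backup/domu1.img bs=1M
md5sum /dev/vg0/domu1-snap /backup/domu1.img

# Drop the snapshot once the image is safely stored.
lvremove -f /dev/vg0/domu1-snap

# --- No LVM (Franck's setup): a guess, not a verified fix ---

# Make sure the guest is really down, then flush Dom0's buffers and its
# cached view of the LUN before copying it.
xm shutdown -w domu1
sync
blockdev --flushbufs /dev/sdb      # /dev/sdb stands in for the 2 GB LUN
dd if=/dev/sdb of=/backup/domu1.img bs=1M

Whether the flush alone closes the 3-minute window Franck measured is uncertain; the snapshot route at least guarantees that dd reads a device nothing else is writing to.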