[Xen-bugs] [Bug 1328] live migration with DomU's vcpu = 1
http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1328

------- Comment #2 from ikedaj@xxxxxxxxxxxxxxxxx  2008-08-31 18:27 -------
By the way, even if I set my DomU's vcpus > 1, live migration still fails when
the DomU has two disk entries like this:

    disk = [ "phy:/dev/sdf1,xvda,w", "phy:/dev/sdf3,/dev/sdf,w", ]

It occasionally succeeds, but it fails most of the time; see the migration
command sketch after the logs below.

----------------------------------------------------------------------
# cat /etc/xen/dom-ua1
name = "dom-ua1"
uuid = "94d0f9b4-08f6-448c-8c9b-91ca999a5f4b"
maxmem = 2048
memory = 2048
vcpus = 2
bootloader = "/usr/bin/pygrub"
on_poweroff = "destroy"
on_reboot = "restart"
on_crash = "restart"
vfb = [ "type=vnc,vncunused=1,keymap=ja" ]
disk = [ "phy:/dev/sdf1,xvda,w", "phy:/dev/sdf3,/dev/sdf,w", ]
vif = [ "ip=192.168.201.146/24,mac=00:16:3e:13:32:27,bridge=xenbr0",
        "ip=192.168.101.146/24,mac=00:16:3e:13:32:28,bridge=xenbr1",
        "ip=192.168.102.146/24,mac=00:16:3e:13:32:29,bridge=xenbr2",
        "ip=192.168.16.146/22,mac=00:16:3e:13:32:30,bridge=xenbr3", ]
----------------------------------------------------------------------

The error messages in /var/log/xen/xend.log are the same as before.

source node
----------------------------------------------------------------------
[2008-08-29 18:34:33 xend 9011] DEBUG (XendCheckpoint:89) [xc_save]: /usr/lib64/xen/bin/xc_save 20 1 0 0 1
[2008-08-29 18:34:33 xend 9011] INFO (XendCheckpoint:351) ERROR Internal error: Couldn't enable shadow mode
[2008-08-29 18:34:33 xend 9011] INFO (XendCheckpoint:351) Save exit rc=1
[2008-08-29 18:34:33 xend 9011] ERROR (XendCheckpoint:133) Save failed on domain dom-ua1 (1).
Traceback (most recent call last):
  File "/usr/lib64/python2.4/site-packages/xen/xend/XendCheckpoint.py", line 110, in save
    forkHelper(cmd, fd, saveInputHandler, False)
  File "/usr/lib64/python2.4/site-packages/xen/xend/XendCheckpoint.py", line 339, in forkHelper
    raise XendError("%s failed" % string.join(cmd))
XendError: /usr/lib64/xen/bin/xc_save 20 1 0 0 1 failed
----------------------------------------------------------------------

destination node
----------------------------------------------------------------------
[2008-08-29 18:36:04 xend 8996] DEBUG (XendCheckpoint:215) [xc_restore]: /usr/lib64/xen/bin/xc_restore 4 2 1 2 0 0 0
[2008-08-29 18:36:07 xend 8996] INFO (XendCheckpoint:351) ERROR Internal error: read: p2m_size
[2008-08-29 18:36:07 xend 8996] INFO (XendCheckpoint:351) Restore exit with rc=1
[2008-08-29 18:36:07 xend.XendDomainInfo 8996] DEBUG (XendDomainInfo:1560) XendDomainInfo.destroy: domid=2
[2008-08-29 18:36:07 xend.XendDomainInfo 8996] DEBUG (XendDomainInfo:1568) XendDomainInfo.destroyDomain(2)
[2008-08-29 18:36:07 xend.XendDomainInfo 8996] ERROR (XendDomainInfo:1575) XendDomainInfo.destroy: xc.domain_destroy failed.
Traceback (most recent call last):
  File "/usr/lib64/python2.4/site-packages/xen/xend/XendDomainInfo.py", line 1573, in destroyDomain
    xc.domain_destroy(self.domid)
Error: (3, 'No such process')
[2008-08-29 18:36:07 xend 8996] ERROR (XendDomain:278) Restore failed
Traceback (most recent call last):
  File "/usr/lib64/python2.4/site-packages/xen/xend/XendDomain.py", line 273, in domain_restore_fd
    return XendCheckpoint.restore(self, fd)
  File "/usr/lib64/python2.4/site-packages/xen/xend/XendCheckpoint.py", line 219, in restore
    forkHelper(cmd, fd, handler.handler, True)
  File "/usr/lib64/python2.4/site-packages/xen/xend/XendCheckpoint.py", line 339, in forkHelper
    raise XendError("%s failed" % string.join(cmd))
XendError: /usr/lib64/xen/bin/xc_restore 4 2 1 2 0 0 0 failed
----------------------------------------------------------------------
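
For completeness, a minimal sketch of how such a migration is typically
triggered with the xm toolstack. The destination host is not named in this
report, so "dst-node" below is a placeholder; it also assumes the destination
xend already accepts relocation requests (xend-relocation-server enabled in
/etc/xen/xend-config.sxp).
----------------------------------------------------------------------
# run on the source node; "dst-node" is a placeholder for the real
# destination hostname
xm migrate --live dom-ua1 dst-node

# afterwards, check whether the domain is still running and where
xm list dom-ua1
----------------------------------------------------------------------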