
[Xen-devel] Test report: Migration from 4.1 to 4.2 works



Migration 4.1 xend -> 4.2 xend
  OK

Migration 4.2 -> 4.1 (xend or xl)
  xend: Fails; the guest ends up destroyed
  xl: Fails; xl tries to resume at the sender, but the guest hits a
      kernel BUG (log below).  This is probably a guest bug?

Migration 4.1 xend -> 4.2 xl
  Needs to be done with xl
  Stop xend on source, which leaves domain running and manipulable by xl
  xl migrate -C /etc/xen/debian.guest.osstest.cfg domain potato-beetle
  Works.

However, xl fails on config files which are missing the final
newline.  This should be fixed for 4.2.
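Until that fix lands, the missing-final-newline failure can be worked
around by appending a newline before handing the file to xl.  A minimal
sketch in plain POSIX shell (the temp file stands in for a real xl
config; the check itself is generic):

```shell
cfg=$(mktemp)                       # stand-in for a real xl config file
printf 'name = "guest"' > "$cfg"    # deliberately no final newline

# Command substitution strips trailing newlines, so the result is
# non-empty exactly when the last byte is NOT a newline; in that
# case append one so xl's config parser accepts the file.
if [ -n "$(tail -c 1 "$cfg")" ]; then
    echo >> "$cfg"
fi
```

With the trailing newline in place, the same file can then be passed to
`xl migrate -C` as in the recipe above.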

Ian.

  xc: error: Max batch size exceeded (-18). Giving up.: Internal error
  xc: error: Error when reading batch (90 = Message too long): Internal error
  libxl: error: libxl_dom.c:313:libxl__domain_restore_common restoring domain: Resource temporarily unavailable
  cannot (re-)build domain: -3
  libxl: error: libxl.c:711:libxl_domain_destroy non-existant domain 6
  migration target: Domain creation failed (code -3).
  libxl: error: libxl_utils.c:363:libxl_read_exactly: file/stream truncated reading ready message from migration receiver stream
  libxl: info: libxl_exec.c:118:libxl_report_child_exitstatus: migration target process [15654] exited with error status 3
  Migration failed, resuming at sender.

[   37.151396] Setting capacity to 8388608
[   37.151988] Setting capacity to 8388608
[   37.172710] Setting capacity to 2048000
[   90.507105] ------------[ cut here ]------------
[   90.507105] kernel BUG at drivers/xen/events.c:1344!
[   90.507105] invalid opcode: 0000 [#1] SMP 
[   90.507105] last sysfs file: /sys/devices/virtual/net/lo/operstate
[   90.507105] Modules linked in: nbd [last unloaded: scsi_wait_scan]
[   90.507105] 
[   90.507105] Pid: 1299, comm: kstop/0 Not tainted (2.6.32.57 #1) 
[   90.507105] EIP: 0061:[<c121b9e4>] EFLAGS: 00010082 CPU: 0
[   90.507105] EIP is at xen_irq_resume+0xe3/0x2b6
[   90.507105] EAX: ffffffef EBX: 00000000 ECX: deadbeef EDX: c4c8df24
[   90.507105] ESI: 000001ff EDI: 00001ff0 EBP: c4c8df3c ESP: c4c8deec
[   90.507105]  DS: 007b ES: 007b FS: 00d8 GS: 0000 SS: 0069
[   90.507105] Process kstop/0 (pid: 1299, ti=c4c8c000 task=db980000 task.ti=c4c8c000)
[   90.507105] Stack:
[   90.507105]  c102ce92 c4c8df14 c1752288 c1752228 c1770004 00000000 c4c8df24 c102c270
[   90.507105] <0> c1771004 c4de1720 c1770004 c102c267 c1464b35 c166fed0 00000000 00000000
[   90.507105] <0> deadbeef deadbeef 00000003 c6000c14 c4c8df5c c121d011 00000000 dfc63f5c
[   90.507105] Call Trace:
[   90.507105]  [<c102ce92>] ? __xen_spin_lock+0xcb/0xdf
[   90.507105]  [<c102c270>] ? check_events+0x8/0xc
[   90.507105]  [<c102c267>] ? xen_restore_fl_direct_end+0x0/0x1
[   90.507105]  [<c1464b35>] ? _spin_unlock_irqrestore+0x40/0x43
[   90.507105]  [<c121d011>] ? xen_suspend+0x8c/0xa6
[   90.507105]  [<c1097f4f>] ? stop_cpu+0x7d/0xc9
[   90.507105]  [<c1073582>] ? worker_thread+0x15c/0x1f4
[   90.507105]  [<c1097ed2>] ? stop_cpu+0x0/0xc9
[   90.507105]  [<c107664d>] ? autoremove_wake_function+0x0/0x2f
[   90.507105]  [<c1073426>] ? worker_thread+0x0/0x1f4
[   90.507105]  [<c1076333>] ? kthread+0x5f/0x64
[   90.507105]  [<c10762d4>] ? kthread+0x0/0x64
[   90.507105]  [<c102f4d7>] ? kernel_thread_helper+0x7/0x10
[   90.507105] Code: 0f 0b eb fe 0f b7 40 08 3b 45 c4 74 04 0f 0b eb fe 8b 45 c4 8d 55 e8 89 5d ec 89 45 e8 b8 01 00 00 00 e8 69 f9 ff ff 85 c0 74 04 <0f> 0b eb fe 8b 55 f0 89 55 c0 8b 15 60 a0 7f c1 8b 4d c0 89 34
[   90.507105] EIP: [<c121b9e4>] xen_irq_resume+0xe3/0x2b6 SS:ESP 0069:c4c8deec
[   90.507105] ---[ end trace c48e0191332db3e4 ]---
[   90.507105] ------------[ cut here ]------------
[   90.507105] WARNING: at kernel/time/timekeeping.c:260 ktime_get+0x21/0xce()
[   90.507105] Modules linked in: nbd [last unloaded: scsi_wait_scan]
[   90.507105] Pid: 0, comm: swapper Tainted: G      D    2.6.32.57 #1
[   90.507105] Call Trace:
[   90.507105]  [<c1061200>] warn_slowpath_common+0x65/0x7c
[   90.507105]  [<c107de3f>] ? ktime_get+0x21/0xce
[   90.507105]  [<c1061224>] warn_slowpath_null+0xd/0x10
[   90.507105]  [<c107de3f>] ktime_get+0x21/0xce
[   90.507105]  [<c14637fa>] ? schedule+0x82d/0x87a
[   90.507105]  [<c108226c>] tick_nohz_stop_sched_tick+0x76/0x387
[   90.507105]  [<c1082633>] ? T.504+0x1d/0x25
[   90.507105]  [<c10827c2>] ? tick_nohz_restart_sched_tick+0x187/0x18f
[   90.507105]  [<c102bb75>] ? xen_safe_halt+0x12/0x1f
[   90.507105]  [<c102dc1b>] cpu_idle+0x27/0x70
[   90.507105]  [<c1449ca1>] rest_init+0x5d/0x5f
[   90.507105]  [<c16dd85f>] start_kernel+0x315/0x31a
[   90.507105]  [<c16dd0a8>] i386_start_kernel+0x97/0x9e
[   90.507105]  [<c16e0cae>] xen_start_kernel+0x557/0x55f
[   90.507105] ---[ end trace c48e0191332db3e5 ]---

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

