
Re: [Xen-API] xenserver-core tech preview: Hang during pygrub


  • To: John Morris <john@xxxxxxxxxxx>
  • From: Dave Scott <Dave.Scott@xxxxxxxxxx>
  • Date: Mon, 28 Oct 2013 19:43:50 +0000
  • Accept-language: en-GB, en-US
  • Cc: "xen-api@xxxxxxxxxxxxx" <xen-api@xxxxxxxxxxxxx>
  • Delivery-date: Mon, 28 Oct 2013 19:46:07 +0000
  • List-id: User and development list for XCP and XAPI <xen-api.lists.xen.org>
  • Thread-index: AQHO05jU+/9slp8dxEqUnOxym231gJoKTN8AgAATf4CAACR9Rg==
  • Thread-topic: [Xen-API] xenserver-core tech preview: Hang during pygrub

Sorry, my repo has become a bit messed up -- I attempted to re-clone from 
the xapi-project one, but it's all confused now. It's probably best to file 
issues directly on the xapi-project one, since that has become the new 
"upstream".

-- 
Dave Scott

> On Oct 28, 2013, at 6:35 PM, "John Morris" <john@xxxxxxxxxxx> wrote:
> 
> Here's a GitHub issue that has apparently vanished from Dave Scott's
> repo; it looks like the same thing:
> 
> http://webcache.googleusercontent.com/search?q=cache:rSRkkPeQ0FoJ:https://github.com/djs55/xenopsd/issues/30+
> 
> Here's someone describing the same issue on ceph-users, where I'll post
> next:
> 
> http://comments.gmane.org/gmane.comp.file-systems.ceph.user/3636
> 
>    John
> 
> 
>> On 10/28/2013 12:23 PM, John Morris wrote:
>>> On 10/27/2013 11:44 PM, John Morris wrote:
>>> This problem crops up occasionally when booting a VM.  I haven't
>>> figured out how to reproduce it.
>>> 
>>> Host is Scientific Linux 6 (technology preview plus snapshot
>>> repositories), 64-bit Phenom, Ceph 0.67.4 from the ceph.com repos.
>>> 
>>> During an 'xe vm-start', pygrub attaches the VM's root VDI in dom0.
>>> The operation never gets as far as detaching it, and the device
>>> entries remain in /dev/xvd*.  The 'vm-start' never completes, and other
>>> xe commands may or may not work properly afterwards.
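>>> 
>>> To see the stuck state from dom0 (a minimal check, assuming the stock
>>> xe CLI), the hung operation shows up as a pending task, and the
>>> leftover frontend devices are visible directly:
>>> 
>>>  xe task-list
>>>  ls -l /dev/xvd*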
>>> 
>>> Attempting to mount a /dev/xvd* entry will hang with a kernel oops, and
>>> the mount process cannot be killed.
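>>> 
>>> The hung mount sits in uninterruptible (D) state, which is why it
>>> cannot be killed. Something like this lists such processes along with
>>> the kernel wait channel they are stuck in:
>>> 
>>>  ps axo pid,stat,wchan:32,cmd | awk '$2 ~ /^D/'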
>> 
>> Another finding regarding this log message:
>> 
>>  kernel: vbd vbd-51728: 16 Device in use; refusing to close
>> 
>> The process holding the device open is blkid, started by udevd:
>> 
>>  blkid -o udev -p /dev/xvdb
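>> 
>> A possible workaround (untested here) would be to stop udev's
>> persistent-storage rules from probing the transient xvd* devices in
>> dom0. RHEL 6's 60-persistent-storage.rules honours an escape-hatch
>> flag, so a rule file along these lines (the file name is hypothetical)
>> should skip the blkid probe:
>> 
>>  # /etc/udev/rules.d/05-xen-skip-probe.rules
>>  SUBSYSTEM=="block", KERNEL=="xvd*", ENV{UDEV_DISABLE_PERSISTENT_STORAGE_RULES_FLAG}="1"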
>> 
>>> Pasted below are relevant logs.
>>> 
>>>    John
>>> 
>>> If the below is illegible, see this link:
>>>  http://pastebin.ca/2472031
>>> 
>>> /var/log/messages:
>>> 
>>> Oct 27 21:19:58 xen1 kernel: blkfront: xvdb: barrier: enabled
>>> Oct 27 21:19:58 xen1 kernel: xvdb: xvdb1 xvdb2
>>> Oct 27 21:19:59 xen1 kernel: vbd vbd-51728: 16 Device in use; refusing
>>> to close
>>> Oct 27 21:19:59 xen1 kernel: qemu-system-i38[2899]: segfault at
>>> 7fac042e4000 ip 00007fac0447b129 sp 00007fffe7028630 error 4 in
>>> qemu-system-i386[7fac042ed000+309000]
>>> Oct 27 21:20:12 xen1 kernel: xapi15: port 1(eth1.81) entered forwarding
>>> state
>>> Oct 27 21:20:52 xen1 xapi: [ info|xen1.zultron.com|363 UNIX
>>> /var/lib/xcp/xapi||cli] xe osd password=null lspools= username=root
>>> Oct 27 21:21:42 xen1 xapi: [ info|xen1.zultron.com|365 UNIX
>>> /var/lib/xcp/xapi|session.login_with_password D:fd591f08855d|xapi]
>>> Session.create trackid=93881075440eba708422d0109c7e4aa7 pool=false
>>> uname=root originator=cli is_local_superuser=true auth_user_sid=
>>> parent=trackid=9834f5af41c964e225f24279aefe4e49
>>> Oct 27 21:21:42 xen1 xapi: [ info|xen1.zultron.com|365 UNIX
>>> /var/lib/xcp/xapi|session.logout D:a573674c2654|xapi] Session.destroy
>>> trackid=93881075440eba708422d0109c7e4aa7
>>> Oct 27 21:22:58 xen1 udevd[585]: worker [5514] unexpectedly returned
>>> with status 0x0100
>>> Oct 27 21:22:58 xen1 udevd[585]: worker [5514] failed while handling
>>> '/devices/vbd-51728/block/xvdb'
>>> 
>>> 
>>> /var/log/debug:
>>> 
>>> Oct 27 21:19:57 xen27 xenopsd-xenlight: [xenops] Processing disk
>>> SR=6afd1b62-1c4f-7898-5511-5cc0969a0da9 VDI=cf5/prov0.root.img
>>> Oct 27 21:19:57 xen27 xenopsd-xenlight: [scheduler] Scheduler sleep
>>> until 1382926825 (another 27 seconds)
>>> Oct 27 21:19:57 xen27 xenopsd-xenlight: [scheduler] Scheduler sleep
>>> until 1382926825 (another 27 seconds)
>>> Oct 27 21:19:57 xen27 xapi: [ info|xen27.zultron.com|359||storage_impl]
>>> VDI.attach dbg:attach_and_activate dp:xenopsd/task/2
>>> sr:6afd1b62-1c4f-7898-5511-5cc0969a0da9 vdi:cf5/prov0.root.img
>>> read_write:false
>>> Oct 27 21:19:57 xen27 xapi: [debug|xen27.zultron.com|359||storage_impl]
>>> dbg:attach_and_activate dp:xenopsd/task/2
>>> sr:6afd1b62-1c4f-7898-5511-5cc0969a0da9 vdi:cf5/prov0.root.img
>>> superstate:attached  RO
>>> Oct 27 21:19:57 xen27 xapi: [ info|xen27.zultron.com|360||storage_impl]
>>> VDI.activate dbg:attach_and_activate dp:xenopsd/task/2
>>> sr:6afd1b62-1c4f-7898-5511-5cc0969a0da9 vdi:cf5/prov0.root.img
>>> Oct 27 21:19:57 xen27 xapi: [debug|xen27.zultron.com|360||storage_impl]
>>> dbg:attach_and_activate dp:xenopsd/task/2
>>> sr:6afd1b62-1c4f-7898-5511-5cc0969a0da9 vdi:cf5/prov0.root.img
>>> superstate:activated RO
>>> Oct 27 21:19:57 xen27 xenopsd-xenlight: [xenops] Device.Vbd.add
>>> (device_number=Xen(1, 0) | params=rbd:cf5/prov0.root.img | phystype=phys)
>>> Oct 27 21:19:57 xen27 xenopsd-xenlight: [xenops] adding device
>>> B0[/local/domain/0/backend/qdisk/0/51728]
>>> F0[/local/domain/0/device/vbd/51728]  H[/xapi/0/hotplug/qdisk/51728]
>>> Oct 27 21:19:57 xen27 xenopsd-xenlight: [hotplug] Hotplug.wait_for_plug:
>>> frontend (domid=0 | kind=vbd | devid=51728); backend (domid=0 |
>>> kind=qdisk | devid=51728)
>>> Oct 27 21:19:57 xen27 xenopsd-xenlight: [xenops] Adding watches for:
>>> frontend (domid=0 | kind=vbd | devid=51728); backend (domid=0 |
>>> kind=qdisk | devid=51728)
>>> Oct 27 21:19:57 xen27 xenopsd-xenlight: [xenops] xenstore watch
>>> /local/domain/0/backend/qdisk/0/51728/kthread-pid
>>> Oct 27 21:19:57 xen27 xenopsd-xenlight: [xenops] xenstore watch
>>> /local/domain/0/backend/qdisk/0/51728/tapdisk-pid
>>> Oct 27 21:19:57 xen27 xenopsd-xenlight: [xenops] xenstore watch
>>> /local/domain/0/backend/qdisk/0/51728/shutdown-done
>>> Oct 27 21:19:57 xen27 xenopsd-xenlight: [xenops] xenstore watch
>>> /local/domain/0/backend/qdisk/0/51728/params
>>> Oct 27 21:19:57 xen27 xenopsd-xenlight: [xenops] Watch on backend domid:
>>> 0 kind: qdisk -> frontend domid: 0 devid: 51728
>>> Oct 27 21:19:57 xen27 xenopsd-xenlight: [xenops] Unknown device kind:
>>> 'qdisk'
>>> Oct 27 21:19:57 xen27 xenopsd-xenlight: [xenops] Watch on backend domid:
>>> 0 kind: qdisk -> frontend domid: 0 devid: 51728
>>> Oct 27 21:19:57 xen27 xenopsd-xenlight: [xenops] Unknown device kind:
>>> 'qdisk'
>>> Oct 27 21:19:57 xen27 xenopsd-xenlight: [xenops] Watch on backend domid:
>>> 0 kind: qdisk -> frontend domid: 0 devid: 51728
>>> Oct 27 21:19:57 xen27 xenopsd-xenlight: [xenops] Unknown device kind:
>>> 'qdisk'
>>> Oct 27 21:19:57 xen27 xenopsd-xenlight: [xenops] Watch on backend domid:
>>> 0 kind: qdisk -> frontend domid: 0 devid: 51728
>>> Oct 27 21:19:57 xen27 xenopsd-xenlight: [xenops] Unknown device kind:
>>> 'qdisk'
>>> Oct 27 21:19:57 xen27 xenopsd-xenlight: [scheduler] Scheduler sleep
>>> until 1382926825 (another 27 seconds)
>>> Oct 27 21:19:57 xen27 xenopsd-xenlight: [hotplug] Synchronised ok with
>>> hotplug script: frontend (domid=0 | kind=vbd | devid=51728); backend
>>> (domid=0 | kind=qdisk | devid=51728)
>>> Oct 27 21:19:57 xen27 xenopsd-xenlight: [hotplug] Checking to see
>>> whether /local/domain/0/backend/qdisk/0/51728/hotplug-status
>>> Oct 27 21:19:57 xen27 xenopsd-xenlight: [xenops] Device.Vbd successfully
>>> added; device_is_online = true
>>> Oct 27 21:19:57 xen27 xenopsd-xenlight: [xenops] Waiting for /dev/xvdb
>>> to appear
>>> Oct 27 21:19:58 xen27 kernel: blkfront: xvdb: barrier: enabled
>>> Oct 27 21:19:58 xen27 xenopsd-xenlight: [bootloader] Bootloader
>>> commandline: /usr/bin/pygrub -q /dev/xvdb
>>> Oct 27 21:19:58 xen27 kernel: xvdb: xvdb1 xvdb2
>>> Oct 27 21:19:59 xen27 xenopsd-xenlight: [xenops] xenstore-write
>>> /local/domain/0/backend/qdisk/0/51728/online = 0
>>> Oct 27 21:19:59 xen27 xenopsd-xenlight: [xenops] Device.del_device
>>> setting backend to Closing
>>> Oct 27 21:19:59 xen27 xenopsd-xenlight: [xenops]
>>> Device.Generic.clean_shutdown_wait frontend (domid=0 | kind=vbd |
>>> devid=51728); backend (domid=0 | kind=qdisk | devid=51728)
>>> Oct 27 21:19:59 xen27 kernel: vbd vbd-51728: 16 Device in use; refusing
>>> to close
>>> Oct 27 21:19:59 xen27 xenopsd-xenlight: [xenops] waiting for backend to
>>> close
>>> Oct 27 21:19:59 xen27 kernel: qemu-system-i38[2899]: segfault at
>>> 7fac042e4000 ip 00007fac0447b129 sp 00007fffe7028630 error 4 in
>>> qemu-system-i386[7fac042ed000+309000]
>>> Oct 27 21:20:01 xen27 CROND[10982]: (root) CMD (/usr/lib64/sa/sa1 1 1)
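>> 
>> The last lines above show xenopsd setting the backend to Closing and
>> then waiting forever, because the frontend (blkfront in dom0) refuses
>> to close while blkid holds the device open. The stuck handshake can be
>> inspected directly in xenstore (paths taken from the log; in the
>> xenbus protocol, state 4 = Connected, 5 = Closing, 6 = Closed):
>> 
>>  xenstore-read /local/domain/0/backend/qdisk/0/51728/state
>>  xenstore-ls /local/domain/0/backend/qdisk/0/51728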

_______________________________________________
Xen-api mailing list
Xen-api@xxxxxxxxxxxxx
http://lists.xen.org/cgi-bin/mailman/listinfo/xen-api