 Re: [Xen-users] XCP : Failed load VDI information on NFS shared repo
 
Many thanks for your replies.

Grant, I don't think it's an inherent problem with the SR, as I have
   other VMs operating on it with no problem; it is just a couple of
   them giving trouble at the moment.
 
 David,
 
 That is my suspicion too, but I'm not sure where to start repairing.
 
I've discovered a second VM displaying the same symptoms. It is just
   an xvp appliance, so I can merrily break it if breaking it helps me
   fix the VM in my original mail. I mention this in case you were
   comparing the logs below with the original post and noticed
   different UUIDs, names, etc.
 
The SMlog entries for the call are as follows:
 
 [3104] 2011-09-12 13:12:04.988006       lock: acquired
    /var/lock/sm/0554f47c-1a12-9bd1-ea9b-d2b68984f0ed/sr
 [3104] 2011-09-12 13:12:04.990879       ['/usr/sbin/td-util',
    'query', 'vhd', '-vpf',
'/var/run/sr-mount/0554f47c-1a12-9bd1-ea9b-d2b68984f0ed/f2c83f05-ee59-4a49-ad5d-2ecdfd6a8ca1.vhd']
 [3104] 2011-09-12 13:12:05.013444       FAILED: (rc 22) stdout:
    '102400
 failed getting parent: -22
 hidden: 0
 ', stderr: ''
 [3104] 2011-09-12 13:12:05.044769       Raising exception [65,
    Failed to load VDI [opterr=Failed load VDI information
/var/run/sr-mount/0554f47c-1a12-9bd1-ea9b-d2b68984f0ed/f2c83f05-ee59-4a49-ad5d-2ecdfd6a8ca1.vhd]]
 [3104] 2011-09-12 13:12:05.044944       lock: released
    /var/lock/sm/0554f47c-1a12-9bd1-ea9b-d2b68984f0ed/sr
 [3104] 2011-09-12 13:12:05.046128       ***** vdi_attach: EXCEPTION
    SR.SROSError, Failed to load VDI [opterr=Failed load VDI information
/var/run/sr-mount/0554f47c-1a12-9bd1-ea9b-d2b68984f0ed/f2c83f05-ee59-4a49-ad5d-2ecdfd6a8ca1.vhd]
 File "/opt/xensource/sm/SRCommand.py", line 94, in run
 return self._run_locked(sr)
 File "/opt/xensource/sm/SRCommand.py", line 125, in _run_locked
 target = sr.vdi(self.vdi_uuid)
 File "/opt/xensource/sm/NFSSR", line 213, in vdi
 return NFSFileVDI(self, uuid)
 File "/opt/xensource/sm/VDI.py", line 99, in __init__
 self.load(uuid)
 File "/opt/xensource/sm/FileSR.py", line 376, in load
 opterr='Failed load VDI information %s' % self.path)
 File "/opt/xensource/sm/xs_errors.py", line 49, in __init__
 raise SR.SROSError(errorcode, errormessage)
 
 [3104] 2011-09-12 13:12:05.046441       lock: closed
    /var/lock/sm/0554f47c-1a12-9bd1-ea9b-d2b68984f0ed/sr
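
The interesting part above is the td-util failure: rc 22 is EINVAL, and
   "failed getting parent: -22" suggests the VHD header points at a
   parent it cannot resolve. As a sketch of how one might inspect this
   from dom0 (the td-util line is copied verbatim from the SMlog above;
   the vhd-util subcommands and option letters are my assumption and may
   differ between XCP releases):

   # Re-run the exact query the SM backend issued (verbatim from SMlog)
   /usr/sbin/td-util query vhd -vpf \
       /var/run/sr-mount/0554f47c-1a12-9bd1-ea9b-d2b68984f0ed/f2c83f05-ee59-4a49-ad5d-2ecdfd6a8ca1.vhd

   # Hedged follow-ups: check the VHD metadata, then print its parent
   # pointer (vhd-util ships with blktap; flags may vary by release)
   vhd-util check -n \
       /var/run/sr-mount/0554f47c-1a12-9bd1-ea9b-d2b68984f0ed/f2c83f05-ee59-4a49-ad5d-2ecdfd6a8ca1.vhd
   vhd-util query -p -n \
       /var/run/sr-mount/0554f47c-1a12-9bd1-ea9b-d2b68984f0ed/f2c83f05-ee59-4a49-ad5d-2ecdfd6a8ca1.vhd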
 
/var/log/messages has just one line for the call, as follows:
 
 Sep 12 13:16:28 xen01 xapi: [error|xen01|187930 inet-RPC|VM.start_on
    R:54f73923094c|xapi] Memory F 13309484 KiB S 0 KiB T 32766 MiB
 
The xensource.log entries for the event are as follows:
 
 [20110912T12:22:01.648Z|debug|xen01|188035 unix-RPC||cli] xe
    vm-start name-label=xvpappliance on=xen01 username=root
    password=null
 [20110912T12:22:01.653Z| info|xen01|188035
    unix-RPC|session.login_with_password D:3889bc894187|xapi]
    Session.create trackid=9d2394cc4c3c07700fec4b221c0593c4 pool=false
    uname=root is_local_superuser=true auth_user_sid=
    parent=trackid=9834f5af41c964e225f24279aefe4e49
 [20110912T12:22:01.656Z|debug|xen01|188036
    unix-RPC||dummytaskhelper] task dispatch:session.get_uuid
    D:b827cde7e378 created by task D:3889bc894187
 [20110912T12:22:01.727Z|debug|xen01|188035 unix-RPC|VM.start_on
    R:5ecc0b721dea|audit] VM.start_on: VM =
    'e08866f7-a4d5-f5c2-edfd-dffc23beceb8 (xvpappliance)'; host
    '30a88fb8-6af8-46b5-8605-7f64db400ee4 (xen01)'
 [20110912T12:22:01.729Z|debug|xen01|188035 unix-RPC|VM.start_on
    R:5ecc0b721dea|xapi] No operations are valid because
    current-operations = [
    OpaqueRef:5ecc0b72-1dea-5f18-4cd2-eb22be407a8f -> attach ]
 [20110912T12:22:01.730Z|debug|xen01|188035 unix-RPC|VM.start_on
    R:5ecc0b721dea|xapi] Raised at db_cache_types.ml:75.27-76 ->
    db_cache_types.ml:118.2-40 -> pervasiveext.ml:22.2-9
 [20110912T12:22:01.735Z|debug|xen01|188035 unix-RPC|VM.start_on
    R:5ecc0b721dea|xapi] host xen01; available_memory = 30781935616;
    memory_required = 272629760
 [20110912T12:22:01.735Z|debug|xen01|188035 unix-RPC|VM.start_on
    R:5ecc0b721dea|xapi_ha_vm_failover]
    assert_configuration_change_preserves_ha_plan c =
    configuration_change = { old_vms_leaving = [  ]; new_vms_arriving =
    [ a748ce74 (xen01) 78d1c316 (xvpappliance) ]; hosts_to_disable = [ 
    ]; num_failures = no change; new_vms = [  ] }
 [20110912T12:22:01.741Z|debug|xen01|188035 unix-RPC|VM.start_on
    R:5ecc0b721dea|thread_queue] push(vm_lifecycle_op, VM.start
    VM.start_on R:5ecc0b721dea): queue = [ VM.start VM.start_on
    R:5ecc0b721dea ](1)
 [20110912T12:22:01.741Z|debug|xen01|957||thread_queue]
    pop(vm_lifecycle_op) = VM.start VM.start_on R:5ecc0b721dea
 [20110912T12:22:01.743Z|debug|xen01|188035 unix-RPC|VM.start_on
    R:5ecc0b721dea|locking_helpers] Acquired lock on VM
    OpaqueRef:78d1c316-5765-d95a-e2d1-fb0cec16a7c4 with token 49
 [20110912T12:22:01.743Z|debug|xen01|188035 unix-RPC|VM.start_on
    R:5ecc0b721dea|xapi] start: making sure the VM really is halted
 [20110912T12:22:01.743Z|debug|xen01|188035 unix-RPC|VM.start_on
    R:5ecc0b721dea|xapi] start: checking that VM can run on this host
 [20110912T12:22:01.745Z|debug|xen01|188035 unix-RPC|VM.start_on
    R:5ecc0b721dea|xapi] start: bringing up domain in the paused state
 [20110912T12:22:01.745Z|debug|xen01|188035 unix-RPC|VM.start_on
    R:5ecc0b721dea|memory_control] reserve_memory_range min=266240
    max=266240
 [20110912T12:22:01.745Z|debug|xen01|188035 unix-RPC|VM.start_on
    R:5ecc0b721dea|xenstore-rpc] Checking pid 7203
 [20110912T12:22:01.747Z|debug|xen01|188035 unix-RPC|VM.start_on
    R:5ecc0b721dea|xenstore-rpc] Written request using id:
    c30f6136-a243-f9bd-b75b-6dfa78173255
 [20110912T12:22:01.747Z|debug|xen01|188035 unix-RPC|VM.start_on
    R:5ecc0b721dea|xenops] watch: watching xenstore paths: [
    /squeezed/rpc/response/reserve-memory-range/c30f6136-a243-f9bd-b75b-6dfa78173255
    ] with timeout 1200.000000 seconds
 [20110912T12:22:01.757Z|debug|xen01|188035 unix-RPC|VM.start_on
    R:5ecc0b721dea|memory_control] reserve_memory_range actual = 266240
 [20110912T12:22:01.757Z|debug|xen01|188035 unix-RPC|VM.start_on
    R:5ecc0b721dea|memory_control] reserved_memory = 266240; min =
    266240; max = 266240
 [20110912T12:22:01.757Z|debug|xen01|188035 unix-RPC|VM.start_on
    R:5ecc0b721dea|xapi] VM will have 1 hotplugged vcpus out of 1
 [20110912T12:22:01.757Z| info|xen01|188035 unix-RPC|VM.start_on
    R:5ecc0b721dea|xapi] Memory free = 13309592; scrub = 0
 [20110912T12:22:01.758Z|debug|xen01|188035 unix-RPC|VM.start_on
    R:5ecc0b721dea|xenops] Regenerating the xenstored tree under:
    [/local/domain/18]
 [20110912T12:22:01.764Z|debug|xen01|188035 unix-RPC|VM.start_on
    R:5ecc0b721dea|xenops] Created domain with id: 18
 [20110912T12:22:01.764Z|debug|xen01|188035 unix-RPC|VM.start_on
    R:5ecc0b721dea|xapi] Created domain with domid: 18
 [20110912T12:22:01.764Z|debug|xen01|188035 unix-RPC|VM.start_on
    R:5ecc0b721dea|memory_control] transfer_reservation_to_domain
    a7ac5907-2820-ec8b-de21-b2521e461134 -> 18
 [20110912T12:22:01.764Z|debug|xen01|188035 unix-RPC|VM.start_on
    R:5ecc0b721dea|xenstore-rpc] Checking pid 7203
 [20110912T12:22:01.766Z|debug|xen01|188035 unix-RPC|VM.start_on
    R:5ecc0b721dea|xenstore-rpc] Written request using id:
    c1daa892-24b0-9ea0-e7bd-5c667f5afaa2
 [20110912T12:22:01.766Z|debug|xen01|188035 unix-RPC|VM.start_on
    R:5ecc0b721dea|xenops] watch: watching xenstore paths: [
    /squeezed/rpc/response/transfer-reservation-to-domain/c1daa892-24b0-9ea0-e7bd-5c667f5afaa2
    ] with timeout 1200.000000 seconds
 [20110912T12:22:01.772Z|debug|xen01|188035 unix-RPC|VM.start_on
    R:5ecc0b721dea|memory_control] delete_reservation
    a7ac5907-2820-ec8b-de21-b2521e461134
 [20110912T12:22:01.772Z|debug|xen01|188035 unix-RPC|VM.start_on
    R:5ecc0b721dea|xenstore-rpc] Checking pid 7203
 [20110912T12:22:01.774Z|debug|xen01|188035 unix-RPC|VM.start_on
    R:5ecc0b721dea|xenstore-rpc] Written request using id:
    ac98c903-5f74-9e3c-2fe6-800b927317ea
 [20110912T12:22:01.774Z|debug|xen01|188035 unix-RPC|VM.start_on
    R:5ecc0b721dea|xenops] watch: watching xenstore paths: [
    /squeezed/rpc/response/delete-reservation/ac98c903-5f74-9e3c-2fe6-800b927317ea
    ] with timeout 1200.000000 seconds
 [20110912T12:22:01.777Z|debug|xen01|188035 unix-RPC|VM.start_on
    R:5ecc0b721dea|xapi] Verifying VDI records exist
 [20110912T12:22:01.777Z|debug|xen01|188035 unix-RPC|VM.start_on
    R:5ecc0b721dea|xenops] set machine address size dom18 to 36 bits
 [20110912T12:22:01.777Z|debug|xen01|188035 unix-RPC|VM.start_on
    R:5ecc0b721dea|xapi] creating kernel
 [20110912T12:22:01.779Z|debug|xen01|188039
    unix-RPC||dummytaskhelper] task dispatch:session.slave_login
    D:f5c56a9e8395 created by task R:5ecc0b721dea
 [20110912T12:22:01.782Z| info|xen01|188039
    unix-RPC|session.slave_login D:e44606796cb5|xapi] Session.create
    trackid=b7aea96e08d6354f7b3cb6b2f7b2d024 pool=true uname=
    is_local_superuser=true auth_user_sid=
    parent=trackid=9834f5af41c964e225f24279aefe4e49
 [20110912T12:22:01.785Z|debug|xen01|188040
    unix-RPC||dummytaskhelper] task dispatch:session.get_uuid
    D:3ef100522433 created by task D:e44606796cb5
 [20110912T12:22:01.791Z|debug|xen01|188041
    unix-RPC||dummytaskhelper] task dispatch:VBD.create D:c9808a91c978
    created by task R:5ecc0b721dea
 [20110912T12:22:01.794Z| info|xen01|188041
    unix-RPC|dispatch:VBD.create D:c9808a91c978|taskhelper] task
    VBD.create R:ad6ec37bb77c
    (uuid:b43f0be3-cc14-7e08-2fec-da0ea9b7cd83) created
    (trackid=b7aea96e08d6354f7b3cb6b2f7b2d024) by task R:5ecc0b721dea
 [20110912T12:22:01.794Z|debug|xen01|188041 unix-RPC|VBD.create
    R:ad6ec37bb77c|audit] VBD.create: VM =
    '4ec9bdb8-f506-442f-867c-92bdb27cbfc2 (Control domain on host:
    xen01)'; VDI = 'f2c83f05-ee59-4a49-ad5d-2ecdfd6a8ca1'
 [20110912T12:22:01.794Z|debug|xen01|188041 unix-RPC|VBD.create
    R:ad6ec37bb77c|xapi] Raised at db_cache_types.ml:75.27-76 ->
    db_cache_types.ml:118.2-40 -> pervasiveext.ml:22.2-9
 [20110912T12:22:01.796Z|debug|xen01|188041 unix-RPC|VBD.create
    R:ad6ec37bb77c|xapi] VBD.create (device = 0; uuid =
    364b066b-ccaa-b312-f79c-e881da1c8a81; ref =
    OpaqueRef:51a0f891-852f-6064-041e-43928f448989)
 [20110912T12:22:01.799Z|debug|xen01|188041 unix-RPC|VBD.create
    R:ad6ec37bb77c|xapi] Raised at db_cache_types.ml:75.27-76 ->
    db_cache_types.ml:118.2-40 -> pervasiveext.ml:22.2-9
 [20110912T12:22:01.803Z|debug|xen01|188042
    unix-RPC||dummytaskhelper] task dispatch:VBD.get_uuid D:2261910bf682
    created by task R:5ecc0b721dea
 [20110912T12:22:01.807Z|debug|xen01|188043
    unix-RPC||dummytaskhelper] task dispatch:VM.get_uuid D:eb246c01b688
    created by task R:5ecc0b721dea
 [20110912T12:22:01.810Z|debug|xen01|188035 unix-RPC|VM.start_on
    R:5ecc0b721dea|xapi] created VBD (uuid
    364b066b-ccaa-b312-f79c-e881da1c8a81); attempting to hotplug to VM
    (uuid: 4ec9bdb8-f506-442f-867c-92bdb27cbfc2)
 [20110912T12:22:01.812Z|debug|xen01|188044
    unix-RPC||dummytaskhelper] task dispatch:VBD.plug D:4a0ea10c8fe8
    created by task R:5ecc0b721dea
 [20110912T12:22:01.815Z| info|xen01|188044
    unix-RPC|dispatch:VBD.plug D:4a0ea10c8fe8|taskhelper] task VBD.plug
    R:6147287d2941 (uuid:20d8b42b-d7b5-2b48-4211-704a6d82f12f) created
    (trackid=b7aea96e08d6354f7b3cb6b2f7b2d024) by task R:5ecc0b721dea
 [20110912T12:22:01.815Z|debug|xen01|188044 unix-RPC|VBD.plug
    R:6147287d2941|audit] VBD.plug: VBD =
    '364b066b-ccaa-b312-f79c-e881da1c8a81'
 [20110912T12:22:01.816Z|debug|xen01|188044 unix-RPC|VBD.plug
    R:6147287d2941|xapi] Raised at db_cache_types.ml:75.27-76 ->
    db_cache_types.ml:118.2-40 -> pervasiveext.ml:22.2-9
 [20110912T12:22:01.816Z|debug|xen01|188044 unix-RPC|VBD.plug
    R:6147287d2941|xapi] Raised at db_cache_types.ml:75.27-76 ->
    db_cache_types.ml:118.2-40 -> pervasiveext.ml:22.2-9
 [20110912T12:22:01.816Z|debug|xen01|188044 unix-RPC|VBD.plug
    R:6147287d2941|locking_helpers] Acquired lock on VM
    OpaqueRef:38786ebf-38e6-97b0-7a31-333ddf8dea49 with token 50
 [20110912T12:22:01.816Z|debug|xen01|188044 unix-RPC|VBD.plug
    R:6147287d2941|xapi] vbd_plug: attempting to attach vbd
 [20110912T12:22:01.816Z|debug|xen01|188044 unix-RPC|VBD.plug
    R:6147287d2941|xapi] Attempting to dynamically attach VBD to domid 0
 [20110912T12:22:01.816Z|debug|xen01|188044 unix-RPC|VBD.plug
    R:6147287d2941|sm] SM nfs vdi_attach
    sr=OpaqueRef:2fb2c4e1-43f1-1290-f005-6bed602a3531
    vdi=OpaqueRef:e8cdc9a8-4975-20d7-dce8-71887c184a91 writable=false
 [20110912T12:22:01.818Z|debug|xen01|188044 unix-RPC|sm_exec
    D:f965d1e6b383|xapi] Raised at db_cache_types.ml:75.27-76 ->
    db_cache_types.ml:118.2-40 -> pervasiveext.ml:22.2-9
 [20110912T12:22:01.821Z| info|xen01|188044 unix-RPC|sm_exec
    D:7b8969b683cd|xapi] Session.create
    trackid=e1dd4867640158cddca03b3de87ddb32 pool=false uname=
    is_local_superuser=true auth_user_sid=
    parent=trackid=9834f5af41c964e225f24279aefe4e49
 [20110912T12:22:01.823Z|debug|xen01|188045
    unix-RPC||dummytaskhelper] task dispatch:session.get_uuid
    D:88796cb2eac4 created by task D:7b8969b683cd
 [20110912T12:22:01.920Z|debug|xen01|188046
    unix-RPC||dummytaskhelper] task dispatch:host.get_other_config
    D:1220d380196e created by task R:6147287d2941
 [20110912T12:22:03.191Z|debug|xen01|188044 unix-RPC|sm_exec
    D:7b8969b683cd|xapi] Raised at sm_exec.ml:213.7-69 ->
    pervasiveext.ml:22.2-9
 [20110912T12:22:03.192Z| info|xen01|188044 unix-RPC|sm_exec
    D:7b8969b683cd|xapi] Session.destroy
    trackid=e1dd4867640158cddca03b3de87ddb32
 [20110912T12:22:03.192Z|debug|xen01|188044 unix-RPC|sm_exec
    D:7b8969b683cd|backtrace] Raised at pervasiveext.ml:26.22-25 ->
    server_helpers.ml:74.11-23
 [20110912T12:22:03.192Z|debug|xen01|188044 unix-RPC|sm_exec
    D:7b8969b683cd|dispatcher] Server_helpers.exec exception_handler:
    Got exception SR_BACKEND_FAILURE_65: [ ; Failed to load VDI
    [opterr=Failed load VDI information
    /var/run/sr-mount/0554f47c-1a12-9bd1-ea9b-d2b68984f0ed/f2c83f05-ee59-4a49-ad5d-2ecdfd6a8ca1.vhd]; 
    ]
 [20110912T12:22:03.192Z|debug|xen01|188044 unix-RPC|sm_exec
    D:7b8969b683cd|dispatcher] Raised at string.ml:150.25-34 ->
    stringext.ml:108.13-29
 [20110912T12:22:03.192Z|debug|xen01|188044 unix-RPC|sm_exec
    D:7b8969b683cd|backtrace] Raised at string.ml:150.25-34 ->
    stringext.ml:108.13-29
 [20110912T12:22:03.192Z|debug|xen01|188044 unix-RPC|sm_exec
    D:7b8969b683cd|xapi] Raised at server_helpers.ml:92.14-15 ->
    pervasiveext.ml:22.2-9
 [20110912T12:22:03.192Z|debug|xen01|188044 unix-RPC|sm_exec
    D:7b8969b683cd|xapi] Raised at pervasiveext.ml:26.22-25 ->
    pervasiveext.ml:22.2-9
 [20110912T12:22:03.192Z|debug|xen01|188044 unix-RPC|VBD.plug
    R:6147287d2941|xapi] Raised at pervasiveext.ml:26.22-25 ->
    sm.ml:152.23-96 -> storage_access.ml:191.6-152 ->
    threadext.ml:20.23-27 -> threadext.ml:20.65-68 ->
    storage_access.ml:271.2-39 -> xapi_vbd.ml:41.1-47 ->
    pervasiveext.ml:22.2-9
 [20110912T12:22:03.192Z|debug|xen01|188044 unix-RPC|VBD.plug
    R:6147287d2941|xapi] Raised at pervasiveext.ml:26.22-25 ->
    pervasiveext.ml:22.2-9
 [20110912T12:22:03.192Z|debug|xen01|188044 unix-RPC|VBD.plug
    R:6147287d2941|locking_helpers] Released lock on VM
    OpaqueRef:38786ebf-38e6-97b0-7a31-333ddf8dea49 with token 50
 [20110912T12:22:03.192Z|debug|xen01|188044 unix-RPC|VBD.plug
    R:6147287d2941|xapi] Raised at pervasiveext.ml:26.22-25 ->
    message_forwarding.ml:232.27-46 -> pervasiveext.ml:22.2-9
 [20110912T12:22:03.193Z|debug|xen01|188044 unix-RPC|VBD.plug
    R:6147287d2941|xapi] Raised at db_cache_types.ml:75.27-76 ->
    db_cache_types.ml:118.2-40 -> pervasiveext.ml:22.2-9
 [20110912T12:22:03.193Z|debug|xen01|188044 unix-RPC|VBD.plug
    R:6147287d2941|backtrace] Raised at pervasiveext.ml:26.22-25 ->
    message_forwarding.ml:3026.6-194 -> rbac.ml:230.16-23
 [20110912T12:22:03.194Z|debug|xen01|188044 unix-RPC|VBD.plug
    R:6147287d2941|backtrace] Raised at rbac.ml:239.10-15 ->
    server_helpers.ml:77.11-41
 [20110912T12:22:03.194Z|debug|xen01|188044 unix-RPC|VBD.plug
    R:6147287d2941|dispatcher] Server_helpers.exec exception_handler:
    Got exception SR_BACKEND_FAILURE_65: [ ; Failed to load VDI
    [opterr=Failed load VDI information
    /var/run/sr-mount/0554f47c-1a12-9bd1-ea9b-d2b68984f0ed/f2c83f05-ee59-4a49-ad5d-2ecdfd6a8ca1.vhd]; 
    ]
 [20110912T12:22:03.194Z|debug|xen01|188044 unix-RPC|VBD.plug
    R:6147287d2941|dispatcher] Raised at string.ml:150.25-34 ->
    stringext.ml:108.13-29
 [20110912T12:22:03.194Z|debug|xen01|188044 unix-RPC|VBD.plug
    R:6147287d2941|backtrace] Raised at string.ml:150.25-34 ->
    stringext.ml:108.13-29
 [20110912T12:22:03.194Z|debug|xen01|188044 unix-RPC|VBD.plug
    R:6147287d2941|xapi] Raised at server_helpers.ml:92.14-15 ->
    pervasiveext.ml:22.2-9
 [20110912T12:22:03.195Z|debug|xen01|188044 unix-RPC|VBD.plug
    R:6147287d2941|xapi] Raised at pervasiveext.ml:26.22-25 ->
    pervasiveext.ml:22.2-9
 [20110912T12:22:03.195Z|debug|xen01|188044
    unix-RPC|dispatch:VBD.plug D:4a0ea10c8fe8|backtrace] Raised at
    pervasiveext.ml:26.22-25 -> server_helpers.ml:152.10-106 ->
    server.ml:26150.19-167 -> server_helpers.ml:118.4-7
 [20110912T12:22:03.198Z|debug|xen01|188035 unix-RPC|VM.start_on
    R:5ecc0b721dea|xapi] Raised at client.ml:6.37-75 ->
    client.ml:9082.12-61 -> list.ml:69.12-15 ->
    attach_helpers.ml:63.7-787 -> pervasiveext.ml:22.2-9
 [20110912T12:22:03.201Z|debug|xen01|188047
    unix-RPC||dummytaskhelper] task dispatch:session.slave_login
    D:aa0468e635a0 created by task R:5ecc0b721dea
 [20110912T12:22:03.204Z| info|xen01|188047
    unix-RPC|session.slave_login D:96a7944a8155|xapi] Session.create
    trackid=1533c84883b45b2892655b2656c4b2b1 pool=true uname=
    is_local_superuser=true auth_user_sid=
    parent=trackid=9834f5af41c964e225f24279aefe4e49
 [20110912T12:22:03.207Z|debug|xen01|188048
    unix-RPC||dummytaskhelper] task dispatch:session.get_uuid
    D:c5ba8309fd09 created by task D:96a7944a8155
 [20110912T12:22:03.212Z|debug|xen01|188049
    unix-RPC||dummytaskhelper] task dispatch:VBD.unplug D:c02fcb41ad59
    created by task R:5ecc0b721dea
 [20110912T12:22:03.215Z| info|xen01|188049
    unix-RPC|dispatch:VBD.unplug D:c02fcb41ad59|taskhelper] task
    VBD.unplug R:5d27ceacd0ed
    (uuid:16537a45-d957-ccac-8632-d7cd0ea4107b) created
    (trackid=1533c84883b45b2892655b2656c4b2b1) by task R:5ecc0b721dea
 [20110912T12:22:03.215Z|debug|xen01|188049 unix-RPC|VBD.unplug
    R:5d27ceacd0ed|audit] VBD.unplug: VBD =
    '364b066b-ccaa-b312-f79c-e881da1c8a81'
 [20110912T12:22:03.215Z|debug|xen01|188049 unix-RPC|VBD.unplug
    R:5d27ceacd0ed|xapi] Raised at db_cache_types.ml:75.27-76 ->
    db_cache_types.ml:118.2-40 -> pervasiveext.ml:22.2-9
 [20110912T12:22:03.215Z|debug|xen01|188049 unix-RPC|VBD.unplug
    R:5d27ceacd0ed|backtrace] Raised at xapi_vbd_helpers.ml:185.33-72
    -> message_forwarding.ml:2970.4-64 ->
    message_forwarding.ml:285.6-9
 [20110912T12:22:03.216Z|debug|xen01|188049 unix-RPC|VBD.unplug
    R:5d27ceacd0ed|xapi] Caught exception while DEVICE_ALREADY_DETACHED:
    [ OpaqueRef:51a0f891-852f-6064-041e-43928f448989 ] in message
    forwarder: marking VBD for VBD.unplug
 [20110912T12:22:03.216Z|debug|xen01|188049 unix-RPC|VBD.unplug
    R:5d27ceacd0ed|backtrace] Raised at message_forwarding.ml:288.10-11
    -> threadext.ml:20.23-27 -> threadext.ml:20.65-68 ->
    message_forwarding.ml:152.16-22 ->
    message_forwarding.ml:2975.6-90 ->
    message_forwarding.ml:3035.6-199 -> rbac.ml:230.16-23
 [20110912T12:22:03.216Z|debug|xen01|188049 unix-RPC|VBD.unplug
    R:5d27ceacd0ed|backtrace] Raised at rbac.ml:239.10-15 ->
    server_helpers.ml:77.11-41
 [20110912T12:22:03.216Z|debug|xen01|188049 unix-RPC|VBD.unplug
    R:5d27ceacd0ed|dispatcher] Server_helpers.exec exception_handler:
    Got exception DEVICE_ALREADY_DETACHED: [
    OpaqueRef:51a0f891-852f-6064-041e-43928f448989 ]
 [20110912T12:22:03.216Z|debug|xen01|188049 unix-RPC|VBD.unplug
    R:5d27ceacd0ed|dispatcher] Raised at string.ml:150.25-34 ->
    stringext.ml:108.13-29
 [20110912T12:22:03.216Z|debug|xen01|188049 unix-RPC|VBD.unplug
    R:5d27ceacd0ed|backtrace] Raised at string.ml:150.25-34 ->
    stringext.ml:108.13-29
 [20110912T12:22:03.216Z|debug|xen01|188049 unix-RPC|VBD.unplug
    R:5d27ceacd0ed|xapi] Raised at server_helpers.ml:92.14-15 ->
    pervasiveext.ml:22.2-9
 [20110912T12:22:03.217Z|debug|xen01|188049 unix-RPC|VBD.unplug
    R:5d27ceacd0ed|xapi] Raised at pervasiveext.ml:26.22-25 ->
    pervasiveext.ml:22.2-9
 [20110912T12:22:03.217Z|debug|xen01|188049
    unix-RPC|dispatch:VBD.unplug D:c02fcb41ad59|backtrace] Raised at
    pervasiveext.ml:26.22-25 -> server_helpers.ml:152.10-106 ->
    server.ml:26174.19-167 -> server_helpers.ml:118.4-7
 [20110912T12:22:03.217Z|debug|xen01|188035 unix-RPC|VM.start_on
    R:5ecc0b721dea|backtrace] Raised at client.ml:6.37-75 ->
    client.ml:9088.12-63 -> attach_helpers.ml:26.4-41 ->
    helpers.ml:34.35-38
 [20110912T12:22:03.217Z|debug|xen01|188035 unix-RPC|VM.start_on
    R:5ecc0b721dea|helpers] Ignoring exception: DEVICE_ALREADY_DETACHED:
    [ OpaqueRef:51a0f891-852f-6064-041e-43928f448989 ] while unplugging
    disk from VM
 [20110912T12:22:03.219Z|debug|xen01|188050
    unix-RPC||dummytaskhelper] task dispatch:VBD.destroy D:c47bd3576d47
    created by task R:5ecc0b721dea
 [20110912T12:22:03.223Z| info|xen01|188050
    unix-RPC|dispatch:VBD.destroy D:c47bd3576d47|taskhelper] task
    VBD.destroy R:95470c39df5c
    (uuid:eb8534b6-f644-d2d5-48af-425491e0f88b) created
    (trackid=1533c84883b45b2892655b2656c4b2b1) by task R:5ecc0b721dea
 [20110912T12:22:03.223Z|debug|xen01|188050 unix-RPC|VBD.destroy
    R:95470c39df5c|audit] VBD.destroy: VBD =
    '364b066b-ccaa-b312-f79c-e881da1c8a81'
 [20110912T12:22:03.223Z|debug|xen01|188050 unix-RPC|VBD.destroy
    R:95470c39df5c|xapi] VBD.destroy (uuid =
    364b066b-ccaa-b312-f79c-e881da1c8a81; ref =
    OpaqueRef:51a0f891-852f-6064-041e-43928f448989)
 [20110912T12:22:03.226Z|debug|xen01|188051
    unix-RPC||dummytaskhelper] task dispatch:session.logout
    D:e33a5c1c5b93 created by task R:5ecc0b721dea
 [20110912T12:22:03.229Z| info|xen01|188051 unix-RPC|session.logout
    D:e6d11b45345d|xapi] Session.destroy
    trackid=1533c84883b45b2892655b2656c4b2b1
 [20110912T12:22:03.229Z|debug|xen01|188035 unix-RPC|VM.start_on
    R:5ecc0b721dea|xapi] Raised at pervasiveext.ml:26.22-25 ->
    pervasiveext.ml:22.2-9
 [20110912T12:22:03.231Z|debug|xen01|188052
    unix-RPC||dummytaskhelper] task dispatch:session.logout
    D:323e65aad0e3 created by task R:5ecc0b721dea
 [20110912T12:22:03.233Z| info|xen01|188052 unix-RPC|session.logout
    D:4fe260a5a195|xapi] Session.destroy
    trackid=b7aea96e08d6354f7b3cb6b2f7b2d024
 [20110912T12:22:03.234Z|debug|xen01|188035 unix-RPC|VM.start_on
    R:5ecc0b721dea|backtrace] Raised at pervasiveext.ml:26.22-25 ->
    vmops.ml:272.4-333 -> vmops.ml:1050.8-64
 [20110912T12:22:03.234Z|debug|xen01|188035 unix-RPC|VM.start_on
    R:5ecc0b721dea|xapi] Vmops.start_paused caught:
    SR_BACKEND_FAILURE_65: [ ; Failed to load VDI [opterr=Failed load
    VDI information
    /var/run/sr-mount/0554f47c-1a12-9bd1-ea9b-d2b68984f0ed/f2c83f05-ee59-4a49-ad5d-2ecdfd6a8ca1.vhd]; 
    ]: calling domain_destroy
 [20110912T12:22:03.234Z|error|xen01|188035 unix-RPC|VM.start_on
    R:5ecc0b721dea|xapi] Memory F 13309484 KiB S 0 KiB T 32766 MiB
 [20110912T12:22:03.235Z|debug|xen01|188035 unix-RPC|VM.start_on
    R:5ecc0b721dea|xenops] Domain.destroy: all known devices = [  ]
 [20110912T12:22:03.235Z|debug|xen01|188035 unix-RPC|VM.start_on
    R:5ecc0b721dea|xenops] Domain.destroy calling Xc.domain_destroy
    (domid 18)
 [20110912T12:22:03.235Z|debug|xen01|188035 unix-RPC|VM.start_on
    R:5ecc0b721dea|xenops] No qemu-dm pid in xenstore; assuming this
    domain was PV
 [20110912T12:22:03.241Z|debug|xen01|188035 unix-RPC|VM.start_on
    R:5ecc0b721dea|xenops] Domain.destroy: rm /local/domain/18
 [20110912T12:22:03.241Z|debug|xen01|188035 unix-RPC|VM.start_on
    R:5ecc0b721dea|xenops] Domain.destroy: deleting backend paths
 [20110912T12:22:03.242Z|debug|xen01|188035 unix-RPC|VM.start_on
    R:5ecc0b721dea|xenops] Xc.domain_getinfo 18 threw: getinfo failed:
    domain 18: domain 18 - vcpu 0 getinfo failed: hypercall 36 fail: 3:
    No such process (ret -1) -- assuming domain nolonger exists
 [20110912T12:22:03.242Z|debug|xen01|188035 unix-RPC|VM.start_on
    R:5ecc0b721dea|xenops] Xc.domain_getinfo 18 threw: getinfo failed:
    domain 18: getinfo failed: domain 18: domain 18 - vcpu 0 getinfo
    failed: hypercall 36 fail: 3: No such process (ret -1) -- assuming
    domain nolonger exists
 [20110912T12:22:03.242Z|debug|xen01|188035 unix-RPC|VM.start_on
    R:5ecc0b721dea|xapi] Raised at vmops.ml:1136.14-51 ->
    pervasiveext.ml:22.2-9
 [20110912T12:22:03.243Z|debug|xen01|188035 unix-RPC|VM.start_on
    R:5ecc0b721dea|xapi] Raised at pervasiveext.ml:26.22-25 ->
    xc.ml:125.13-17 -> xc.ml:125.56-59 -> xapi_vm.ml:247.1-110
    -> pervasiveext.ml:22.2-9
 [20110912T12:22:03.243Z|debug|xen01|188035 unix-RPC|VM.start_on
    R:5ecc0b721dea|xapi] Raised at pervasiveext.ml:26.22-25 ->
    pervasiveext.ml:22.2-9
 [20110912T12:22:03.243Z|debug|xen01|188035 unix-RPC|VM.start_on
    R:5ecc0b721dea|locking_helpers] Released lock on VM
    OpaqueRef:78d1c316-5765-d95a-e2d1-fb0cec16a7c4 with token 49
 [20110912T12:22:03.243Z|debug|xen01|188035 unix-RPC|VM.start_on
    R:5ecc0b721dea|xapi] Raised at pervasiveext.ml:26.22-25 ->
    pervasiveext.ml:22.2-9
 [20110912T12:22:03.243Z|debug|xen01|188035 unix-RPC|VM.start_on
    R:5ecc0b721dea|xapi] Raised at pervasiveext.ml:26.22-25 ->
    message_forwarding.ml:232.27-46 -> pervasiveext.ml:22.2-9
 [20110912T12:22:03.245Z|debug|xen01|188035 unix-RPC|VM.start_on
    R:5ecc0b721dea|xapi] Raised at pervasiveext.ml:26.22-25 ->
    pervasiveext.ml:22.2-9
 [20110912T12:22:03.245Z|debug|xen01|957||thread_queue]
    vm_lifecycle_op: completed processing 1 items: queue = [  ](0)
 [20110912T12:22:03.249Z|debug|xen01|188035 unix-RPC|VM.start_on
    R:5ecc0b721dea|xapi] Raised at pervasiveext.ml:26.22-25 ->
    pervasiveext.ml:22.2-9
 [20110912T12:22:03.249Z|debug|xen01|188035 unix-RPC|VM.start_on
    R:5ecc0b721dea|xapi] Raised at pervasiveext.ml:26.22-25 ->
    pervasiveext.ml:22.2-9
 [20110912T12:22:03.250Z|debug|xen01|188035 unix-RPC|VM.start_on
    R:5ecc0b721dea|xapi] Checking for vdis_reset_and_caching...
 [20110912T12:22:03.250Z|debug|xen01|188035 unix-RPC|VM.start_on
    R:5ecc0b721dea|xapi] Op allowed!
 [20110912T12:22:03.250Z|debug|xen01|188035 unix-RPC|VM.start_on
    R:5ecc0b721dea|backtrace] Raised at pervasiveext.ml:26.22-25 ->
    message_forwarding.ml:1063.6-957 -> rbac.ml:230.16-23
 [20110912T12:22:03.251Z|debug|xen01|188035 unix-RPC|VM.start_on
    R:5ecc0b721dea|backtrace] Raised at rbac.ml:239.10-15 ->
    server_helpers.ml:77.11-41
 [20110912T12:22:03.251Z|debug|xen01|188035 unix-RPC|VM.start_on
    R:5ecc0b721dea|dispatcher] Server_helpers.exec exception_handler:
    Got exception SR_BACKEND_FAILURE_65: [ ; Failed to load VDI
    [opterr=Failed load VDI information
    /var/run/sr-mount/0554f47c-1a12-9bd1-ea9b-d2b68984f0ed/f2c83f05-ee59-4a49-ad5d-2ecdfd6a8ca1.vhd]; 
    ]
 [20110912T12:22:03.251Z|debug|xen01|188035 unix-RPC|VM.start_on
    R:5ecc0b721dea|dispatcher] Raised at string.ml:150.25-34 ->
    stringext.ml:108.13-29
 [20110912T12:22:03.251Z|debug|xen01|188035 unix-RPC|VM.start_on
    R:5ecc0b721dea|backtrace] Raised at string.ml:150.25-34 ->
    stringext.ml:108.13-29
 [20110912T12:22:03.251Z|debug|xen01|188035 unix-RPC|VM.start_on
    R:5ecc0b721dea|xapi] Raised at server_helpers.ml:92.14-15 ->
    pervasiveext.ml:22.2-9
 [20110912T12:22:03.251Z|debug|xen01|188035 unix-RPC|VM.start_on
    R:5ecc0b721dea|xapi] Raised at pervasiveext.ml:26.22-25 ->
    pervasiveext.ml:22.2-9
 [20110912T12:22:03.252Z|debug|xen01|188035
    unix-RPC|dispatch:VM.start_on D:a30b3a88e0cf|backtrace] Raised at
    pervasiveext.ml:26.22-25 -> server_helpers.ml:152.10-106 ->
    server.ml:8033.19-167 -> server_helpers.ml:118.4-7
 [20110912T12:22:03.252Z|debug|xen01|188035 unix-RPC||xapi] Raised at
    client.ml:6.37-75 -> client.ml:4118.12-89 ->
    cli_operations.ml:1670.14-18 -> cli_operations.ml:1715.13-29
    -> cli_operations.ml:1935.7-418 -> xapi_cli.ml:120.18-56 ->
    pervasiveext.ml:22.2-9
 [20110912T12:22:03.255Z| info|xen01|188035 unix-RPC|session.logout
    D:8e5c3a9d5346|xapi] Session.destroy
    trackid=9d2394cc4c3c07700fec4b221c0593c4
 [20110912T12:22:03.255Z|debug|xen01|188035 unix-RPC||backtrace]
    Raised at pervasiveext.ml:26.22-25 -> xapi_cli.ml:119.2-138 ->
    xapi_cli.ml:208.7-44 -> xapi_cli.ml:260.4-23
 [20110912T12:22:03.255Z|debug|xen01|188035 unix-RPC||cli]
    Xapi_cli.exception_handler: Got exception SR_BACKEND_FAILURE_65: [ ;
    Failed to load VDI [opterr=Failed load VDI information
    /var/run/sr-mount/0554f47c-1a12-9bd1-ea9b-d2b68984f0ed/f2c83f05-ee59-4a49-ad5d-2ecdfd6a8ca1.vhd]; 
    ]
 [20110912T12:22:03.255Z|debug|xen01|188035 unix-RPC||cli] Raised at
    string.ml:150.25-34 -> stringext.ml:108.13-29
 
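To summarise that log: xapi creates a VBD attaching the VDI to the
   control domain, vdi_attach fails in the SM backend with
   SR_BACKEND_FAILURE_65, and xapi unwinds (VBD.unplug, VBD.destroy,
   domain_destroy). To correlate the objects involved, something like
   the following xe queries could be run from dom0 (UUIDs taken from
   the log lines above):

   # The VDI record xapi was trying to attach (from the VBD.create audit line)
   xe vdi-list uuid=f2c83f05-ee59-4a49-ad5d-2ecdfd6a8ca1 params=all

   # Any VBDs still referencing that VDI
   xe vbd-list vdi-uuid=f2c83f05-ee59-4a49-ad5d-2ecdfd6a8ca1

   # Ask xapi to re-read VDI metadata from the NFS mount
   xe sr-scan uuid=0554f47c-1a12-9bd1-ea9b-d2b68984f0ed
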
 Any assistance would be very much appreciated.
 
 Kindest regards,
 
 Ciaran.
 
 
 
 On 09/09/11 13:47, David Markey wrote:
I bet your VHD is corrupt.

Can you give a bit more SMlog context, i.e. 10 lines above and below?
Can you give dmesg output also.
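
(For reference, a sketch of how that context could be gathered on the
host, assuming the standard XCP dom0 log layout:)

   # ten lines of context around the failure in SMlog
   grep -C 10 'Failed to load VDI' /var/log/SMlog

   # recent kernel messages, e.g. NFS timeouts or I/O errors
   dmesg | tail -n 50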
 
        
On 9 September 2011 06:32, Grant McWilliams <grantmasterflash@xxxxxxxxx> wrote:
           
            
              
              
NFS is sideways. Start at the low layers on the host with the NFS
share and make sure everything is good there first. For instance, if
you have more than one NFS mount, or you change the mount, or it
doesn't mount up, etc., you will get these errors. I got this when I
had an SR connected and I shut down NFS, renamed the export and
started it back up. The SR sat there twitching until I realised what
I did.
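
(A quick sanity check of that layer might look like the sketch below;
the SR mount point comes from the logs above, and <nfs-server> is a
placeholder for the actual NFS host:)

   # on the XCP host: is the SR's NFS mount present and readable?
   mount -t nfs
   ls /var/run/sr-mount/0554f47c-1a12-9bd1-ea9b-d2b68984f0ed/

   # against the NFS server: is the expected export still offered?
   showmount -e <nfs-server>
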
On Mon, Sep 5, 2011 at 9:04 AM, Ciaran Kendellen <ciaran@xxxxxxxxxxxxxxx> wrote:
Hello there.

I had cause to force shutdown a VM recently, and on attempting to
restart it I get:
 
 Error code: SR_BACKEND_FAILURE_65
Error parameters: , Failed to load VDI [opterr=Failed load VDI information /var/run/sr-mount/0554f47c-1a12-9bd1-ea9b-d2b68984f0ed/e82a1032-5a0c-4ff7-96cf-1f62f32938db.vhd]
 
The VDI is on a shared NFS repository; other VMs came back up no problem.
 
Any ideas what to try, anyone?
 
 Thanks in advance,
 
 Ciaran.
 
 
 
 Grant McWilliams
 http://grantmcwilliams.com/
 
 
 
 
 
 _______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users 