
Re: [Xen-API] The vdi is not available



According to:

[25610] 2013-07-25 09:51:46.036917      ***** generic exception: vdi_attach: 
EXCEPTION SR.SROSError, The VDI is not available 
[opterr=/var/run/sr-mount/9f9aa794-86c0-9c36-a99d-1e5fdc14a206/72ad514a-f1f8-4a34-9907-9c6a3506520b.raw
 not found]

[16462] 2013-07-25 10:02:49.485672 ['/usr/sbin/td-util', 'query', 'vhd', '-vpf', '/var/run/sr-mount/9f9aa794-86c0-9c36-a99d-1e5fdc14a206/72ad514a-f1f8-4a34-9907-9c6a3506520b.vhd']

there is something wrong. It looks like it tries to open a .raw file instead of a .vhd.

Maybe one of your servers was installed with the "thin provisioning" option selected and the other servers were not?

As far as I know, thin provisioning uses vhd and non-thin provisioning uses raw,
so mixed installations will not work when they share the same storage.

My guess is that if you create a VM on one server it will create a .vhd image, and on the other a .raw image.
I can't be 100% certain, as I've always used thin provisioning.

You could check whether you have mixed raw/vhd files in /var/run/sr-mount/9f9aa794-86c0-9c36-a99d-1e5fdc14a206/
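
A quick sketch to check that (adjust the path if your SR uuid differs; this just counts the file extensions in the SR mount and looks for the affected VDI's image file):

# count how many files of each image format the SR mount contains
ls /var/run/sr-mount/9f9aa794-86c0-9c36-a99d-1e5fdc14a206/ | awk -F. '{print $NF}' | sort | uniq -c

# see which image file actually exists for the VDI from the log above
ls -l /var/run/sr-mount/9f9aa794-86c0-9c36-a99d-1e5fdc14a206/72ad514a-f1f8-4a34-9907-9c6a3506520b.*

If both .raw and .vhd files show up in the same SR, that would support the mixed-provisioning theory.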




On 25.07.2013 16:04, Andres E. Moya wrote:
this is from trying to start up the VM:

[25610] 2013-07-25 09:51:45.997895      lock: acquired 
/var/lock/sm/9f9aa794-86c0-9c36-a99d-1e5fdc14a206/sr
[25610] 2013-07-25 09:51:46.035698      Raising exception [46, The VDI is not 
available 
[opterr=/var/run/sr-mount/9f9aa794-86c0-9c36-a99d-1e5fdc14a206/72ad514a-f1f8-4a34-9907-9c6a3506520b.raw
 not found]]
[25610] 2013-07-25 09:51:46.035831      lock: released 
/var/lock/sm/9f9aa794-86c0-9c36-a99d-1e5fdc14a206/sr
[25610] 2013-07-25 09:51:46.036917      ***** generic exception: vdi_attach: 
EXCEPTION SR.SROSError, The VDI is not available 
[opterr=/var/run/sr-mount/9f9aa794-86c0-9c36-a99d-1e5fdc14a206/72ad514a-f1f8-4a34-9907-9c6a3506520b.raw
 not found]
   File "/opt/xensource/sm/SRCommand.py", line 96, in run
     return self._run_locked(sr)
   File "/opt/xensource/sm/SRCommand.py", line 137, in _run_locked
     target = sr.vdi(self.vdi_uuid)
   File "/opt/xensource/sm/NFSSR", line 213, in vdi
     return NFSFileVDI(self, uuid)
   File "/opt/xensource/sm/VDI.py", line 102, in __init__
     self.load(uuid)
   File "/opt/xensource/sm/FileSR.py", line 370, in load
     opterr="%s not found" % self.path)
   File "/opt/xensource/sm/xs_errors.py", line 49, in __init__
     raise SR.SROSError(errorcode, errormessage)

[25610] 2013-07-25 09:51:46.037204      lock: closed 
/var/lock/sm/9f9aa794-86c0-9c36-a99d-1e5fdc14a206/sr


and this is on a migrate (destination):

[29480] 2013-07-25 09:53:18.859918      lock: acquired 
/var/lock/sm/9f9aa794-86c0-9c36-a99d-1e5fdc14a206/sr
[29480] 2013-07-25 09:53:18.897479      Raising exception [46, The VDI is not 
available 
[opterr=/var/run/sr-mount/9f9aa794-86c0-9c36-a99d-1e5fdc14a206/72ad514a-f1f8-4a34-9907-9c6a3506520b.raw
 not found]]
[29480] 2013-07-25 09:53:18.897609      lock: released 
/var/lock/sm/9f9aa794-86c0-9c36-a99d-1e5fdc14a206/sr
[29480] 2013-07-25 09:53:18.898701      ***** generic exception: vdi_attach: 
EXCEPTION SR.SROSError, The VDI is not available 
[opterr=/var/run/sr-mount/9f9aa794-86c0-9c36-a99d-1e5fdc14a206/72ad514a-f1f8-4a34-9907-9c6a3506520b.raw
 not found]
   File "/opt/xensource/sm/SRCommand.py", line 96, in run
     return self._run_locked(sr)
   File "/opt/xensource/sm/SRCommand.py", line 137, in _run_locked
     target = sr.vdi(self.vdi_uuid)
   File "/opt/xensource/sm/NFSSR", line 213, in vdi
     return NFSFileVDI(self, uuid)
   File "/opt/xensource/sm/VDI.py", line 102, in __init__
     self.load(uuid)
   File "/opt/xensource/sm/FileSR.py", line 370, in load
     opterr="%s not found" % self.path)
   File "/opt/xensource/sm/xs_errors.py", line 49, in __init__
     raise SR.SROSError(errorcode, errormessage)

[29480] 2013-07-25 09:53:18.898972      lock: closed 
/var/lock/sm/9f9aa794-86c0-9c36-a99d-1e5fdc14a206/sr

and this is on a migrate (source):

[16462] 2013-07-25 10:02:48.800862      blktap2.deactivate
[16462] 2013-07-25 10:02:48.800965      lock: acquired 
/var/lock/sm/72ad514a-f1f8-4a34-9907-9c6a3506520b/vdi
[16462] 2013-07-25 10:02:48.819441      ['/usr/sbin/tap-ctl', 'close', '-p', 
'5578', '-m', '7']
[16462] 2013-07-25 10:02:49.295250       = 0
[16462] 2013-07-25 10:02:49.295467      ['/usr/sbin/tap-ctl', 'detach', '-p', 
'5578', '-m', '7']
[16462] 2013-07-25 10:02:49.299579       = 0
[16462] 2013-07-25 10:02:49.299794      ['/usr/sbin/tap-ctl', 'free', '-m', '7']
[16462] 2013-07-25 10:02:49.303645       = 0
[16462] 2013-07-25 10:02:49.303902      tap.deactivate: Shut down 
Tapdisk(vhd:/var/run/sr-mount/9f9aa794-86c0-9c36-a99d-1e5fdc14a206/72ad514a-f1f8-4a34-9907-9c6a3506520b.vhd,
 pid=5578, minor=7, state=R)
[16462] 2013-07-25 10:02:49.485672      ['/usr/sbin/td-util', 'query', 'vhd', 
'-vpf', 
'/var/run/sr-mount/9f9aa794-86c0-9c36-a99d-1e5fdc14a206/72ad514a-f1f8-4a34-9907-9c6a3506520b.vhd']
[16462] 2013-07-25 10:02:49.510929        pread SUCCESS
[16462] 2013-07-25 10:02:49.537296      Removed host key 
host_OpaqueRef:645996e3-d9cc-59e1-3842-65d679e9e080 for 
72ad514a-f1f8-4a34-9907-9c6a3506520b
[16462] 2013-07-25 10:02:49.537451      lock: released 
/var/lock/sm/72ad514a-f1f8-4a34-9907-9c6a3506520b/vdi
[16462] 2013-07-25 10:02:49.537540      lock: closed 
/var/lock/sm/72ad514a-f1f8-4a34-9907-9c6a3506520b/vdi
[16462] 2013-07-25 10:02:49.537641      lock: closed 
/var/lock/sm/9f9aa794-86c0-9c36-a99d-1e5fdc14a206/sr
[16462] 2013-07-25 10:02:49.537862      lock: closed 
/var/lock/sm/9f9aa794-86c0-9c36-a99d-1e5fdc14a206/sr
[16636] 2013-07-25 10:02:50.103352      lock: acquired 
/var/lock/sm/9f9aa794-86c0-9c36-a99d-1e5fdc14a206/sr
[16636] 2013-07-25 10:02:50.117961      ['/usr/sbin/td-util', 'query', 'vhd', 
'-vpf', 
'/var/run/sr-mount/9f9aa794-86c0-9c36-a99d-1e5fdc14a206/72ad514a-f1f8-4a34-9907-9c6a3506520b.vhd']
[16636] 2013-07-25 10:02:50.137963        pread SUCCESS
[16636] 2013-07-25 10:02:50.139106      vdi_detach {'sr_uuid': 
'9f9aa794-86c0-9c36-a99d-1e5fdc14a206', 'subtask_of': 
'DummyRef:|ebe0d00f-b082-77ba-b209-095e71a0c1c7|VDI.detach', 'vdi_ref': 
'OpaqueRef:31009428-3c98-c005-67ed-ddcc5e432e03', 'vdi_on_boot': 'persist', 
'args': [], 'vdi_location': '72ad514a-f1f8-4a34-9907-9c6a3506520b', 'host_ref': 
'OpaqueRef:645996e3-d9cc-59e1-3842-65d679e9e080', 'session_ref': 
'OpaqueRef:f4170801-402a-0935-a759-19a46e700a87', 'device_config': {'server': 
'10.254.253.9', 'SRmaster': 'true', 'serverpath': '/xen', 'options': ''}, 
'command': 'vdi_detach', 'vdi_allow_caching': 'false', 'sr_ref': 
'OpaqueRef:fefba283-7462-1f5a-b4e2-d58169c4b318', 'vdi_uuid': 
'72ad514a-f1f8-4a34-9907-9c6a3506520b'}
[16636] 2013-07-25 10:02:50.139415      lock: closed 
/var/lock/sm/72ad514a-f1f8-4a34-9907-9c6a3506520b/vdi
[16636] 2013-07-25 10:02:50.139520      lock: released 
/var/lock/sm/9f9aa794-86c0-9c36-a99d-1e5fdc14a206/sr
[16636] 2013-07-25 10:02:50.139779      lock: closed 
/var/lock/sm/9f9aa794-86c0-9c36-a99d-1e5fdc14a206/sr
[17886] 2013-07-25 10:03:16.326423      sr_scan {'sr_uuid': 
'fc63fc27-89ca-dbc8-228d-27e3c74779bb', 'subtask_of': 
'DummyRef:|2f34582a-2b3b-82be-b1f6-7f374565c8e8|SR.scan', 'args': [], 
'host_ref': 'OpaqueRef:645996e3-d9cc-59e1-3842-65d679e9e080', 'session_ref': 
'OpaqueRef:bfffd224-6edc-1cb4-9145-c0c95cbb063b', 'device_config': {'iso_path': 
'/iso', 'type': 'cifs', 'SRmaster': 'true', 'location': 
'//10.254.254.30/share'}, 'command': 'sr_scan', 'sr_ref': 
'OpaqueRef:9c7f5cd0-fd88-16e2-2426-6e066a1183ab'}


----- Original Message -----
From: "SÃÂbastien RICCIO" <sr@xxxxxxxxxxxxxxx>
To: "Andres E. Moya" <amoya@xxxxxxxxxxxxxxxxx>, "Alberto Castrillo" 
<castrillo@xxxxxxxxxx>
Cc: "xen-api" <xen-api@xxxxxxxxxxxxx>
Sent: Wednesday, July 24, 2013 10:55:40 PM
Subject: Re: [Xen-API] The vdi is not available

Hi,

When this happens, what does /var/log/SMlog says ?

Can you please tail -f /var/log/SMlog on both source and destination,
try to migrate the VM and paste the results?

Cheers,
Sébastien

On 24.07.2013 23:09, Andres E. Moya wrote:
I also just tried creating a new storage repository. Moving the VDI to the new
storage repository succeeds, but when I then try to migrate the VM to server C I
still have the same issue.

Moya Solutions, Inc.
amoya@xxxxxxxxxxxxxxxxx
0 | 646-918-5238 x 102
F | 646-390-1806

----- Original Message -----
From: "Alberto Castrillo" <castrillo@xxxxxxxxxx>
To: "xen-api" <xen-api@xxxxxxxxxxxxx>
Sent: Wednesday, July 24, 2013 4:12:13 PM
Subject: Re: [Xen-API] The vdi is not available



We use NFS as shared storage, and have faced some "VDI not available" issues
with our VMs. I haven't been able to start a VM with the method from that URL in XCP 1.6
(it worked in 1.1 and 1.5 beta). What worked for me:


- Detach the VDI from the VM
- Detach and forget the SR where the VDI is stored
- Reattach the forgotten SR (create a new SR with the same settings as the
detached SR, re-using the SR UUID, ...)
- Reattach the VDI to the VM (a rough xe sketch follows below)
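
For reference, a rough xe sequence for those steps (placeholder uuids, not the exact commands I ran; double-check the device-config values for your NFS server before running anything):

# detach the VDI from the VM
xe vbd-list vdi-uuid=<vdi-uuid>
xe vbd-unplug uuid=<vbd-uuid>
xe vbd-destroy uuid=<vbd-uuid>

# detach and forget the SR (unplug its PBD on every host first)
xe pbd-list sr-uuid=<sr-uuid>
xe pbd-unplug uuid=<pbd-uuid>
xe sr-forget uuid=<sr-uuid>

# reattach the SR, re-using the same SR uuid
xe sr-introduce uuid=<sr-uuid> type=nfs content-type=user shared=true name-label="NFS SR"
xe pbd-create sr-uuid=<sr-uuid> host-uuid=<host-uuid> device-config:server=<nfs-server> device-config:serverpath=<nfs-path>
xe pbd-plug uuid=<new-pbd-uuid>

# rescan and reattach the VDI to the VM
xe sr-scan uuid=<sr-uuid>
xe vbd-create vm-uuid=<vm-uuid> vdi-uuid=<vdi-uuid> device=0 bootable=true type=Disk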




On 24/07/2013, at 21:10, hook wrote:



Last weekend (as usual O_o) we experienced the issue in our XCP 1.6
production pool.
The shared iSCSI storage was shut down due to misconfigured UPS settings while
the XCP servers continued to run.


When the storage was returned to a working state and reconnected to the pool, most VMs did
not boot, with the same message - VDI is not available.
Googling gave me the method mentioned above - forget and reconnect the VDI.
The result was even worse - the whole SR became unusable.
A storage rescan produced a lot of errors, such as bad LVM headers and many others.


Finally I disconnected the failed SR from the pool and connected it back, and the SR became
healthy (or so it looks). But no VM with a disk on this SR would start; they
froze during startup.
I did not find a solution and restored most VMs from backup (long live VMPP!)


So I just want to say - be very careful with VDIs on a shared storage repository
in a production environment.





2013/7/24 Brian Menges <bmenges@xxxxxxxxxx>


Have you tried the following?
http://community.spiceworks.com/how_to/show/14199-xcp-xen-cloud-platform-xenserver-the-vdi-is-not-available

- Brian Menges
Principal Engineer, DevOps
GoGrid | ServePath | ColoServe | UpStream Networks


-----Original Message-----
From: xen-api-bounces@xxxxxxxxxxxxx [mailto: xen-api-bounces@xxxxxxxxxxxxx ] On 
Behalf Of Andres E. Moya
Sent: Wednesday, July 24, 2013 09:32
To: xen-api@xxxxxxxxxxxxx
Subject: [Xen-API] The vdi is not available

Guys, I need help troubleshooting this issue.

I have an XCP 1.6 pool with 3 machines: A, B, and C.

I can migrate from A to B and B to A

We cannot migrate from A or B to C. We also cannot shut down a VM and start it
up on C; when we do that we get the message "The VDI is not available".

We have tried removing machine C from the pool and rejoining, and we still have
the issue.

When we first add host C to the pool it cannot load the NFS storage repository,
because we need to create a management interface on a bonded VLAN that only gets
created after joining the pool. After we create the interface and run a re-plug
on the storage repository, it says it is connected / re-plugged.
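
For reference, the re-plug we run after creating the interface is roughly this (uuids omitted):

# find the PBD that connects host C to the NFS SR
xe pbd-list sr-uuid=<sr-uuid> host-uuid=<host-C-uuid> params=uuid,currently-attached

# re-plug it
xe pbd-unplug uuid=<pbd-uuid>
xe pbd-plug uuid=<pbd-uuid>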

Thanks for any help in advance


_______________________________________________
Xen-api mailing list
Xen-api@xxxxxxxxxxxxx
http://lists.xen.org/cgi-bin/mailman/listinfo/xen-api

________________________________

The information contained in this message, and any attachments, may contain 
confidential and legally privileged material. It is solely for the use of the 
person or entity to which it is addressed. Any review, retransmission, 
dissemination, or action taken in reliance upon this information by persons or 
entities other than the intended recipient is prohibited. If you receive this 
in error, please contact the sender and delete the material from any computer.










 

