    
Re: [Xen-users] Storage related issues || Cannot export/import/clone etc..
On Mon, Aug 22, 2011 at 22:33, Errol Neal <eneal@xxxxxxxxxxxxxxxxx> wrote:
> Hi. I'm trying to create a storage repository on debian squeeze. I built
> the deb packages using the xapi-autobuilder.
> When creating the SR:
>
> root@squeeze:/run/sr-mount/ef8d1218-5d17-0b80-2aca-dbdcb5054212/xapi-autobuilder/tmp-debs# xe sr-create host-uuid=9e54c94f-f78a-8885-a807-85db5beb0da6 name-label=test device-config:device=/dev/sdb content-type=user type=lvm
>
> The resulting error is:
>
> Error code: SR_BACKEND_FAILURE_53
> Error parameters: , Logical Volume unmount/deactivate error [opterr=errno is 3],
>
> The volume group is created nonetheless, so I'm a bit confused. From my
> Google searches, I got the idea that I could use sr-introduce:
>
> xe sr-introduce...
>
> xe pbd-create...
>
> xe pbd-plug...
>
> But even after that, it's still not usable. I can create a VM, but, for
> example, if I try to export it:
>
> root@squeeze:~# xe vm-export vm=9e69f6d3-cf08-0bcd-2d82-0f52aa567552 filename=9e69f6d3-cf08-0bcd-2d82-0f52aa567552.vhd
>
> I receive an error:
>
> There was an SR backend failure.
> status: non-zero exit
> stdout:
> stderr: Traceback (most recent call last):
>   File "/usr/lib/xen-common/xapi/sm/LVMSR", line 1959, in <module>
>     SRCommand.run(LVHDSR, DRIVER_INFO)
>   File "/usr/lib/xen-common/xapi/sm/SRCommand.py", line 252, in run
>     ret = cmd.run(sr)
>   File "/usr/lib/xen-common/xapi/sm/SRCommand.py", line 94, in run
>     return self._run_locked(sr)
>   File "/usr/lib/xen-common/xapi/sm/SRCommand.py", line 131, in _run_locked
>     return self._run(sr, target)
>   File "/usr/lib/xen-common/xapi/sm/SRCommand.py", line 185, in _run
>     writable, caching_params)
>   File "/usr/lib/xen-common/xapi/sm/blktap2.py", line 1282, in activate
>     writable, caching_params):
>   File "/usr/lib/xen-common/xapi/sm/blktap2.py", line 32, in wrapper
>     ret = op(self, *args)
>   File "/usr/lib/xen-common/xapi/sm/blktap2.py", line 1314, in _activate_locked
>     dev_path = self._activate(sr_uuid, vdi_uuid, writable, caching_params)
>   File "/usr/lib/xen-common/xapi/sm/blktap2.py", line 1334, in _activate
>     dev_path = self._tap_activate(phy_path, vdi_type, sr_uuid, writable)
>   File "/usr/lib/xen-common/xapi/sm/blktap2.py", line 1126, in _tap_activate
>     tapdisk = Tapdisk.find_by_path(phy_path)
>   File "/usr/lib/xen-common/xapi/sm/blktap2.py", line 594, in find_by_path
>     return cls.find(path=path)
>   File "/usr/lib/xen-common/xapi/sm/blktap2.py", line 582, in find
>     found = list(cls.list(**args))
>   File "/usr/lib/xen-common/xapi/sm/blktap2.py", line 557, in list
>     for row in TapCtl.list(**args):
>   File "/usr/lib/xen-common/xapi/sm/blktap2.py", line 67, in loop
>     return f(*__t, **__d)
>   File "/usr/lib/xen-common/xapi/sm/blktap2.py", line 271, in list
>     return list(cls.__list(**args))
>   File "/usr/lib/xen-common/xapi/sm/blktap2.py", line 247, in __list
>     key, val = field.split('=')
> ValueError: need more than 1 value to unpack
>
> Any thoughts on how I can move past these errors?

You may want to try sending an actual plain text email to the list, rather
than the above HTML gibberish in a plain text wrapper ;)

-- 
                 Please keep list traffic on the list.

Rob MacGregor
     Whoever fights monsters should see to it that in the process he
     doesn't become a monster.                  Friedrich Nietzsche
_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users
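For reference, the "xe sr-introduce... / xe pbd-create... / xe pbd-plug..." steps mentioned in the quoted message normally follow the pattern sketched below. This is only a sketch: the UUIDs are placeholders, and the device-config value is assumed to match the failed sr-create; none of the exact arguments come from the thread.

    # Re-register an existing LVM SR by its UUID (placeholder values throughout).
    xe sr-introduce uuid=<sr-uuid> type=lvm name-label=test content-type=user

    # Create a PBD connecting the SR to the host, reusing the same block device.
    xe pbd-create host-uuid=<host-uuid> sr-uuid=<sr-uuid> device-config:device=/dev/sdb

    # Plug the PBD so the SR becomes attached on that host.
    xe pbd-plug uuid=<pbd-uuid>

Even when these three commands succeed, they only re-register the SR with xapi; as the poster notes, they do not by themselves resolve the SR_BACKEND_FAILURE_53 error or the blktap2.py ValueError shown in the traceback.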
 
  