[Xen-devel] can't boot from iso on cifs mount
After switching from xm to xl, I found that none of my Windows HVM domUs can boot from an ISO anymore. A domU cannot boot from an ISO that sits on a CIFS mount point; if I move the same ISO from CIFS to a ramdisk, everything works fine.

The CIFS mount:

//cc/public on /var/storage type cifs (rw,relatime,vers=1.0,sec=ntlm,cache=none,unc=\\cc\public,uid=0,noforceuid,gid=0,noforcegid,addr=192.168.25.254,unix,posixpaths,serverino,acl,rsize=1048576,wsize=65536,actimeo=1)

The xl messages contain:

Using file /dev/disk/vbd/21-822 in read-write mode
Strip off blktap sub-type prefix to /var/storage/iso/SW_DVD5_Windows_Svr_DC_EE_SE_Web_2008_R2_64Bit_Russian_w_SP1_MLF_X17-22616_vase.iso (drv 'aio')
Using file /var/storage/iso/SW_DVD5_Windows_Svr_DC_EE_SE_Web_2008_R2_64Bit_Russian_w_SP1_MLF_X17-22616_vase.iso in read-only mode
qemu: could not open vbd '/local/domain/0/backend/qdisk/162/5632/mode' or hard disk image '/var/storage/iso/SW_DVD5_Windows_Svr_DC_EE_SE_Web_2008_R2_64Bit_Russian_w_SP1_MLF_X17-22616_vase.iso' (drv 'aio' format 'raw')

The domain is started with:

xl create /etc/empty \
  -d \
  name="21-10824" \
  kernel="/usr/lib/xen/boot/hvmloader" \
  builder="hvm" \
  memory=768 \
  vcpus=4 \
  vif=["mac=00:16:3e:00:1a:e4,ip=62.76.190.208,type=paravirtualised"] \
  disk=["phy:/dev/disk/vbd/21-822,hda,w", "file:/var/storage/iso/SW_DVD5_Windows_Svr_DC_EE_SE_Web_2008_R2_64Bit_Russian_w_SP1_MLF_X17-22616_vase.iso,hdc:cdrom,r", "file:/var/storage/iso/winpe_amd64.iso,hdb,r,devtype=cdrom"] \
  device_model="qemu-dm" \
  boot="d" \
  vnc=1 \
  vnclisten="0.0.0.0" \
  vncconsole=1 \
  viridian=1 \
  stdvga=1 \
  videoram=16 \
  localtime=1 \
  vncpasswd="XYhTN4A4OU" \
  serial="pty" \
  xen_platform_pci=1 \
  usbdevice="tablet" \
  keymap="en-us" \
  on_restart="destroy" \
  boot="d"

--
Vasiliy Tolstov,
Clodo.ru
e-mail: v.tolstov@xxxxxxxxx
jabber: vase@xxxxxxxxx

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
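[Editorial note: the thread does not confirm the root cause. One hedged guess is that the mount's cache=none option forces direct I/O on the CIFS share, and qemu's 'aio' driver opens the image with O_DIRECT, which some CIFS/SMB1 configurations refuse; this would explain why the same ISO works from a ramdisk. The sketch below is a hypothetical diagnostic, not a confirmed fix: it simply checks whether a given path can be opened with O_DIRECT. The paths are placeholders for the ISO on the CIFS mount and its ramdisk copy.]

```python
import os

def supports_direct_io(path):
    """Try opening `path` with O_DIRECT, roughly as a direct-I/O
    disk backend would. Returns True if the open succeeds, False if
    the filesystem (or a missing file) rejects it with an OSError."""
    try:
        fd = os.open(path, os.O_RDONLY | os.O_DIRECT)
    except OSError:
        return False
    os.close(fd)
    return True

if __name__ == "__main__":
    # Hypothetical paths: compare the CIFS copy with the ramdisk copy.
    for p in ("/var/storage/iso/winpe_amd64.iso",
              "/dev/shm/winpe_amd64.iso"):
        print(p, "direct I/O ok:" , supports_direct_io(p))
```

If the CIFS path reports False while the ramdisk path reports True, that would point at direct I/O support on the share rather than at xl itself, though again this diagnosis is an assumption.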