
[Xen-users] Storage driver domain for HVM guest



Hello,

I am running Xen 4.6.0, with Ubuntu 14.04 as my Domain-0.

I am trying to set up a storage driver domain to serve disks to a
variety of DomUs (for now, all guest OSes are also Ubuntu 14.04). I
followed the instructions on the following Wiki page:

> http://wiki.xenproject.org/wiki/Storage_driver_domains

This works fine for a PV guest. After adding "backend=StorageDom" in
the disk spec, the guest is able to boot from a disk provided by the
storage driver domain.
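
For reference, the PV guest's disk line has the same shape as the one
in the HVM config further below, roughly:

> disk = [ "format=raw,vdev=xvda,access=rw,backend=StorageDom,target=/dev/sdb" ]

with backend=StorageDom being the only addition compared to a disk
served by Domain-0.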

I am wondering if it is also possible for a storage driver domain to
provide disks to HVM (or PVHVM) guests. At the bottom of the Wiki
page, there is a note indicating that it isn't possible (yet?):

> It is not possible to use driver domains with pygrub or HVM guests yet, so it 
> will only work with PV guests that have the kernel in Dom0.

Some time has passed since that page was last modified, so I decided
to try it. Here is the config file of my PVHVM guest:

> name = "ClientDom"
> memory = 1024
> maxmem = 1024
> vcpus = 2
> maxvcpus = 2
> vnc = 1
> vnclisten = "0.0.0.0"
> vncdisplay = 1
> builder = "hvm"
> xen_platform_pci = 1
> device_model_version = "qemu-xen"
> disk = [ "format=raw,vdev=xvda,access=rw,backend=StorageDom,target=/dev/sdb" ]
> boot = "c"

The disk target /dev/sdb is relative to the storage driver domain and
lives on a PCI device passed through to that domain. The disk has
three partitions: /boot (bootable flag set), / (root), and swap.
However, after creating the guest, the BIOS (SeaBIOS for qemu-xen; I
also tried qemu-xen-traditional with similar results) complains that
no bootable device is found:

> Booting from Hard Disk...
> Boot failed: not a bootable disk
> [ .... ]
> No bootable device. Retrying in 60 seconds.
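
(The layout can be double-checked from within StorageDom with
something like

> sudo fdisk -l /dev/sdb
> sudo parted /dev/sdb print

which should show the three partitions described above, with the boot
flag set on the first, so the disk itself ought to be bootable.)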

In xenstore, the disk backend and frontend appear to be set up
correctly:

> user@ubuntu ~> $ sudo xenstore-ls /local/domain/1/backend
> vbd = ""
>  2 = ""
>   51712 = ""
>    frontend = "/local/domain/2/device/vbd/51712"
>    params = "/dev/sdb"
>    script = "/etc/xen/scripts/block"
>    frontend-id = "2"
>    online = "1"
>    removable = "0"
>    bootable = "1"
>    state = "2"
>    dev = "xvda"
>    type = "phy"
>    mode = "w"
>    device-type = "disk"
>    discard-enable = "1"
>    physical-device = "8:10"
>    hotplug-status = "connected"

> user@ubuntu ~> $ sudo xenstore-ls /local/domain/2/device
> suspend = ""
>  event-channel = ""
> vbd = ""
>  51712 = ""
>   backend = "/local/domain/1/backend/vbd/2/51712"
>   backend-id = "1"
>   state = "1"
>   virtual-device = "51712"
>   device-type = "disk"
> vkbd = ""
>  0 = ""
>   backend = "/local/domain/0/backend/vkbd/2/0"
>   backend-id = "0"
>   state = "1"

Here is the verbose output of the "xl create" command:

> user@ubuntu ~> $ sudo xl -vvv create /etc/xen/ClientDom.cfg
> Parsing config from /etc/xen/ClientDom.cfg
> libxl: debug: libxl_create.c:1568:do_domain_create: ao 0x1df8230: create: 
> how=(nil) callback=(nil) poller=0x1dee440
> libxl: debug: libxl_device.c:269:libxl__device_disk_set_backend: Disk 
> vdev=xvda spec.backend=unknown
> libxl: debug: libxl_device.c:201:disk_try_backend: Disk vdev=xvda, is using a 
> storage driver domain, skipping physical device check
> libxl: debug: libxl_device.c:298:libxl__device_disk_set_backend: Disk 
> vdev=xvda, using backend phy
> libxl: debug: libxl_create.c:954:initiate_domain_create: running bootloader
> libxl: debug: libxl_bootloader.c:324:libxl__bootloader_run: not a PV domain, 
> skipping bootloader
> libxl: debug: libxl_event.c:691:libxl__ev_xswatch_deregister: watch 
> w=0x1deede8: deregister unregistered
> libxl: debug: libxl_numa.c:483:libxl__get_numa_candidate: New best NUMA 
> placement candidate found: nr_nodes=1, nr_cpus=24, nr_vcpus=4, 
> free_memkb=255993
> libxl: debug: libxl_numa.c:483:libxl__get_numa_candidate: New best NUMA 
> placement candidate found: nr_nodes=1, nr_cpus=24, nr_vcpus=4, 
> free_memkb=256242
> libxl: detail: libxl_dom.c:181:numa_place_domain: NUMA placement candidate 
> with 1 nodes, 24 cpus and 256242 KB free selected
> xc: detail: elf_parse_binary: phdr: paddr=0x100000 memsz=0xbad04
> xc: detail: elf_parse_binary: memory: 0x100000 -> 0x1bad04
> xc: detail: VIRTUAL MEMORY ARRANGEMENT:
> xc: detail:   Loader:   0000000000100000->00000000001bad04
> xc: detail:   Modules:  0000000000000000->0000000000000000
> xc: detail:   TOTAL:    0000000000000000->000000003f800000
> xc: detail:   ENTRY:    0000000000100610
> xc: detail: PHYSICAL MEMORY ALLOCATION:
> xc: detail:   4KB PAGES: 0x0000000000000200
> xc: detail:   2MB PAGES: 0x00000000000001fb
> xc: detail:   1GB PAGES: 0x0000000000000000
> xc: detail: elf_load_binary: phdr 0 at 0x7fc11e76e000 -> 0x7fc11e81f151
> domainbuilder: detail: xc_dom_gnttab_hvm_seed: called, pfn=0xff000
> libxl: debug: libxl_device.c:269:libxl__device_disk_set_backend: Disk 
> vdev=xvda spec.backend=phy
> libxl: debug: libxl_device.c:201:disk_try_backend: Disk vdev=xvda, is using a 
> storage driver domain, skipping physical device check
> libxl: debug: libxl_event.c:639:libxl__ev_xswatch_register: watch w=0x1df0880 
> wpath=/local/domain/1/backend/vbd/2/51712/state token=3/0: register slotnum=3
> libxl: debug: libxl_create.c:1591:do_domain_create: ao 0x1df8230: inprogress: 
> poller=0x1dee440, flags=i
> libxl: debug: libxl_event.c:576:watchfd_callback: watch w=0x1df0880 
> wpath=/local/domain/1/backend/vbd/2/51712/state token=3/0: event 
> epath=/local/domain/1/backend/vbd/2/51712/state
> libxl: debug: libxl_event.c:884:devstate_callback: backend 
> /local/domain/1/backend/vbd/2/51712/state wanted state 2 still waiting state 1
> libxl: debug: libxl_event.c:576:watchfd_callback: watch w=0x1df0880 
> wpath=/local/domain/1/backend/vbd/2/51712/state token=3/0: event 
> epath=/local/domain/1/backend/vbd/2/51712/state
> libxl: debug: libxl_event.c:880:devstate_callback: backend 
> /local/domain/1/backend/vbd/2/51712/state wanted state 2 ok
> libxl: debug: libxl_event.c:677:libxl__ev_xswatch_deregister: watch 
> w=0x1df0880 wpath=/local/domain/1/backend/vbd/2/51712/state token=3/0: 
> deregister slotnum=3
> libxl: debug: libxl_device.c:937:device_backend_callback: calling 
> device_backend_cleanup
> libxl: debug: libxl_event.c:691:libxl__ev_xswatch_deregister: watch 
> w=0x1df0880: deregister unregistered
> libxl: debug: libxl_device.c:992:device_hotplug: Backend domid 1, domid 0, 
> assuming driver domains
> libxl: debug: libxl_device.c:995:device_hotplug: Not a remove, not executing 
> hotplug scripts
> libxl: debug: libxl_event.c:691:libxl__ev_xswatch_deregister: watch 
> w=0x1df0980: deregister unregistered
> libxl: debug: libxl_dm.c:1778:libxl__spawn_local_dm: Spawning device-model 
> /usr/local/lib/xen/bin/qemu-system-i386 with arguments:
> libxl: debug: libxl_dm.c:1780:libxl__spawn_local_dm:   
> /usr/local/lib/xen/bin/qemu-system-i386
> libxl: debug: libxl_dm.c:1780:libxl__spawn_local_dm:   -xen-domid
> libxl: debug: libxl_dm.c:1780:libxl__spawn_local_dm:   2
> libxl: debug: libxl_dm.c:1780:libxl__spawn_local_dm:   -chardev
> libxl: debug: libxl_dm.c:1780:libxl__spawn_local_dm:   
> socket,id=libxl-cmd,path=/var/run/xen/qmp-libxl-2,server,nowait
> libxl: debug: libxl_dm.c:1780:libxl__spawn_local_dm:   -no-shutdown
> libxl: debug: libxl_dm.c:1780:libxl__spawn_local_dm:   -mon
> libxl: debug: libxl_dm.c:1780:libxl__spawn_local_dm:   
> chardev=libxl-cmd,mode=control
> libxl: debug: libxl_dm.c:1780:libxl__spawn_local_dm:   -chardev
> libxl: debug: libxl_dm.c:1780:libxl__spawn_local_dm:   
> socket,id=libxenstat-cmd,path=/var/run/xen/qmp-libxenstat-2,server,nowait
> libxl: debug: libxl_dm.c:1780:libxl__spawn_local_dm:   -mon
> libxl: debug: libxl_dm.c:1780:libxl__spawn_local_dm:   
> chardev=libxenstat-cmd,mode=control
> libxl: debug: libxl_dm.c:1780:libxl__spawn_local_dm:   -nodefaults
> libxl: debug: libxl_dm.c:1780:libxl__spawn_local_dm:   -name
> libxl: debug: libxl_dm.c:1780:libxl__spawn_local_dm:   ClientDom
> libxl: debug: libxl_dm.c:1780:libxl__spawn_local_dm:   -vnc
> libxl: debug: libxl_dm.c:1780:libxl__spawn_local_dm:   0.0.0.0:1,to=99
> libxl: debug: libxl_dm.c:1780:libxl__spawn_local_dm:   -display
> libxl: debug: libxl_dm.c:1780:libxl__spawn_local_dm:   none
> libxl: debug: libxl_dm.c:1780:libxl__spawn_local_dm:   -device
> libxl: debug: libxl_dm.c:1780:libxl__spawn_local_dm:   cirrus-vga,vgamem_mb=8
> libxl: debug: libxl_dm.c:1780:libxl__spawn_local_dm:   -boot
> libxl: debug: libxl_dm.c:1780:libxl__spawn_local_dm:   order=c
> libxl: debug: libxl_dm.c:1780:libxl__spawn_local_dm:   -smp
> libxl: debug: libxl_dm.c:1780:libxl__spawn_local_dm:   2,maxcpus=2
> libxl: debug: libxl_dm.c:1780:libxl__spawn_local_dm:   -net
> libxl: debug: libxl_dm.c:1780:libxl__spawn_local_dm:   none
> libxl: debug: libxl_dm.c:1780:libxl__spawn_local_dm:   -machine
> libxl: debug: libxl_dm.c:1780:libxl__spawn_local_dm:   xenfv
> libxl: debug: libxl_dm.c:1780:libxl__spawn_local_dm:   -m
> libxl: debug: libxl_dm.c:1780:libxl__spawn_local_dm:   1016
> libxl: debug: libxl_dm.c:1780:libxl__spawn_local_dm:   -drive
> libxl: debug: libxl_dm.c:1780:libxl__spawn_local_dm:   
> file=/dev/sdb,if=ide,index=0,media=disk,format=raw,cache=writeback
> [ .... ]

I'm not sure what QEMU's role is here, but that last "-drive"
argument ("file=/dev/sdb,...") makes no mention of a backend domain,
so I suspect QEMU is opening /dev/sdb relative to Domain-0 instead of
the storage driver domain. To test this theory, I created a partition
table on Domain-0's /dev/sdb with a bootable partition at /dev/sdb1.
Indeed, after creating the guest, I now get the GRUB2 menu that was
installed on Domain-0's /dev/sdb (despite StorageDom still being
configured as the backend). Upon selecting a boot entry, the guest OS
begins to boot but drops into a BusyBox (initramfs) shell because it
can't find the boot/root/swap partitions: Domain-0's /dev/sdb
partitions and StorageDom's /dev/sdb partitions have different UUIDs.
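
One way to confirm this, as a sketch rather than something taken from
the logs above, would be to look at what the device model actually
has open in Domain-0:

> # find the PID of the qemu device model serving the guest
> ps aux | grep qemu-system-i386
> # then list its open file descriptors (<qemu-pid> is a placeholder)
> sudo ls -l /proc/<qemu-pid>/fd | grep sdb

If QEMU is holding Domain-0's /dev/sdb open, that would explain why
SeaBIOS and GRUB see Domain-0's copy of the disk.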

So, in short, it seems like the BIOS (reading the disk through QEMU's
emulated IDE device) sees a disk that exists on Domain-0, and then
once the guest kernel and its PV block frontend take over, they see
the disk provided by the storage driver domain.
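
Another check (a suggestion, not output I have included here): "xl
block-list" in Domain-0 lists each PV vbd together with its backend
domain id, which, judging from the xenstore dump above, should be 1,
i.e. StorageDom:

> sudo xl block-list ClientDom

So the PV path does appear to be wired to the driver domain; it is
only the emulated path that seems to bypass it.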

If anyone has experience getting a storage driver domain to serve
disks that HVM/PVHVM guests can boot from, can spot any mistakes I've
made, or can tell me this isn't possible yet, I'd be very grateful
for advice.

Thanks,
Alex

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxx
http://lists.xen.org/xen-users


 

