Re: Revisit: HVM on storage driver domain



On Wed, Jan 5, 2022 at 9:11 AM G.R. <firemeteor@xxxxxxxxxxxxxxxxxxxxx> wrote:
>
>
>
> On Wed, Jan 5, 2022, 01:00 Kuba <kuba.0000@xxxxx> wrote:
>>
>> > Revisiting a failed attempt from 5 years ago, but still no luck :-(
>> >
>> > >> BTW, an irrelevant question:
>> > >> What's the current status of HVM domU on top of storage driver domain?
>> > >> About 7 years ago, one user on the list was able to get this setup up
>> > >> and running with your help (patch).[1]
>> > >> When I attempted to reproduce a similar setup two years later, I
>> > >> discovered that the patch had never been submitted.
>> > >> And even with that patch, the setup could not be reproduced successfully.
>> > >> We spent some time debugging the problem together [2], but didn't
>> > >> get to the root cause at that time.
>> > >> In case it's still broken and you still have the interest and time, I
>> > >> can launch a separate thread on this topic and provide the required
>> > >> testing environment.
>> > >
>> > >  Yes, better as a new thread please.
>> > >
>> > > FWIW, I haven't looked at this in a long time, but I recall some
>> > > fixes were needed to be able to use driver domains with HVM guests,
>> > > which require attaching the disk to dom0 so that the device model
>> > > (QEMU) can access it.
>> > >
>> > > I would give it a try without using stubdomains and see what you get.
>> > > You will need to run `xl devd` inside the driver domain, so you
>> > > will need to install xen-tools on the domU. There's an init script
>> > > to launch `xl devd` at boot; it's called 'xendriverdomain'.
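>> > >
>> > > For example, on the FreeBSD domU that would be roughly (a sketch --
>> > > the rc variable name follows the usual ${name}_enable convention,
>> > > so double-check it against the installed script):
>> > >
>> > >   # sysrc xendriverdomain_enable=YES
>> > >   # service xendriverdomain start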
>> > >
>> > For this test, I did the following:
>> > 1. Downloaded an official FreeBSD 12.2 VM image to use as a domU.
>> > 2. Installed the xen-tools package and enabled the `xl devd` service.
>> > 3. Rebooted the domU with driver_domain=1 and mounted the guest image from the NAS.
>> > 4. Created a new domU with the following config (both fragments are collected below):
>> >      device_model_stubdomain_override=0
>> >      disk =
>> > ['backend=freebsd12,/mnt/vmfs/Windows/ruibox/ruibox.img,raw,xvda,w']
>> > 5. When booting the new domU, I see the following warning, and the firmware
>> > cannot boot from the disk:
>> >        libxl: warning:
>> > libxl_dm.c:1888:libxl__build_device_model_args_new: Domain 17:No way
>> > to get local access disk to image: xvda
>> >        Disk will be available via PV drivers but not as an emulated disk.
>> > 6. When booting with device_model_stubdomain_override=1, it fails differently:
>> >     libxl: error: libxl_dm.c:2332:libxl__spawn_stub_dm: Domain
>> > 79:could not access stubdomain kernel
>> > /usr/lib/xen-4.14/boot/ioemu-stubdom.gz: No such file or directory
>> >     libxl: error: libxl_dm.c:2724:stubdom_pvqemu_cb: Domain 79:error
>> > connecting nics devices: No such file or directory
>> > It looks like the Xen 4.14 package from Debian does not ship the
>> > stubdom feature.
>> > I wonder if it's just a single file that I can grab from somewhere, or
>> > if I need to build Xen from source to get stubdom working?
>> >
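>> > For reference, here are the two config fragments collected together
>> > (a sketch from my setup; the driver domain's name line is an
>> > assumption -- it just has to match the 'backend=' field of the disk
>> > spec):
>> >
>> >      # config of the FreeBSD storage driver domain
>> >      name = "freebsd12"
>> >      driver_domain = 1
>> >
>> >      # config of the HVM guest that uses it as disk backend
>> >      device_model_stubdomain_override = 0
>> >      disk = ['backend=freebsd12,/mnt/vmfs/Windows/ruibox/ruibox.img,raw,xvda,w']
>> >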
>> > Anything obviously wrong in my setup, or any suggestions for diagnosis, Roger?
>> >
>> > Thanks,
>> > G.R.
>> >
>> > > Thanks, Roger.
>> > >
>> > >> [1]
>> > https://lists.xenproject.org/archives/html/xen-users/2014-08/msg00003.html
>> > >> [2]
>> > https://xen-users.narkive.com/9ihP0QG4/hvm-domu-on-storage-driver-domain
>>
>> Hi,
>>
>> I remember using Roger's patch quite successfully not that long ago with
>> some 4.10+ Xen (it might even have been 4.14, but I am not sure).
>>
>> It requires compiling Xen from source, but as a result you get a deb
>> package. After that, all you need is a config like the one in the [1]
>> link above, and it "just works".
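>>
>> If it helps, my build went roughly like this (from memory, so treat it
>> as a sketch; --enable-stubdom may already be enabled by default on
>> your platform):
>>
>>   git clone https://xenbits.xen.org/git-http/xen.git && cd xen
>>   git checkout RELEASE-4.14.3   # or whichever release tag you need
>>   # apply Roger's patch on top
>>   ./configure --enable-stubdom
>>   make -j$(nproc) debball       # the .deb package lands under dist/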
>
> Hi Kuba,
>
> Thanks for the info.
> It's surprising that the same patch from 7 years ago is still required
> and still works. I can give it a try, since I have experience building
> Xen from source. I switched to the distro package only because the need
> for custom builds seemed to be gone...
>
> To double-check: the config you are talking about has stubdom enabled,
> right? I suppose upstream QEMU is good enough?
I rebuilt Xen 4.14.3 from source with Roger's patch and retried with a
stubdomain and qemu-traditional.
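For reference, the stubdom-related lines in new_inst.cfg look roughly
like this (a sketch -- the disk line is unchanged from my earlier
attempt, and everything else is left at defaults):

    device_model_stubdomain_override = 1
    device_model_version = "qemu-xen-traditional"
    disk = ['backend=freebsd12,/mnt/vmfs/Windows/ruibox/ruibox.img,raw,xvda,w']

Now I see a different failure log: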

Parsing config from /etc/xen/domains/new_inst.cfg
libxl: error: libxl_dm.c:2773:stubdom_xswait_cb: Domain 11:Stubdom 12
for 11 startup: startup timed out
libxl: error: libxl_create.c:1832:domcreate_devmodel_started: Domain
11:device model did not start: -9
libxl: error: libxl_device.c:1103:device_backend_callback: Domain
12:unable to remove device with path
/local/domain/5/backend/vbd/12/51712
libxl: error: libxl_domain.c:1529:devices_destroy_cb: Domain
12:libxl__devices_destroy failed
libxl: error: libxl_domain.c:1182:libxl__destroy_domid: Domain
11:Non-existant domain
libxl: error: libxl_domain.c:1136:domain_destroy_callback: Domain
11:Unable to destroy guest
libxl: error: libxl_domain.c:1063:domain_destroy_cb: Domain
11:Destruction of domain failed

The stubdom fails to start up, and according to the log the failure
probably relates to the disk -- there is no obvious error message, but
the last few lines of the stubdom console output are disk-related...

xs_daemon_open -> 4, 0x14d648
Using xvda for guest's hda
******************* BLKFRONT for /local/domain/12/device/vbd/51712 **********


backend at /local/domain/5/backend/vbd/12/51712

No luck, so I think I'll need to do some debugging from here.
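In case it helps, this is what I plan to poke at next while the stubdom
hangs in startup (paths guessed from the 4.14 defaults on my system, so
they may need adjusting):

    # stubdom console output, if xenconsoled logging is enabled
    xl console -t pv 12
    cat /var/log/xen/console/guest-*-dm.log

    # state of the vbd backend in the driver domain
    xenstore-ls /local/domain/5/backend/vbd/12
    xenstore-read /local/domain/5/backend/vbd/12/51712/state

Any suggestions, Roger?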

>
>> I don't guarantee anything, but I think it is worth a try.
>>
>> Regards,
>> Kuba
>>



 

