
Re: [RFC PATCH] virtio-mmio: add xenbus probing



On 30/04/2026 06:06, Val Packett wrote:
> 
> On 4/29/26 11:41 AM, Teddy Astie wrote:
>> Hello,
>>
>> On 29/04/2026 16:18, Val Packett wrote:
>>> […]
>>>
>>> I've been working on porting virtio-mmio support from Arm to x86_64,
>>> with the goal of running vhost-user-gpu to power Wayland/GPU integration
>>> for Qubes OS. (I'm aware of various proposals for alternative virtio
>>> transports but virtio-mmio seems to be the only one that *is* upstream
>>> already and just Works..) Setting up virtio-mmio through xenbus,
>>> initially motivated just by event channels being the only real way
>>> to get interrupts working on HVM, turned out to generally be quite
>>> pleasant and nice :)
>> Is it HVM specific, or can we also make it work for PVH (we can actually
>> attach an ioreq server to PVH guests)?
> 
> Sorry, typo, I did mean PVH of course!
> 
> I've been testing this with PVH guests + PV dom0, with my PV
> alloc_ioreq fix:
> https://lore.kernel.org/all/20251126062124.117425-1-val@xxxxxxxxxxxxxxxxxxxxxx/
> 
> (Time to resend that one as a non-RFC I guess…)
> 
> HVM actually does have legacy ISA interrupts (which are often used with 
> virtio-mmio on KVM), funnily enough, and I've tried firing those from a 
> DMOP but that silly thing didn't work properly.
> 
>>> I'd like to get some early feedback for this patch, particularly
>>> the general stuff:
>>>
>>> * is this whole thing acceptable in general?
>>> * should it be extracted into a different file?
>>> * (from the Xen side) any input on the xenstore keys, what goes where?
>>> * anything else to keep in mind?
>>>
>>> It does seem simple enough, so hopefully this can be done?
>>>
>>> The corresponding userspace-side WIP is available at:
>>> https://github.com/QubesOS/xen-vhost-frontend
>>>
>>> And the required DMOP for firing the evtchn events will be sent
>>> to xen-devel shortly as well.
>> Could that be done through evtchn_send (or its userland counterpart)?
> 
> Actually, yes… The use of DMOPs is only dictated by the current Linux 
> privcmd.c code (the irqfds created by the kernel react to events by 
> executing HYPERVISOR_dm_op with a stored operation), we can avoid the 
> need to modify Xen by simply expanding the privcmd driver to make 
> "evtchn fds". Sounds good, will do.
> 

Given that the event channel used by device models is exposed through 
ioreq.vp_eport ("evtchn for notifications to/from device model"), I 
don't think you need to expand the privcmd interface; you should be 
able to do this instead:

open /dev/xen/evtchn
perform IOCTL_EVTCHN_BIND_INTERDOMAIN (for each guest vCPU)
   with remote_domain=guest_domid, remote_port=ioreq.vp_eport

Then interact with the event channel through IOCTL_EVTCHN_NOTIFY (with 
the local port returned by IOCTL_EVTCHN_BIND_INTERDOMAIN), and read 
pending ports from / write unmask requests to the file descriptor.
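The steps above could be sketched roughly like this in C. This is a 
minimal sketch, not tested against Xen: the helper names 
(bind_guest_eport, notify_port, handle_one_event) and the placeholder 
domid/port values are mine, and the struct/ioctl definitions are 
mirrored from my reading of Linux's include/uapi/xen/evtchn.h — real 
code should include <xen/evtchn.h> instead of redefining them.

```c
#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/ioctl.h>

/* Mirrored from Linux's include/uapi/xen/evtchn.h for illustration --
 * in real code, include <xen/evtchn.h> instead of redefining these. */
struct ioctl_evtchn_bind_interdomain {
	unsigned int remote_domain, remote_port;
};
struct ioctl_evtchn_notify {
	unsigned int port;
};
#define IOCTL_EVTCHN_BIND_INTERDOMAIN \
	_IOC(_IOC_NONE, 'E', 1, sizeof(struct ioctl_evtchn_bind_interdomain))
#define IOCTL_EVTCHN_NOTIFY \
	_IOC(_IOC_NONE, 'E', 4, sizeof(struct ioctl_evtchn_notify))

/* Bind a local event channel to the guest's ioreq event channel; the
 * ioctl's return value is the local port (or -1 on error).  Repeat once
 * per guest vCPU, each with its own ioreq.vp_eport. */
static int bind_guest_eport(int fd, unsigned int guest_domid,
			    unsigned int vp_eport)
{
	struct ioctl_evtchn_bind_interdomain bind = {
		.remote_domain = guest_domid,
		.remote_port   = vp_eport,  /* ioreq.vp_eport from Xen */
	};
	return ioctl(fd, IOCTL_EVTCHN_BIND_INTERDOMAIN, &bind);
}

/* Notify (kick) the guest through the bound local port. */
static int notify_port(int fd, unsigned int local_port)
{
	struct ioctl_evtchn_notify notify = { .port = local_port };
	return ioctl(fd, IOCTL_EVTCHN_NOTIFY, &notify);
}

/* Typical event loop body: reads on the fd return pending local ports;
 * writing a port back unmasks it so further events can be delivered. */
static void handle_one_event(int fd)
{
	unsigned int port;
	if (read(fd, &port, sizeof(port)) == (ssize_t)sizeof(port)) {
		/* ... process the ioreq for this vCPU here ... */
		write(fd, &port, sizeof(port));  /* unmask */
		notify_port(fd, port);           /* tell the guest we're done */
	}
}
```

Usage would be open("/dev/xen/evtchn", O_RDWR) once, then 
bind_guest_eport() per vCPU, then handle_one_event() in the select/poll 
loop.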

I have some experimental Rust code for working with event channels [1], 
but I think you can find similar code in multiple places.

[1]
https://github.com/TSnake41/rust-vmm-xen/blob/redesign-proposal/xen/src/event/mod.rs
https://github.com/TSnake41/rust-vmm-xen/blob/redesign-proposal/xen-unix/src/event/mod.rs

>>> [..]
>>>
>>> diff --git a/drivers/virtio/Kconfig b/drivers/virtio/Kconfig
>>> index ce5bc0d9ea28..56bc2b10526b 100644
>>> --- a/drivers/virtio/Kconfig
>>> +++ b/drivers/virtio/Kconfig
>>> @@ -171,6 +171,13 @@ config VIRTIO_MMIO_CMDLINE_DEVICES
>>>         If unsure, say 'N'.
>>> +config VIRTIO_MMIO_XENBUS
>>> +    bool "Memory mapped virtio devices parameter parsing"
>> that text seems to miss the xenbus aspect
> Yep, didn't change that yet, ack
>>> [..]
>> In some way, we're defining a new "PV driver" which is a virtio-mmio
>> one; I guess we could eventually specify some form of protocol that
>> backends/frontends would need to follow?
> 
> Right, Jürgen mentioned documenting the keys in the xenstore-paths doc.. 
> would the entire "protocol" (keys + state transition logic) fit into that?
> 
> The keys are currently derived from the initial Arm prototype which 
> wasn't actually using xenbus properly (the guest driver was configured 
> by a device tree node, but the ioreq server used xenstore keys, without 
> properly transitioning between states).
> 
> 
> Thanks,
> ~val
> 
> 

Teddy


--
Teddy Astie | Vates XCP-ng Developer

XCP-ng & Xen Orchestra - Vates solutions

web: https://vates.tech

 

