
Re: [RFC PATCH] virtio-mmio: add xenbus probing



On 30/04/2026 at 10:51, Val Packett wrote:
> 
> On 4/30/26 5:11 AM, Teddy Astie wrote:
>> On 30/04/2026 at 06:06, Val Packett wrote:
>>> On 4/29/26 11:41 AM, Teddy Astie wrote:
>>>> Hello,
>>>>
>>>> On 29/04/2026 at 16:18, Val Packett wrote:
>>>>> […]
>>>>>
>>>>> I've been working on porting virtio-mmio support from Arm to x86_64,
>>>>> with the goal of running vhost-user-gpu to power Wayland/GPU integration
>>>>> for Qubes OS. (I'm aware of various proposals for alternative virtio
>>>>> transports but virtio-mmio seems to be the only one that *is* upstream
>>>>> already and just Works…) Setting up virtio-mmio through xenbus, initially
>>>>> motivated just by event channels being the only real way to get interrupts
>>>>> working on HVM, turned out to generally be quite pleasant and nice :)
>>>> Is it HVM-specific, or can we also make it work for PVH (we can
>>>> actually attach an ioreq server to PVH guests)?
>>> Sorry, typo, I did mean PVH of course!
>>>
>>> I've been testing this with PVH guests + PV dom0, with my PV alloc_ioreq
>>> fix:
>>> https://lore.kernel.org/all/20251126062124.117425-1-val@xxxxxxxxxxxxxxxxxxxxxx/
>>>
>>> (Time to resend that one as a non-RFC I guess…)
>>>
>>> HVM actually does have legacy ISA interrupts (which are often used with
>>> virtio-mmio on KVM), funnily enough, and I've tried firing those from a
>>> DMOP but that silly thing didn't work properly.
>>>
>>>>> I'd like to get some early feedback for this patch, particularly
>>>>> the general stuff:
>>>>>
>>>>> * is this whole thing acceptable in general?
>>>>> * should it be extracted into a different file?
>>>>> * (from the Xen side) any input on the xenstore keys, what goes where?
>>>>> * anything else to keep in mind?
>>>>>
>>>>> It does seem simple enough, so hopefully this can be done?
>>>>>
>>>>> The corresponding userspace-side WIP is available at:
>>>>> https://github.com/QubesOS/xen-vhost-frontend
>>>>>
>>>>> And the required DMOP for firing the evtchn events will be sent
>>>>> to xen-devel shortly as well.
>>>> Could that be done through evtchn_send (or its userland counterpart) ?
>>> Actually, yes… The use of DMOPs is only dictated by the current Linux
>>> privcmd.c code (the irqfds created by the kernel react to events by
>>> executing HYPERVISOR_dm_op with a stored operation); we can avoid the
>>> need to modify Xen by simply expanding the privcmd driver to make
>>> "evtchn fds". Sounds good, will do.
>>>
>> Given that the event channel used by device models is exposed through
>> ioreq.vp_eport ("evtchn for notifications to/from device model"), I
>> don't think you need to expand the privcmd interface; you should be
>> able to do this instead:
>>
>> open /dev/xen/evtchn
>> perform IOCTL_EVTCHN_BIND_INTERDOMAIN (for each guest vCPU)
>>     with remote_domain=guest_domid, remote_port=ioreq.vp_eport
>>
>> Then interact with the event channel through IOCTL_EVTCHN_NOTIFY (with
>> local port given by IOCTL_EVTCHN_BIND_INTERDOMAIN) and read/write on the
>> file descriptor.
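>>
>> Roughly, something like this (untested sketch; guest_domid and vp_eport
>> below stand in for the values you already get from the ioreq server
>> setup):
>>
>>   /* Minimal sketch using the Linux /dev/xen/evtchn UAPI. */
>>   #include <fcntl.h>
>>   #include <sys/ioctl.h>
>>   #include <unistd.h>
>>   #include <xen/evtchn.h>
>>
>>   static int notify_and_wait(unsigned int guest_domid, unsigned int vp_eport)
>>   {
>>       int fd = open("/dev/xen/evtchn", O_RDWR);
>>       if (fd < 0)
>>           return -1;
>>
>>       struct ioctl_evtchn_bind_interdomain bind = {
>>           .remote_domain = guest_domid,
>>           .remote_port   = vp_eport,
>>       };
>>       /* The ioctl returns the local port number on success. */
>>       int local_port = ioctl(fd, IOCTL_EVTCHN_BIND_INTERDOMAIN, &bind);
>>       if (local_port < 0) {
>>           close(fd);
>>           return -1;
>>       }
>>
>>       /* Notify the guest end of the channel... */
>>       struct ioctl_evtchn_notify notify = { .port = local_port };
>>       ioctl(fd, IOCTL_EVTCHN_NOTIFY, &notify);
>>
>>       /* ...and wait for one event: read() returns fired local ports,
>>        * writing a port back unmasks it again. A real backend would of
>>        * course keep fd and local_port around instead of closing here. */
>>       unsigned int port;
>>       if (read(fd, &port, sizeof(port)) == sizeof(port))
>>           write(fd, &port, sizeof(port));
>>
>>       close(fd);
>>       return 0;
>>   }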
> 
> So the reason there's currently an ioctl to bind an eventfd to fire a 
> stored DMOP is that the whole idea is to (efficiently!) support generic, 
> hypervisor-neutral device server implementations via the vhost-user 
> protocol.
> 
> Now of course, the current implementation isn't *entirely*
> hypervisor-neutral as e.g. the vm-memory Rust crate (inside of the
> "neutral" vhost-user device servers) does need to be built with the
> `xen` feature. But still, that's how it works. What can be made generic
> is generic.
> 
> xen-vhost-frontend, which is the thing that integrates these with Xen, 
> actually used to handle the interrupts in userspace[1] by firing the 
> DMOP itself (which is where I could "just replace that with 
> IOCTL_EVTCHN_NOTIFY") but that was offloaded to the kernel with the 
> introduction of IOCTL_PRIVCMD_IRQFD[2], similarly to KVM_IRQFD.
> 

I think what would be preferable for your use case would be a way to
bind an event channel to an eventfd object, as a primitive that lives in
the evtchn device.
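
Nothing like that exists today, so purely as an illustration (the ioctl
name, number and struct below are made up, not an existing UAPI), it
could look roughly like this from userspace:

  /* Hypothetical evtchn-device ioctl tying a local event channel port to
   * an eventfd; nothing like this is in the kernel yet. */
  #include <sys/eventfd.h>
  #include <sys/ioctl.h>
  #include <unistd.h>

  struct ioctl_evtchn_bind_eventfd {       /* hypothetical */
      unsigned int port;     /* local port, e.g. from BIND_INTERDOMAIN */
      int eventfd;           /* eventfd to signal when the port fires */
  };
  #define IOCTL_EVTCHN_BIND_EVENTFD \
      _IOW('E', 9, struct ioctl_evtchn_bind_eventfd)   /* hypothetical */

  static int bind_port_to_eventfd(int evtchn_fd, unsigned int local_port)
  {
      int efd = eventfd(0, EFD_CLOEXEC);
      if (efd < 0)
          return -1;

      struct ioctl_evtchn_bind_eventfd bind = {
          .port    = local_port,
          .eventfd = efd,
      };
      if (ioctl(evtchn_fd, IOCTL_EVTCHN_BIND_EVENTFD, &bind) < 0) {
          close(efd);
          return -1;
      }
      return efd;
  }

The returned eventfd could then be handed straight to the generic
vhost-user code as its kick fd, keeping the Xen-specific plumbing in the
frontend only.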

The current interface kind of assumes that you're implementing a
completely emulated virtio device with no Xen specifics, which doesn't
look like exactly what you're doing.

As you actually plan to switch to using event channels for notifying the 
guest, I think it would be preferable to do the same the other way 
(event channels to notify the host) so you only have event channels to 
worry about here.
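
For the guest-to-host direction, that could be as simple as the backend
allocating an unbound port for the guest and advertising it over
xenstore (the guest allocating and the backend binding would work just
as well). A rough sketch of that side (the xenstore path below is only a
placeholder; the actual key layout is one of the open questions of the
RFC):

  /* Sketch: allocate an unbound event channel the guest may bind to and
   * publish its port in xenstore (placeholder path, not a settled ABI). */
  #include <stdbool.h>
  #include <stdio.h>
  #include <string.h>
  #include <sys/ioctl.h>
  #include <xen/evtchn.h>
  #include <xenstore.h>

  static int advertise_notify_port(int evtchn_fd, unsigned int guest_domid)
  {
      /* The ioctl returns our local port; guest notifications on it will
       * show up as read()s on evtchn_fd. */
      struct ioctl_evtchn_bind_unbound_port op = { .remote_domain = guest_domid };
      int local_port = ioctl(evtchn_fd, IOCTL_EVTCHN_BIND_UNBOUND_PORT, &op);
      if (local_port < 0)
          return -1;

      struct xs_handle *xs = xs_open(0);
      if (!xs)
          return -1;

      char path[80], val[16];
      snprintf(path, sizeof(path),
               "/local/domain/%u/device/virtio-mmio/0/evtchn", guest_domid);
      snprintf(val, sizeof(val), "%d", local_port);
      bool ok = xs_write(xs, XBT_NULL, path, val, strlen(val));
      xs_close(xs);

      return ok ? local_port : -1;
  }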

> Switching back to handling the eventfd in userspace would be a literal
> deoptimization :)
> 
> While throwing away the whole generic layer to do a fully integrated
> use-case-specific thing sounds more difficult/tedious than this, and not
> necessarily desirable in general.
> 
> [1]: https://github.com/vireshk/xen-vhost-frontend/commit/06d59035f8a387c0f600931d09dfaa27b80ede7f
> [2]: https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git/commit/?id=f8941e6c4c712948663ec5d7bbb546f1a0f4e3f6
> 
> ~val
> 
> 



--
Teddy Astie | Vates XCP-ng Developer

XCP-ng & Xen Orchestra - Vates solutions

web: https://vates.tech

 

