
Re: [RFC PATCH] virtio-mmio: add xenbus probing


  • To: Teddy Astie <teddy.astie@xxxxxxxxxx>, "Michael S. Tsirkin" <mst@xxxxxxxxxx>, Jason Wang <jasowang@xxxxxxxxxx>, Xuan Zhuo <xuanzhuo@xxxxxxxxxxxxxxxxx>, Eugenio Pérez <eperezma@xxxxxxxxxx>
  • From: Val Packett <val@xxxxxxxxxxxxxxxxxxxxxx>
  • Date: Thu, 30 Apr 2026 05:48:59 -0300
  • Cc: Marek Marczykowski-Górecki <marmarek@xxxxxxxxxxxxxxxxxxxxxx>, Viresh Kumar <viresh.kumar@xxxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxx, linux-kernel@xxxxxxxxxxxxxxx, virtualization@xxxxxxxxxxxxxxx
  • Delivery-date: Thu, 30 Apr 2026 08:49:14 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>


On 4/30/26 5:11 AM, Teddy Astie wrote:
> On 30/04/2026 at 06:06, Val Packett wrote:
>> On 4/29/26 11:41 AM, Teddy Astie wrote:
>>> Hello,
>>>
>>> On 29/04/2026 at 16:18, Val Packett wrote:
>>>> […]
>>>>
>>>> I've been working on porting virtio-mmio support from Arm to x86_64,
>>>> with the goal of running vhost-user-gpu to power Wayland/GPU
>>>> integration for Qubes OS. (I'm aware of various proposals for
>>>> alternative virtio transports, but virtio-mmio seems to be the only
>>>> one that *is* upstream already and just Works…) Setting up
>>>> virtio-mmio through xenbus, initially motivated just by event
>>>> channels being the only real way to get interrupts working on HVM,
>>>> turned out to be generally quite pleasant and nice :)
>>> Is it HVM specific, or can we also make it work for PVH (we can
>>> actually attach an ioreq server to PVH guests)?
>> Sorry, typo, I did mean PVH of course!
>>
>> I've been testing this with PVH guests + PV dom0, with my PV
>> alloc_ioreq fix:
>> https://lore.kernel.org/all/20251126062124.117425-1-val@xxxxxxxxxxxxxxxxxxxxxx/
>>
>> (Time to resend that one as a non-RFC I guess…)
>>
>> HVM actually does have legacy ISA interrupts (which are often used
>> with virtio-mmio on KVM), funnily enough, and I've tried firing those
>> from a DMOP, but that silly thing didn't work properly.

>>>> I'd like to get some early feedback for this patch, particularly
>>>> the general stuff:
>>>>
>>>> * is this whole thing acceptable in general?
>>>> * should it be extracted into a different file?
>>>> * (from the Xen side) any input on the xenstore keys, what goes where?
>>>> * anything else to keep in mind?

>>> It does seem simple enough, so hopefully this can be done?

>>>> The corresponding userspace-side WIP is available at:
>>>> https://github.com/QubesOS/xen-vhost-frontend
>>>>
>>>> And the required DMOP for firing the evtchn events will be sent
>>>> to xen-devel shortly as well.
>>> Could that be done through evtchn_send (or its userland counterpart)?
>> Actually, yes… The use of DMOPs is only dictated by the current Linux
>> privcmd.c code (the irqfds created by the kernel react to events by
>> executing HYPERVISOR_dm_op with a stored operation); we can avoid the
>> need to modify Xen by simply expanding the privcmd driver to make
>> "evtchn fds". Sounds good, will do.

> Given that the event channel used by device models is exposed through
> ioreq.vp_eport ("evtchn for notifications to/from device model"), I
> don't think you need to expand the privcmd interface; you should be
> able to do this instead:
>
> open /dev/xen/evtchn
> perform IOCTL_EVTCHN_BIND_INTERDOMAIN (for each guest vCPU)
>     with remote_domain=guest_domid, remote_port=ioreq.vp_eport
>
> Then interact with the event channel through IOCTL_EVTCHN_NOTIFY (with
> the local port returned by IOCTL_EVTCHN_BIND_INTERDOMAIN) and
> read/write on the file descriptor.
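(For concreteness, that flow in C against the Linux /dev/xen/evtchn
uapi; an untested sketch with error handling trimmed, where guest_domid
and vp_eport stand in for values the device model already has from
ioreq server setup.)

#include <fcntl.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <xen/evtchn.h>  /* Linux uapi ioctl_evtchn_* definitions */

/* Bind to the guest's ioreq event channel; returns the evtchn fd. */
int bind_ioreq_evtchn(unsigned int guest_domid, unsigned int vp_eport)
{
    int fd = open("/dev/xen/evtchn", O_RDWR);
    if (fd < 0)
        return -1;

    struct ioctl_evtchn_bind_interdomain bind = {
        .remote_domain = guest_domid,
        .remote_port   = vp_eport,
    };
    /* On success, the ioctl returns the newly bound local port. */
    int local_port = ioctl(fd, IOCTL_EVTCHN_BIND_INTERDOMAIN, &bind);
    if (local_port < 0) {
        close(fd);
        return -1;
    }

    /* Notify the remote end through the bound channel... */
    struct ioctl_evtchn_notify notify = { .port = local_port };
    ioctl(fd, IOCTL_EVTCHN_NOTIFY, &notify);

    /* ...and receive events: read() yields pending local port numbers,
       and writing a port number back unmasks that port again. */
    unsigned int port;
    if (read(fd, &port, sizeof(port)) == sizeof(port))
        write(fd, &port, sizeof(port));

    return fd;
}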

So the reason there's currently an ioctl to bind an eventfd to fire a stored DMOP is that the whole idea is to (efficiently!) support generic, hypervisor-neutral device server implementations via the vhost-user protocol.

Now of course, the current implementation isn't *entirely* hypervisor-neutral as e.g. the vm-memory Rust crate (inside of the "neutral" vhost-user device servers) does need to be built with the `xen` feature. But still, that's how it works. What can be made generic is generic.

xen-vhost-frontend, which is the thing that integrates these with Xen, actually used to handle the interrupts in userspace[1] by firing the DMOP itself (which is where I could "just replace that with IOCTL_EVTCHN_NOTIFY") but that was offloaded to the kernel with the introduction of IOCTL_PRIVCMD_IRQFD[2], similarly to KVM_IRQFD.
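(Roughly what that registration looks like, as a sketch rather than the
actual xen-vhost-frontend code; it assumes the current uapi where
privcmd_irqfd.dm_op carries the operation buffer as a u64, and
dm_op_buf is a caller-built xen_dm_op that must stay alive while the
irqfd is assigned.)

#include <stdint.h>
#include <sys/eventfd.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <xen/privcmd.h>  /* struct privcmd_irqfd, IOCTL_PRIVCMD_IRQFD */

/* Register an eventfd with privcmd along with a stored dm_op; every
   signal on the eventfd then makes the kernel fire that dm_op. */
int register_irqfd(int privcmd_fd, uint16_t guest_domid,
                   void *dm_op_buf, uint32_t dm_op_size)
{
    int efd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
    if (efd < 0)
        return -1;

    struct privcmd_irqfd irqfd = {
        .dm_op = (uint64_t)(uintptr_t)dm_op_buf,
        .size  = dm_op_size,
        .fd    = efd,
        .flags = 0,  /* PRIVCMD_IRQFD_FLAG_DEASSIGN tears it down again */
        .dom   = guest_domid,
    };
    if (ioctl(privcmd_fd, IOCTL_PRIVCMD_IRQFD, &irqfd) < 0) {
        close(efd);
        return -1;
    }

    /* efd is handed to the vhost-user device as its call fd: the device
       server signals it, the kernel turns that into the stored DMOP. */
    return efd;
}

The hypothetical "evtchn fd" extension would keep the same shape, just
storing a bound event channel port in the kernel instead of a dm_op.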

Switching back to handling the eventfd in userspace would be a literal deoptimization :)

Meanwhile, throwing away the whole generic layer to build a fully integrated, use-case-specific thing sounds more difficult and tedious than this, and not necessarily desirable in general.

[1]: https://github.com/vireshk/xen-vhost-frontend/commit/06d59035f8a387c0f600931d09dfaa27b80ede7f
[2]: https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git/commit/?id=f8941e6c4c712948663ec5d7bbb546f1a0f4e3f6

~val
