
Re: Virtio in Xen on Arm (based on IOREQ concept)



(+ Andre for the vGIC).

Hi Stefano,

On 20/07/2020 21:38, Stefano Stabellini wrote:
> On Fri, 17 Jul 2020, Oleksandr wrote:
>>>> *A few words about the solution:*
>>>> As it was mentioned at [1], in order to implement virtio-mmio Xen on Arm
>>> Any plans for virtio-pci? Arm seems to be moving to the PCI bus, and
>>> it would be very interesting from an x86 PoV, as I don't think
>>> virtio-mmio is something that you can easily use on x86 (or even use
>>> at all).
>> Being honest, I haven't considered virtio-pci so far. Julien's PoC (which
>> we are based on) provides support for the virtio-mmio transport, which is
>> enough to start working on VirtIO and is not as complex as virtio-pci.
>> But it doesn't mean there is no way for virtio-pci in Xen.
>>
>> I think this could be added in the next steps. But the nearest target is
>> the virtio-mmio approach (of course, if the community agrees on that).
> Aside from complexity and ease of development, are there any other
> architectural reasons for using virtio-mmio?
From the hypervisor PoV, the main/only difference between virtio-mmio
and virtio-pci is that in the latter we need to forward PCI config space
accesses to the device emulator. IOW, we would need to add support for
vPCI. This shouldn't require much more work, but I didn't want to invest
in it for the PoC.

Long term, I don't think we should tie Xen to any of the virtio
protocols. We just need to offer facilities so users can easily build
virtio backends for Xen.
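
To make the split concrete, it could look roughly like the sketch below.
This is not the existing vPCI code and all helper names are made up; the
point is only that accesses the hypervisor doesn't handle itself would be
forwarded to the device emulator:

/* Hypothetical sketch only -- not the existing Xen vPCI code; all
 * helper names are made up. */
#include <stdbool.h>
#include <stdint.h>

/* Identifies one PCI device: segment/bus/device.function. */
typedef struct {
    uint16_t seg;
    uint8_t  bus;
    uint8_t  devfn;
} sbdf_t;

/* Registers the hypervisor chooses to emulate itself (BARs, MSI/MSI-X
 * capabilities, ...). */
bool vpci_handled_internally(sbdf_t sbdf, unsigned int reg);
uint32_t vpci_internal_access(sbdf_t sbdf, unsigned int reg,
                              unsigned int size, bool is_write,
                              uint32_t data);

/* Hand the access to the external device emulator (e.g. the virtio-pci
 * backend) over an IOREQ-style channel and wait for its reply. */
uint32_t emulator_forward_cfg(sbdf_t sbdf, unsigned int reg,
                              unsigned int size, bool is_write,
                              uint32_t data);

/* Dispatch one trapped config space access from the guest. The second
 * branch is the extra plumbing virtio-pci needs compared to virtio-mmio,
 * where every access already lands in the emulator's MMIO region. */
uint32_t vpci_cfg_access(sbdf_t sbdf, unsigned int reg, unsigned int size,
                         bool is_write, uint32_t data)
{
    if ( vpci_handled_internally(sbdf, reg) )
        return vpci_internal_access(sbdf, reg, size, is_write, data);

    return emulator_forward_cfg(sbdf, reg, size, is_write, data);
}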
> I am not asking because I intend to suggest doing something different
> (virtio-mmio is fine as far as I can tell.) I am asking because there
> was a virtio-pci/virtio-mmio discussion recently in Linaro and I would
> like to understand if there are any implications from a Xen point of
> view that I don't yet know.
virtio-mmio is going to require more work in the toolstack because we
would need to do the memory/interrupt allocation ourselves. In the case
of virtio-pci, we only need to pass a range of memory/interrupts to the
guest and let it decide the allocation.
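
To illustrate the difference, here is a toolstack-side sketch (constants
and helpers are made up): for virtio-mmio the toolstack has to pick a
register bank and an SPI per device up front and write them into the
guest's device tree, whereas for virtio-pci it would only describe the
host bridge window/interrupt range and let the guest's PCI code assign
resources inside it:

/* Hypothetical toolstack-side sketch; all values are made up. */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define VIRTIO_MMIO_BASE   UINT64_C(0x02000000)  /* assumed guest base     */
#define VIRTIO_MMIO_SIZE   0x200UL               /* one register bank      */
#define VIRTIO_SPI_FIRST   33                    /* assumed first free SPI */

struct virtio_mmio_dev {
    uint64_t base;   /* guest-physical address of the register bank */
    uint32_t spi;    /* SPI used by the backend to notify the guest */
};

/* The toolstack must pick both values per device; the guest cannot
 * discover or reassign them later. */
static void alloc_virtio_mmio(struct virtio_mmio_dev *devs, unsigned int n)
{
    for ( unsigned int i = 0; i < n; i++ )
    {
        devs[i].base = VIRTIO_MMIO_BASE + i * VIRTIO_MMIO_SIZE;
        devs[i].spi  = VIRTIO_SPI_FIRST + i;
        printf("virtio,mmio@%" PRIx64 ": reg = <0x%" PRIx64 " 0x%lx>,"
               " interrupts = <SPI %" PRIu32 ">\n",
               devs[i].base, devs[i].base, VIRTIO_MMIO_SIZE, devs[i].spi);
    }
}

int main(void)
{
    struct virtio_mmio_dev devs[3];

    alloc_virtio_mmio(devs, 3);
    return 0;
}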
Regarding virtio-pci vs virtio-mmio:
 - flexibility: virtio-mmio is a good fit when you know all your devices
   at boot. If you want to hotplug disk/network, then virtio-pci is going
   to be a better fit.
 - interrupts: I would expect each virtio-mmio device to have its own SPI.
   In the case of virtio-pci, legacy interrupts would be shared between
   all the PCI devices on the same host controller. This could possibly
   lead to performance issues if you have many devices, so for virtio-pci
   we should consider MSIs.
> For instance, what's your take on notifications with virtio-mmio? How
> are they modelled today?
The backend will notify the frontend using an SPI. The other way around 
(frontend -> backend) is based on an MMIO write.
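
Roughly, the backend side of those two directions would look like the
sketch below. This is not the PoC code and all helper names are made up;
it only shows the trapped QueueNotify write going one way and the SPI
going the other:

/* Hypothetical sketch of the two notification directions for one
 * virtio-mmio device; all helper names are made up. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define VIRTIO_MMIO_QUEUE_NOTIFY 0x050  /* register offset from the virtio spec */

struct virtio_mmio_backend {
    uint64_t base;  /* guest-physical base of this device's register bank */
    uint32_t spi;   /* SPI allocated to this device                       */
};

/* Backend -> frontend: ask the hypervisor to assert the device's SPI.
 * (Stub standing in for whatever interface Xen ends up exposing.) */
static void hyp_set_spi_level(uint32_t spi, bool asserted)
{
    printf("SPI %" PRIu32 " %s\n", spi, asserted ? "asserted" : "deasserted");
}

static void backend_notify_guest(const struct virtio_mmio_backend *be)
{
    hyp_set_spi_level(be->spi, true);
}

/* Frontend -> backend: the guest writes QueueNotify, Xen traps the MMIO
 * write and forwards it to the backend as an IOREQ-style request. */
static void backend_handle_mmio_write(struct virtio_mmio_backend *be,
                                      uint64_t addr, uint32_t value)
{
    if ( addr - be->base == VIRTIO_MMIO_QUEUE_NOTIFY )
    {
        printf("queue %" PRIu32 " kicked by the guest\n", value);
        backend_notify_guest(be);  /* e.g. once the request has completed */
    }
}

int main(void)
{
    struct virtio_mmio_backend be = { .base = 0x02000000, .spi = 33 };

    /* What the backend would see when the guest kicks queue 0. */
    backend_handle_mmio_write(&be, be.base + VIRTIO_MMIO_QUEUE_NOTIFY, 0);
    return 0;
}

(Note: the example needs <inttypes.h> for the PRIu32 macros.)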
We have an interface to allow the backend to control the interrupt level
(i.e. low or high). However, the "old" vGIC doesn't handle level
interrupts properly, so we would end up treating level interrupts as
edge.

Technically, the problem already exists with HW interrupts, but the HW
should fire the interrupt again if the line is still asserted. Another
issue is that the interrupt may fire even if the interrupt line was
deasserted (IIRC this caused some interesting problems with the Arch
timer).

I am a bit concerned that the issue will be more prominent for virtual
interrupts. I know we have some gross hack in the vpl011 to handle level
interrupts. So maybe it is time to switch to the new vGIC?
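
To show why treating a level interrupt as edge matters, here is a very
simplified model (not the vGIC code): with edge semantics the event is
latched once on assertion, so a still-asserted line is not offered again,
while the level model keeps re-sampling the line:

/* Very simplified model of the two trigger semantics; not the Xen vGIC. */
#include <stdbool.h>
#include <stdio.h>

struct virq {
    bool line_asserted;   /* current state of the virtual interrupt line  */
    bool pending_latched; /* edge model: one-shot latch set on 0->1 edges */
};

/* Edge model: only the rising edge is recorded. If the line is still
 * asserted after the guest handled the interrupt, nothing re-fires; if
 * the line was deasserted after the latch, the guest still gets one. */
static bool edge_should_inject(struct virq *v)
{
    bool fire = v->pending_latched;

    v->pending_latched = false;
    return fire;
}

/* Level model: the line is re-sampled, so the interrupt keeps being
 * offered for as long as the backend keeps the line asserted. */
static bool level_should_inject(const struct virq *v)
{
    return v->line_asserted;
}

int main(void)
{
    /* Line asserted, edge latched once. */
    struct virq v = { .line_asserted = true, .pending_latched = true };

    printf("first pass:  edge=%d level=%d\n",
           edge_should_inject(&v), level_should_inject(&v));
    /* The backend still has work pending, so the line stays asserted,
     * but the edge model has already consumed its latch. */
    printf("second pass: edge=%d level=%d\n",
           edge_should_inject(&v), level_should_inject(&v));
    return 0;
}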
> Are they good enough or do we need MSIs?
I am not sure whether virtio-mmio supports MSIs. However, for virtio-pci,
MSIs are going to be useful to improve performance. This may mean exposing
an ITS, so we would need to add guest support for it.
Cheers,

--
Julien Grall
