
Re: [RFC PATCH 2/6] xen/public: arch-arm: reserve resources for virtio-pci



On Thu, 16 Nov 2023, Volodymyr Babchuk wrote:
> Hi Stefano,
> 
> Stefano Stabellini <sstabellini@xxxxxxxxxx> writes:
> 
> > + Stewart, Vikram
> >
> > On Wed, 15 Nov 2023, Oleksandr Tyshchenko wrote:
> >> On 15.11.23 14:33, Julien Grall wrote:
> >> > Thanks for adding support for virtio-pci in Xen. I have some questions.
> >> > 
> >> > On 15/11/2023 11:26, Sergiy Kibrik wrote:
> >> >> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@xxxxxxxx>
> >> >>
> >> >> In order to enable more use-cases such as having multiple
> >> >> device-models (Qemu) running in different backend domains which provide
> >> >> virtio-pci devices for the same guest, we allocate and expose one
> >> >> PCI host bridge for every virtio backend domain for that guest.
> >> > 
> >> > OOI, why do you need to expose one PCI host bridge for every stubdomain?
> >> > 
> >> > In fact, looking at the next patch, it seems you are handling some of the 
> >> > hostbridge requests in Xen. This adds a bit more confusion.
> >> > 
> >> > I was expecting the virtual PCI devices would be in vPCI and each 
> >> > device emulator would advertise which BDFs it is covering.
> >> 
> >> 
> >> This patch series only covers use-cases where the device emulator (i.e. 
> >> Qemu) handles the *entire* PCI Host bridge and the PCI (virtio-pci) 
> >> devices behind it. Also, this patch series doesn't touch vPCI/PCI 
> >> pass-through resources, handling, accounting, nothing. From the 
> >> hypervisor we only need help to intercept the config space accesses that 
> >> happen in the range [GUEST_VIRTIO_PCI_ECAM_BASE ... 
> >> GUEST_VIRTIO_PCI_ECAM_BASE + GUEST_VIRTIO_PCI_TOTAL_ECAM_SIZE] and 
> >> forward them to the linked device emulator (if any), that's all.
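> >> 
> >> (Purely for illustration, a minimal sketch of the kind of range check
> >> meant here; only the constant names come from the series, the values
> >> and the handler shape are made up:)
> >> 
> >>     #include <stdbool.h>
> >>     #include <stdint.h>
> >> 
> >>     typedef uint64_t paddr_t;
> >> 
> >>     /* Illustrative values only; the real layout is whatever the
> >>      * series defines in public/arch-arm.h. */
> >>     #define GUEST_VIRTIO_PCI_ECAM_BASE        0x33000000ULL
> >>     #define GUEST_VIRTIO_PCI_TOTAL_ECAM_SIZE  0x01000000ULL
> >> 
> >>     /* True if a trapped guest physical address falls inside the
> >>      * window whose accesses get forwarded to the linked device
> >>      * emulator. */
> >>     static bool is_virtio_pci_ecam_access(paddr_t gpa)
> >>     {
> >>         return gpa >= GUEST_VIRTIO_PCI_ECAM_BASE &&
> >>                gpa < GUEST_VIRTIO_PCI_ECAM_BASE +
> >>                      GUEST_VIRTIO_PCI_TOTAL_ECAM_SIZE;
> >>     }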
> >> 
> >> It is not possible (with the current series) to run device emulators that
> >> emulate only separate PCI (virtio-pci) devices. For that to be possible, I 
> >> think, many more changes are required than the current patch series makes. 
> >> At the very least there should be special PCI Host bridge emulation in Xen 
> >> (or a reuse of vPCI) for the integration. Also, Xen should be in charge of 
> >> forming the resulting PCI interrupt based on each PCI device's level 
> >> signaling (if we use legacy interrupts), via some kind of x86's 
> >> XEN_DMOP_set_pci_intx_level, etc. Please note, I am not saying this is not 
> >> possible in general; likely it is possible, but the initial patch series 
> >> doesn't cover these use-cases.
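> >> 
> >> (For reference only, on x86 a device emulator signals legacy INTx
> >> roughly as sketched below via libxendevicemodel; whether an
> >> equivalent path can exist on Arm is exactly the gap described above,
> >> and the BDF values here are made up:)
> >> 
> >>     #include <xendevicemodel.h>
> >> 
> >>     /* Sketch: assert INTA# of virtual device 00:03.0 for domain
> >>      * domid, as a device emulator would after the device raised
> >>      * its interrupt line. */
> >>     static int assert_intx(xendevicemodel_handle *dmod, domid_t domid)
> >>     {
> >>         return xendevicemodel_set_pci_intx_level(dmod, domid,
> >>                                                  0 /* segment */,
> >>                                                  0 /* bus */,
> >>                                                  3 /* device */,
> >>                                                  0 /* INTA */,
> >>                                                  1 /* assert */);
> >>     }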
> >>
> >> We expose one PCI host bridge per virtio backend domain. This is a 
> >> separate PCI host bridge to combine all virtio-pci devices running in 
> >> the same backend domain (in the same device emulator currently).
> >> For example:
> >> - if only one domain runs Qemu, which serves virtio-blk, virtio-net and 
> >> virtio-console devices for DomU, only a single PCI Host bridge will be 
> >> exposed to DomU
> >> - if we add another domain running Qemu to additionally serve virtio-gpu, 
> >> virtio-input and virtio-snd for the *same* DomU, we expose a second PCI 
> >> Host bridge to DomU
> >> 
> >> I am afraid we cannot end up exposing only a single PCI Host bridge with 
> >> the current model (i.e. with device emulators running in different domains, 
> >> each handling an *entire* PCI Host bridge); this won't work.
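> >> 
> >> (Again just a sketch to picture the layout, building on the
> >> illustrative defines above: one fixed-size ECAM slot per backend's
> >> host bridge, carved out of the total reserved window; the per-bridge
> >> size macro and the max-bridges count are assumed names and values,
> >> not necessarily what the series uses:)
> >> 
> >>     /* Where the ECAM window of the Nth virtio backend's PCI Host
> >>      * bridge would start, assuming equally sized slots inside the
> >>      * total reserved window. */
> >>     #define GUEST_VIRTIO_PCI_MAX_HOST_BRIDGES 8 /* assumed */
> >>     #define GUEST_VIRTIO_PCI_HOST_ECAM_SIZE \
> >>         (GUEST_VIRTIO_PCI_TOTAL_ECAM_SIZE / \
> >>          GUEST_VIRTIO_PCI_MAX_HOST_BRIDGES)
> >> 
> >>     static inline uint64_t virtio_pci_ecam_base(unsigned int bridge_idx)
> >>     {
> >>         return GUEST_VIRTIO_PCI_ECAM_BASE +
> >>                (uint64_t)bridge_idx * GUEST_VIRTIO_PCI_HOST_ECAM_SIZE;
> >>     }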
> >  
> >
> > We were discussing the topic of vPCI and Virtio PCI just this morning
> > with Stewart and Vikram. We also intend to make them work well together
> > in the next couple of months (great timing!!)
> >
> > However, our thinking is to go with the other approach Julien
> > suggested: a single PCI Root Complex emulated in Xen by vPCI. QEMU would
> > register individual PCI devices against it.
> >
> > Vikram, Stewart, please comment. Our understanding is that it should be
> > possible to make QEMU virtio-pci work with vPCI with relatively minor
> > efforts and AMD volunteers to do the work in the next couple of months
> > on the vPCI side.
> >
> >
> > Although it should be possible to make both approaches work at the same
> > time, given that it would seem that EPAM and AMD have very similar
> > requirements, I suggest we work together and collaborate on a single
> > approach going forward that works best for everyone.
> >
> >
> > Let me start by saying that if we can get away with it, I think that a
> > single PCI Root Complex in Xen would be best because it requires less
> > complexity. Why emulate two or three PCI Root Complexes if we can emulate only
> > one?
> 
> Well, in fact we tried a similar setup; this was in the first version of
> virtio-pci support. But we had a couple of issues with it. For
> instance, it might conflict with PCI passthrough devices, with virtio
> devices that have back-ends in different domains, etc. I am not saying
> that this is impossible, but it just involves more moving parts.
> 
> With my vPCI patch series in place, the hypervisor itself assigns BDFs for
> passed-through devices. The toolstack needs to get this information to know
> which BDFs are free and can be used by virtio-pci.

I'll premise that I don't really have an opinion on how the virtual BDF
allocation should happen.

But I'll ask the opposite question to the one Julien asked: if it is Xen
that does the allocation, that's fine; but then couldn't we arrange for Xen
to also do the allocation in the toolstack case (simply by picking the
first available virtual BDF)?

One way or the other it should be OK as long as we are consistent.
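
(To make "first available virtual BDF" concrete, here is a rough sketch of
the kind of allocator I have in mind; it is not taken from any existing
series, it assumes a single emulated bus with function 0 only, and the
names are made up:)

    #include <stdbool.h>

    /* One emulated bus, device slots 0-31, function 0 only (assumed). */
    #define VBDF_MAX_DEVICES 32

    struct vbdf_allocator {
        bool used[VBDF_MAX_DEVICES];   /* marked for passthrough or virtio-pci */
    };

    /* Return the first free devfn (device << 3), or -1 if the bus is full.
     * Whether this lives in Xen or in the toolstack is the open question;
     * the point is that both would converge on the same answer. */
    static int vbdf_alloc(struct vbdf_allocator *a)
    {
        for (unsigned int dev = 0; dev < VBDF_MAX_DEVICES; dev++) {
            if (!a->used[dev]) {
                a->used[dev] = true;
                return dev << 3;
            }
        }
        return -1;
    }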



 

