
Re: [RFC PATCH 2/6] xen/public: arch-arm: reserve resources for virtio-pci


  • To: Stefano Stabellini <sstabellini@xxxxxxxxxx>
  • From: Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>
  • Date: Fri, 17 Nov 2023 00:23:20 +0000
  • Accept-language: en-US
  • Cc: Oleksandr Tyshchenko <Oleksandr_Tyshchenko@xxxxxxxx>, Julien Grall <julien@xxxxxxx>, Sergiy Kibrik <Sergiy_Kibrik@xxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Bertrand Marquis <bertrand.marquis@xxxxxxx>, Michal Orzel <michal.orzel@xxxxxxx>, "vikram.garhwal@xxxxxxx" <vikram.garhwal@xxxxxxx>, Stewart Hildebrand <stewart.hildebrand@xxxxxxx>
  • Delivery-date: Fri, 17 Nov 2023 00:23:49 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Thread-topic: [RFC PATCH 2/6] xen/public: arch-arm: reserve resources for virtio-pci

Hi Stefano,

Stefano Stabellini <sstabellini@xxxxxxxxxx> writes:

> On Thu, 16 Nov 2023, Volodymyr Babchuk wrote:
>> Hi Stefano,
>> 
>> Stefano Stabellini <sstabellini@xxxxxxxxxx> writes:
>> 
>> > + Stewart, Vikram
>> >
>> > On Wed, 15 Nov 2023, Oleksandr Tyshchenko wrote:
>> >> On 15.11.23 14:33, Julien Grall wrote:
>> >> > Thanks for adding support for virtio-pci in Xen. I have some questions.
>> >> > 
>> >> > On 15/11/2023 11:26, Sergiy Kibrik wrote:
>> >> >> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@xxxxxxxx>
>> >> >>
>> >> >> In order to enable more use-cases such as having multiple
>> >> >> device-models (Qemu) running in different backend domains which provide
>> >> >> virtio-pci devices for the same guest, we allocate and expose one
>> >> >> PCI host bridge for every virtio backend domain for that guest.
>> >> > 
>> >> > OOI, why do you need to expose one PCI host bridge for every stubdomain?
>> >> > 
>> >> > In fact, looking at the next patch, it seems you are handling some of
>> >> > the hostbridge requests in Xen. This adds a bit more confusion.
>> >> > 
>> >> > I was expecting the virtual PCI devices would be in vPCI and each
>> >> > device emulator would advertise which BDFs it is covering.
>> >> 
>> >> 
>> >> This patch series only covers use-cases where the device emulator
>> >> handles the *entire* PCI Host bridge and the PCI (virtio-pci) devices
>> >> behind it (i.e. Qemu). Also, this patch series doesn't touch vPCI/PCI
>> >> pass-through resources, handling or accounting at all. From the
>> >> hypervisor we only need help to intercept the config space accesses that
>> >> happen in the range [GUEST_VIRTIO_PCI_ECAM_BASE ...
>> >> GUEST_VIRTIO_PCI_ECAM_BASE + GUEST_VIRTIO_PCI_TOTAL_ECAM_SIZE] and
>> >> forward them to the linked device emulator (if any), that's all.
>> >> 
>> >> It is not possible (with the current series) to run device emulators
>> >> that emulate only individual PCI (virtio-pci) devices. For that to be
>> >> possible, I think, many more changes are required than the current patch
>> >> series makes. At the very least there should be a dedicated PCI Host
>> >> bridge emulation in Xen (or vPCI should be reused) for the integration.
>> >> Also, Xen should be in charge of forming the resulting PCI interrupt
>> >> based on each PCI device's level signaling (if we use legacy
>> >> interrupts), i.e. some kind of x86's XEN_DMOP_set_pci_intx_level, etc.
>> >> Please note, I am not saying this is not possible in general; likely it
>> >> is possible, but the initial patch series doesn't cover these use-cases.
>> >>
>> >> We expose one PCI host bridge per virtio backend domain. This is a
>> >> separate PCI host bridge that groups all virtio-pci devices running in
>> >> the same backend domain (currently, in the same device emulator).
>> >> For example:
>> >> - if only one domain runs Qemu, which serves virtio-blk, virtio-net and
>> >> virtio-console devices for DomU - only a single PCI Host bridge will be
>> >> exposed for DomU
>> >> - if we add another domain running Qemu to additionally serve virtio-gpu,
>> >> virtio-input and virtio-snd for the *same* DomU - we expose a second PCI
>> >> Host bridge for DomU
>> >> 
>> >> I am afraid we cannot end up exposing only a single PCI Host bridge with
>> >> the current model (if we use device emulators running in different
>> >> domains that each handle an *entire* PCI Host bridge); this won't work.
>> >  
>> >
>> > We were discussing the topic of vPCI and Virtio PCI just this morning
>> > with Stewart and Vikram. We also intend to make them work well together
>> > in the next couple of months (great timing!!)
>> >
>> > However, our thinking is to go with the other approach Julien
>> > suggested: a single PCI Root Complex emulated in Xen by vPCI. QEMU would
>> > register individual PCI devices against it.
>> >
>> > Vikram, Stewart, please comment. Our understanding is that it should be
>> > possible to make QEMU virtio-pci work with vPCI with relatively minor
>> > efforts and AMD volunteers to do the work in the next couple of months
>> > on the vPCI side.
>> >
>> >
>> > Although it should be possible to make both approaches work at the same
>> > time, given that it would seem that EPAM and AMD have very similar
>> > requirements, I suggest we work together and collaborate on a single
>> > approach going forward that works best for everyone.
>> >
>> >
>> > Let me start by saying that if we can get away with it, I think that a
>> > single PCI Root Complex in Xen would be best because it requires less
>> > complexity. Why emulate 2/3 PCI Root Complexes if we can emulate only
>> > one?
>> 
>> Well, in fact we tried a similar setup in the first version of the
>> virtio-pci support, but we had a couple of issues with it. For
>> instance, it might conflict with PCI passthrough devices, or with virtio
>> devices that have back-ends in different domains, etc. I am not saying
>> that this is impossible, but it just involves more moving parts.
>> 
>> With my vPCI patch series in place, the hypervisor itself assigns BDFs
>> for passed-through devices. The toolstack needs this information to know
>> which BDFs are free and can be used by virtio-pci.
>
> I'll premise that I don't really have an opinion on how the virtual BDF
> allocation should happen.
>
> But I'll ask the opposite of the question Julien asked: if it is Xen that
> does the allocation, that's fine; but then couldn't we arrange for Xen to
> also do the allocation in the toolstack case (simply by picking the first
> available virtual BDF)?

Actually, this was my intention as well. As I said in another email, we
just need to extend an existing domctl, or add a new one, to manage
vBDFs.
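
To make that concrete, the interface could look something like the sketch
below. All names, numbers and fields are hypothetical, just to illustrate
"extend or add a domctl"; the real interface would of course need to be
agreed on and live in the public headers.

/*
 * Hypothetical domctl for reserving a virtual BDF. Everything here is
 * illustrative only, not the actual Xen interface.
 */
#include <stdint.h>

#define XEN_DOMCTL_vbdf_assign  0x55        /* made-up sub-op number */

struct xen_domctl_vbdf_assign {
#define XEN_DOMCTL_VBDF_ANY     0xffffffff  /* let Xen pick the first free one */
    uint32_t vbdf;          /* IN: requested BDF, or XEN_DOMCTL_VBDF_ANY */
    uint32_t assigned_vbdf; /* OUT: BDF actually reserved by Xen */
};

/* Hypervisor side, sketch: pick the first unused devfn on the emulated bus. */
#define VBDF_SLOTS 256
static int vbdf_pick_first_free(const uint8_t used[VBDF_SLOTS])
{
    unsigned int i;

    for ( i = 0; i < VBDF_SLOTS; i++ )
        if ( !used[i] )
            return (int)i;      /* bus 0, devfn i */

    return -1;                  /* no free slot left */
}

That way the toolstack simply asks Xen for a vBDF when it creates a
virtio-pci device, and Xen stays the single owner of the vBDF space for
both passed-through and virtio-pci devices.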

-- 
WBR, Volodymyr

