
Re: [RFC PATCH 2/6] xen/public: arch-arm: reserve resources for virtio-pci


  • To: Stefano Stabellini <sstabellini@xxxxxxxxxx>
  • From: Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>
  • Date: Thu, 16 Nov 2023 15:07:10 +0000
  • Accept-language: en-US
  • Cc: Oleksandr Tyshchenko <Oleksandr_Tyshchenko@xxxxxxxx>, Julien Grall <julien@xxxxxxx>, Sergiy Kibrik <Sergiy_Kibrik@xxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Bertrand Marquis <bertrand.marquis@xxxxxxx>, Michal Orzel <michal.orzel@xxxxxxx>, "vikram.garhwal@xxxxxxx" <vikram.garhwal@xxxxxxx>, Stewart Hildebrand <stewart.hildebrand@xxxxxxx>
  • Delivery-date: Thu, 16 Nov 2023 15:16:38 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Thread-index: AQHaF7/0FLT7PaxCl0mKzTq8kNpAxLB7mIqAgABuuwCAAP6dAA==
  • Thread-topic: [RFC PATCH 2/6] xen/public: arch-arm: reserve resources for virtio-pci

Hi Stefano,

Stefano Stabellini <sstabellini@xxxxxxxxxx> writes:

> + Stewart, Vikram
>
> On Wed, 15 Nov 2023, Oleksandr Tyshchenko wrote:
>> On 15.11.23 14:33, Julien Grall wrote:
>> > Thanks for adding support for virtio-pci in Xen. I have some questions.
>> > 
>> > On 15/11/2023 11:26, Sergiy Kibrik wrote:
>> >> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@xxxxxxxx>
>> >>
>> >> In order to enable more use-cases such as having multiple
>> >> device-models (Qemu) running in different backend domains which provide
>> >> virtio-pci devices for the same guest, we allocate and expose one
>> >> PCI host bridge for every virtio backend domain for that guest.
>> > 
>> > OOI, why do you need to expose one PCI host bridge for every stubdomain?
>> > 
>> > In fact, looking at the next patch, it seems you are handling some of the 
>> > hostbridge requests in Xen. This adds a bit more confusion.
>> > 
>> > I was expecting the virtual PCI device would be in the vPCI and each 
>> > Device emulator would advertise which BDF they are covering.
>> 
>> 
>> This patch series only covers use-cases where the device emulator 
>> (i.e. Qemu) handles the *entire* PCI Host bridge and the PCI (virtio-pci) 
>> devices behind it. Also, this patch series doesn't touch vPCI/PCI 
>> pass-through resources, handling, or accounting at all. From the 
>> hypervisor we only need help to intercept the config space accesses 
>> that happen in the range [GUEST_VIRTIO_PCI_ECAM_BASE ... 
>> GUEST_VIRTIO_PCI_ECAM_BASE + GUEST_VIRTIO_PCI_TOTAL_ECAM_SIZE] and 
>> forward them to the linked device emulator (if any); that's all.
>> 
>> It is not possible (with the current series) to run device emulators that
>> emulate only separate PCI (virtio-pci) devices. For that to be possible, I 
>> think, many more changes are required than the current patch series makes. 
>> At the very least there should be dedicated PCI Host bridge emulation in 
>> Xen (or reuse of vPCI) for the integration. Xen would also need to be in 
>> charge of forming the resulting PCI interrupt based on each PCI device's 
>> level signaling (if we use legacy interrupts), some kind of x86's 
>> XEN_DMOP_set_pci_intx_level, etc. Please note, I am not saying this is 
>> impossible in general; it is likely possible, but the initial patch series 
>> doesn't cover these use-cases.
>>
>> We expose one PCI host bridge per virtio backend domain. This is a 
>> separate PCI host bridge that combines all virtio-pci devices running in 
>> the same backend domain (currently in the same device emulator).
>> For example:
>> - if only one domain runs Qemu, which serves virtio-blk, virtio-net and 
>> virtio-console devices for DomU, only a single PCI Host bridge is 
>> exposed to DomU;
>> - if we add another domain running Qemu to additionally serve virtio-gpu, 
>> virtio-input and virtio-snd for the *same* DomU, we expose a second PCI 
>> Host bridge to DomU.
>> 
>> I am afraid we cannot end up exposing only a single PCI Host bridge with 
>> the current model (where device emulators running in different domains 
>> each handle an *entire* PCI Host bridge); this won't work.
>  
>
> We were discussing the topic of vPCI and Virtio PCI just this morning
> with Stewart and Vikram. We also intend to make them work well together
> in the next couple of months (great timing!!)
>
> However, our thinking is to go with the other approach Julien
> suggested: a single PCI Root Complex emulated in Xen by vPCI. QEMU would
> register individual PCI devices against it.
>
> Vikram, Stewart, please comment. Our understanding is that it should be
> possible to make QEMU virtio-pci work with vPCI with relatively minor
> effort, and AMD volunteers to do the work in the next couple of months
> on the vPCI side.
>
>
> Although it should be possible to make both approaches work at the same
> time, given that it would seem that EPAM and AMD have very similar
> requirements, I suggest we work together and collaborate on a single
> approach going forward that works best for everyone.
>
>
> Let me start by saying that if we can get away with it, I think that a
> single PCI Root Complex in Xen would be best because it requires less
> complexity. Why emulate 2/3 PCI Root Complexes if we can emulate only
> one?
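
To make the interception described above a bit more concrete for other
readers: it is essentially address arithmetic over the reserved ECAM
window. Below is a rough sketch of the decoding step; the constant values
and the per-bridge size macro are assumptions for illustration, not the
actual patch code:

    #include <stdbool.h>
    #include <stdint.h>

    /* Illustrative values only -- the real layout is what the patched
     * public/arch-arm.h reserves. */
    #define GUEST_VIRTIO_PCI_ECAM_BASE        0x33000000UL
    #define GUEST_VIRTIO_PCI_HOST_ECAM_SIZE   0x100000UL  /* one bus per bridge (assumed) */
    #define GUEST_VIRTIO_PCI_HOST_NUM         4           /* bridges per guest (assumed) */
    #define GUEST_VIRTIO_PCI_TOTAL_ECAM_SIZE  \
        (GUEST_VIRTIO_PCI_HOST_ECAM_SIZE * GUEST_VIRTIO_PCI_HOST_NUM)

    struct ecam_access {
        unsigned int bridge;  /* which backend domain / device emulator */
        unsigned int devfn;   /* device/function within that bridge */
        unsigned int reg;     /* config space register offset */
    };

    static bool decode_virtio_pci_ecam(uint64_t gpa, struct ecam_access *out)
    {
        if ( gpa < GUEST_VIRTIO_PCI_ECAM_BASE ||
             gpa >= GUEST_VIRTIO_PCI_ECAM_BASE + GUEST_VIRTIO_PCI_TOTAL_ECAM_SIZE )
            return false;                 /* not in the virtio-pci window */

        uint64_t off = gpa - GUEST_VIRTIO_PCI_ECAM_BASE;

        out->bridge = off / GUEST_VIRTIO_PCI_HOST_ECAM_SIZE;
        off %= GUEST_VIRTIO_PCI_HOST_ECAM_SIZE;
        out->devfn = off >> 12;           /* ECAM: one 4K page per function */
        out->reg   = off & 0xfff;
        return true;
    }

The decoded bridge index is what selects the device emulator (in the
corresponding backend domain) to which the access is forwarded.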

Well, in fact we tried a similar setup; this was in the first version of
the virtio-pci support. But we had a couple of issues with it. For
instance, it might conflict with PCI passthrough devices, or with virtio
devices that have back-ends in different domains, etc. I am not saying
that this is impossible, but it simply involves more moving parts.

With my vPCI patch series in place, the hypervisor itself assigns BDFs
for passed-through devices. The toolstack needs to get this information
to know which BDFs are free and can be used by virtio-pci.
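
Conceptually, the accounting the toolstack would need is just a
per-domain map of occupied slots on the virtual root bus, something along
these lines (a toy sketch of the idea, not the vPCI code; how the
toolstack would actually read this information is the open question):

    #include <stdint.h>

    #define VBDF_MAX_DEVFN 256  /* 32 devices x 8 functions on virtual bus 0 */

    struct vbdf_pool {
        uint8_t used[VBDF_MAX_DEVFN / 8];
    };

    /* Mark a devfn already taken, e.g. by a passed-through device that
     * the hypervisor has placed on the virtual bus. */
    static void vbdf_reserve(struct vbdf_pool *p, unsigned int devfn)
    {
        p->used[devfn / 8] |= 1u << (devfn % 8);
    }

    /* Hand out the first free devfn for a virtio-pci device, or -1 if
     * the virtual bus is full. */
    static int vbdf_alloc(struct vbdf_pool *p)
    {
        for ( unsigned int devfn = 0; devfn < VBDF_MAX_DEVFN; devfn++ )
            if ( !(p->used[devfn / 8] & (1u << (devfn % 8))) )
            {
                vbdf_reserve(p, devfn);
                return devfn;   /* dev = devfn >> 3, fn = devfn & 7 */
            }
        return -1;
    }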

-- 
WBR, Volodymyr


 

