
Re: [RFC PATCH 2/6] xen/public: arch-arm: reserve resources for virtio-pci


  • To: Julien Grall <julien@xxxxxxx>, Sergiy Kibrik <Sergiy_Kibrik@xxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • From: Oleksandr Tyshchenko <Oleksandr_Tyshchenko@xxxxxxxx>
  • Date: Wed, 15 Nov 2023 19:38:58 +0000
  • Cc: Stefano Stabellini <sstabellini@xxxxxxxxxx>, Bertrand Marquis <bertrand.marquis@xxxxxxx>, Michal Orzel <michal.orzel@xxxxxxx>, Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>
  • Delivery-date: Wed, 15 Nov 2023 19:39:28 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>


On 15.11.23 20:33, Julien Grall wrote:
> Hi Oleksandr,

Hello Julien


> 
> On 15/11/2023 18:14, Oleksandr Tyshchenko wrote:
>> On 15.11.23 19:31, Julien Grall wrote:
>>> On 15/11/2023 16:51, Oleksandr Tyshchenko wrote:
>>>> On 15.11.23 14:33, Julien Grall wrote:
>>>>> Thanks for adding support for virtio-pci in Xen. I have some 
>>>>> questions.
>>>>>
>>>>> On 15/11/2023 11:26, Sergiy Kibrik wrote:
>>>>>> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@xxxxxxxx>
>>>>>>
>>>>>> In order to enable more use-cases such as having multiple
>>>>>> device-models (Qemu) running in different backend domains which 
>>>>>> provide
>>>>>> virtio-pci devices for the same guest, we allocate and expose one
>>>>>> PCI host bridge for every virtio backend domain for that guest.
>>>>>
>>>>> OOI, why do you need to expose one PCI host bridge for every 
>>>>> stubdomain?
>>>>>
>>>>> In fact, looking at the next patch, it seems you are handling some
>>>>> of the hostbridge requests in Xen. This adds a bit more confusion.
>>>>>
>>>>> I was expecting the virtual PCI device would live in vPCI and each
>>>>> Device emulator would advertise which BDFs it is covering.
>>>>
>>>>
>>>> This patch series only covers use-cases where the device emulator
>>>> (i.e. Qemu) handles the *entire* PCI host bridge and the PCI
>>>> (virtio-pci) devices behind it. Also, this patch series doesn't touch
>>>> vPCI/PCI pass-through resources, handling, or accounting at all.
>>>
>>> I understood you want one Device Emulator to handle the entire PCI
>>> host bridge. But...
>>>
>>>> From the hypervisor we only need help to intercept the config space
>>>> accesses that happen in the range [GUEST_VIRTIO_PCI_ECAM_BASE ...
>>>> GUEST_VIRTIO_PCI_ECAM_BASE + GUEST_VIRTIO_PCI_TOTAL_ECAM_SIZE] and
>>>> forward them to the linked device emulator (if any), that's all.
>>>
>>> ... I really don't see why you need to add code in Xen to trap the
>>> region. If QEMU is dealing with the hostbridge, then it should be able
>>> to register the MMIO region and then do the translation.
>>
>>
>> Hmm, that sounds surprising, I would say. Are you saying that an
>> unmodified Qemu will work if we drop #5?
> 
> I don't know if an unmodified QEMU will work. My point is I don't view
> the patch in Xen as necessary. You should be able to tell QEMU "here is
> the ECAM region, please emulate a hostbridge". QEMU will then register
> the region with Xen and all the accesses will be forwarded.
>
> In the future we may need a patch similar to #5 if we want to have 
> multiple DM using the same hostbridge. But this is a different 
> discussion and the patch would need some rework.


ok
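
To make the suggested flow concrete: a minimal sketch of how a device
model could hand the whole ECAM window to Xen through the existing
libxendevicemodel calls (assuming the backend emulates the entire
hostbridge; the function name and error handling are illustrative, not
code from this series):

    #include <stdint.h>
    #include <xendevicemodel.h>

    static int register_virtio_ecam(domid_t domid, uint64_t base,
                                    uint64_t size)
    {
        xendevicemodel_handle *dmod = xendevicemodel_open(NULL, 0);
        ioservid_t id;
        int rc;

        if ( !dmod )
            return -1;

        /* Ask Xen for an IOREQ server owned by this device model. */
        rc = xendevicemodel_create_ioreq_server(dmod, domid,
                                                HVM_IOREQSRV_BUFIOREQ_OFF,
                                                &id);
        if ( rc )
            return rc;

        /* Every access in [base, base + size) is forwarded to it. */
        return xendevicemodel_map_io_range_to_ioreq_server(dmod, domid,
                                                           id, 1 /* MMIO */,
                                                           base,
                                                           base + size - 1);
    }

With this, the hostbridge emulation stays entirely in the device model,
which is the point being made above.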

> 
> The ioreq.c code was always meant to be generic and is used for every
> emulated MMIO, so you want to limit any changes to it. Checking that
> the MMIO region belongs to the hostbridge and doing the translation in
> ioreq.c is IMHO not a good idea. Instead you want to do the conversion
> from MMIO to (sbdf, offset) in virtio_pci_mmio{read, write}(). The job
> of ioreq.c is then simply to find the correct Device Model and forward
> the access to it.
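The conversion Julien mentions is cheap: in ECAM every PCI function owns
a 4K config window, so bus/device/function live in address bits [27:12]
and the register offset in bits [11:0]. A minimal sketch of that decode
(illustrative names, not the series' actual code):

    /* ECAM layout: addr = base + (bus << 20) + (dev << 15) + (fn << 12)
     * + reg, i.e. one 4K config window per function. */
    static void ecam_decode(paddr_t addr, paddr_t base,
                            uint16_t *bdf, unsigned int *reg)
    {
        paddr_t off = addr - base;

        *bdf = off >> 12;    /* bus in bits [15:8], dev [7:3], fn [2:0] */
        *reg = off & 0xfff;  /* config space register offset */
    }

Keeping this decode inside virtio_pci_mmio{read,write}() leaves ioreq.c
knowing nothing about hostbridges.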



Are you referring to virtio_pci_ioreq_server_get_addr() called from
arch_ioreq_server_get_type_addr()? If so, and if I am not mistaken, the
x86 code also checks which PCI device is targeted there.

But I am not against the suggestion; I agree with it.
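For comparison, the x86 path referred to above rewrites a 0xcf8/0xcfc
config access into a PCI-typed IOREQ before dispatch; roughly like the
following (a simplified paraphrase of xen/arch/x86/hvm/ioreq.c from
memory, not a verbatim quote):

    /* A PIO access to 0xcfc (with CF8 enabled) is retyped so that each
     * device model can claim individual BDFs instead of MMIO ranges. */
    if ( p->type == IOREQ_TYPE_PIO && (p->addr & ~3) == 0xcfc )
    {
        *type = XEN_DMOP_IO_RANGE_PCI;
        *addr = ((uint64_t)sbdf.sbdf << 32) | reg; /* sbdf decoded from CF8 */
    }

So x86 does inspect the targeted PCI device there, but only to pick the
right ioreq server, not to emulate anything itself.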


> 
> I also don't see why the feature is gated by has_vpci(). They are two
> distinct features (at least in your current model).

Yes, you are correct. In #5 the virtio-pci MMIO handlers are still
registered in domain_vpci_init() (which is gated by has_vpci()), etc.
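Decoupling the gating could be as small as a separate predicate next to
the vPCI one; a hypothetical sketch of that direction (has_virtio_pci()
and domain_virtio_pci_init() are made-up names, not the series' code):

    /* In the arch domain init path: gate the two features separately
     * instead of registering the virtio-pci handlers inside
     * domain_vpci_init(). */
    if ( has_vpci(d) )
        domain_vpci_init(d);

    if ( has_virtio_pci(d) )
        domain_virtio_pci_init(d);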


> 
> Cheers,
> 

 

