
Re: [RFC PATCH 2/6] xen/public: arch-arm: reserve resources for virtio-pci



Hi,

Thanks for adding support for virtio-pci in Xen. I have some questions.

On 15/11/2023 11:26, Sergiy Kibrik wrote:
From: Oleksandr Tyshchenko <oleksandr_tyshchenko@xxxxxxxx>

In order to enable more use-cases such as having multiple
device-models (Qemu) running in different backend domains which provide
virtio-pci devices for the same guest, we allocate and expose one
PCI host bridge for every virtio backend domain for that guest.

OOI, why do you need to expose one PCI host bridge for every stubdomain?

In fact, looking at the next patch, it seems you are handling some of the host bridge requests in Xen. This adds a bit more confusion.

I was expecting the virtual PCI device would be in the vPCI and each Device emulator would advertise which BDF they are covering.


For that purpose, reserve separate virtio-pci resources (memory and SPI range
for Legacy PCI interrupts) up to 8 possible PCI hosts (to be aligned with

Do you mean host bridge rather than host?

MAX_NR_IOREQ_SERVERS) and allocate a host per backend domain. The PCI host
details including its host_id to be written to dedicated Xenstore node for
the device-model to retrieve.

So with this approach, who decides which BDF will be used for a given virtio PCI device?


Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@xxxxxxxx>
Signed-off-by: Sergiy Kibrik <Sergiy_Kibrik@xxxxxxxx>
---
  xen/include/public/arch-arm.h | 21 +++++++++++++++++++++
  1 file changed, 21 insertions(+)

diff --git a/xen/include/public/arch-arm.h b/xen/include/public/arch-arm.h
index a25e87dbda..e6c9cd5335 100644
--- a/xen/include/public/arch-arm.h
+++ b/xen/include/public/arch-arm.h
@@ -466,6 +466,19 @@ typedef uint64_t xen_callback_t;
  #define GUEST_VPCI_MEM_ADDR                 xen_mk_ullong(0x23000000)
  #define GUEST_VPCI_MEM_SIZE                 xen_mk_ullong(0x10000000)
+/*
+ * 16 MB is reserved for virtio-pci configuration space based on calculation
+ * 8 bridges * 2 buses x 32 devices x 8 functions x 4 KB = 16 MB

Can you explain how you decided on the "2"?

+ */
+#define GUEST_VIRTIO_PCI_ECAM_BASE          xen_mk_ullong(0x33000000)
+#define GUEST_VIRTIO_PCI_TOTAL_ECAM_SIZE    xen_mk_ullong(0x01000000)
+#define GUEST_VIRTIO_PCI_HOST_ECAM_SIZE     xen_mk_ullong(0x00200000)
+
+/* 64 MB is reserved for virtio-pci memory */
+#define GUEST_VIRTIO_PCI_ADDR_TYPE_MEM    xen_mk_ullong(0x02000000)
+#define GUEST_VIRTIO_PCI_MEM_ADDR         xen_mk_ullong(0x34000000)
+#define GUEST_VIRTIO_PCI_MEM_SIZE         xen_mk_ullong(0x04000000)
+
  /*
   * 16MB == 4096 pages reserved for guest to use as a region to map its
   * grant table in.
@@ -476,6 +489,11 @@ typedef uint64_t xen_callback_t;
  #define GUEST_MAGIC_BASE  xen_mk_ullong(0x39000000)
  #define GUEST_MAGIC_SIZE  xen_mk_ullong(0x01000000)
+/* 64 MB is reserved for virtio-pci Prefetch memory */

This doesn't seem like a lot depending on your use case. Can you detail how you came up with "64 MB"?

+#define GUEST_VIRTIO_PCI_ADDR_TYPE_PREFETCH_MEM    xen_mk_ullong(0x42000000)
+#define GUEST_VIRTIO_PCI_PREFETCH_MEM_ADDR         xen_mk_ullong(0x3a000000)
+#define GUEST_VIRTIO_PCI_PREFETCH_MEM_SIZE         xen_mk_ullong(0x04000000)
+
  #define GUEST_RAM_BANKS   2
/*
@@ -515,6 +533,9 @@ typedef uint64_t xen_callback_t;
  #define GUEST_VIRTIO_MMIO_SPI_FIRST   33
  #define GUEST_VIRTIO_MMIO_SPI_LAST    43
+#define GUEST_VIRTIO_PCI_SPI_FIRST 44
+#define GUEST_VIRTIO_PCI_SPI_LAST    76

Just to confirm this is 4 interrupts per PCI host bridge? If so, can this be clarified in a comment?

Also, above you said that the host ID will be written to Xenstore. But who will decide which SPI belongs to which host bridge?

Cheers,

--
Julien Grall


