
Re: [RFC PATCH 2/6] xen/public: arch-arm: reserve resources for virtio-pci


  • To: Julien Grall <julien@xxxxxxx>, Oleksandr Tyshchenko <Oleksandr_Tyshchenko@xxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • From: Sergiy Kibrik <sergiy_kibrik@xxxxxxxx>
  • Date: Fri, 17 Nov 2023 15:19:15 +0200
  • Cc: Stefano Stabellini <sstabellini@xxxxxxxxxx>, Bertrand Marquis <bertrand.marquis@xxxxxxx>, Michal Orzel <michal.orzel@xxxxxxx>, Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>
  • Delivery-date: Fri, 17 Nov 2023 13:19:44 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

hi Julien, Oleksandr,

[..]


This patch series only covers use-cases where the device emulator
handles the *entire* PCI Host bridge and the PCI (virtio-pci) devices behind
it (i.e. QEMU). Also, this patch series doesn't touch vPCI/PCI
pass-through at all: no resources, no handling, no accounting.

I understood you want one Device Emulator to handle the entire PCI host bridge. But...

From the
hypervisor we only need help to intercept the config space accesses
that happen in the range [GUEST_VIRTIO_PCI_ECAM_BASE ...
GUEST_VIRTIO_PCI_ECAM_BASE + GUEST_VIRTIO_PCI_TOTAL_ECAM_SIZE] and
forward them to the linked device emulator (if any), that's all.

... I really don't see why you need to add code in Xen to trap the region. If QEMU is dealing with the host bridge, then it should be able to register the MMIO region and then do the translation.
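
(For the archives, a rough sketch of that DM-side registration through the existing libxendevicemodel interface -- the handle/server variable names here are illustrative, not taken from the series:)

    /* Sketch: the device model claims the whole virtio-pci ECAM window
     * for its IOREQ server, so Xen's generic IOREQ machinery forwards
     * guest config-space accesses to it without new trap code in Xen. */
    rc = xendevicemodel_map_io_range_to_ioreq_server(
             dmod, domid, ioservid, 1 /* MMIO, not port I/O */,
             GUEST_VIRTIO_PCI_ECAM_BASE,
             GUEST_VIRTIO_PCI_ECAM_BASE
                 + GUEST_VIRTIO_PCI_TOTAL_ECAM_SIZE - 1);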


[..]

I am afraid we cannot end up exposing only a single PCI Host bridge with
the current model (if we use device emulators running in different domains,
each handling the *entire* PCI Host bridge); this won't work.

That makes sense and it is fine. But see above: I think only patch #2 is necessary for the hypervisor. Patch #5 should not be necessary at all.

[...]

I did checks w/o patch #5 and can confirm that indeed QEMU & Xen can do
this work without additional modifications to the QEMU code, so I'll drop
this patch from the series.

[..]
+/*
+ * 16 MB is reserved for virtio-pci configuration space based on the
+ * calculation:
+ * 8 bridges x 2 buses x 32 devices x 8 functions x 4 KB = 16 MB

Can you explain how you decided on the "2"?

good question. We have limited free space available in the memory layout
(we had difficulties finding suitable holes), and we don't expect a
lot of virtio-pci devices, so the "256" buses used by vPCI would be too much.
It was decided to reduce that significantly, but to select the maximum that
fits into the free space; with "2" buses we still fit into the chosen holes.

If you don't expect a lot of virtio devices, then why do you need two buses? Wouldn't one be sufficient?


one should be reasonably sufficient, I agree
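
(Spelling out the arithmetic behind the quoted comment, so the trade-off is explicit:)

    /*
     * Per bus:    32 devices x 8 functions x 4 KB = 1 MB
     * Per bridge: 2 buses x 1 MB = 2 MB    (GUEST_VIRTIO_PCI_HOST_ECAM_SIZE)
     * Total:      8 bridges x 2 MB = 16 MB (GUEST_VIRTIO_PCI_TOTAL_ECAM_SIZE)
     *
     * With a single bus per bridge this would shrink to 8 MB total.
     */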




+ */
+#define GUEST_VIRTIO_PCI_ECAM_BASE          xen_mk_ullong(0x33000000)
+#define GUEST_VIRTIO_PCI_TOTAL_ECAM_SIZE    xen_mk_ullong(0x01000000)
+#define GUEST_VIRTIO_PCI_HOST_ECAM_SIZE     xen_mk_ullong(0x00200000)
+
+/* 64 MB is reserved for virtio-pci memory */
+#define GUEST_VIRTIO_PCI_ADDR_TYPE_MEM    xen_mk_ullong(0x02000000)
+#define GUEST_VIRTIO_PCI_MEM_ADDR         xen_mk_ullong(0x34000000)
+#define GUEST_VIRTIO_PCI_MEM_SIZE         xen_mk_ullong(0x04000000)
+
   /*
    * 16MB == 4096 pages reserved for guest to use as a region to map its
    * grant table in.
@@ -476,6 +489,11 @@ typedef uint64_t xen_callback_t;
   #define GUEST_MAGIC_BASE  xen_mk_ullong(0x39000000)
   #define GUEST_MAGIC_SIZE  xen_mk_ullong(0x01000000)
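
(Putting the quoted constants side by side -- assuming I read the surrounding layout right and the grant-table region mentioned in the context above sits at 0x38000000 -- the new windows exactly fill the hole below GUEST_MAGIC_BASE:)

    0x33000000 - 0x33ffffff  virtio-pci ECAM      (16 MB)
    0x34000000 - 0x37ffffff  virtio-pci memory    (64 MB)
    0x38000000 - 0x38ffffff  grant-table region   (16 MB)
    0x39000000 - 0x39ffffff  GUEST_MAGIC          (16 MB)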
+/* 64 MB is reserved for virtio-pci Prefetch memory */

This doesn't seem like a lot, depending on your use case. Can you detail how
you came up with "64 MB"?

the same calculation as was done for the configuration space. It was observed
that only 16K is used per virtio-pci device (maybe it can be bigger for a
usual PCI device, I don't know). Please look at the example DomU log
below (the lines that contain "*BAR 4: assigned*").

What about virtio-gpu? I would expect a bit more memory to be necessary for that use case.

In any case, what I am looking for is some explanation of the limits in the commit message. I don't particularly care about the exact limit because this is not part of a stable ABI.

sure, I'll put a bit more explanation in both the comment and the commit message. Should I post an updated patch series, with updated resources and without patch #5, or shall we wait for more comments here?

--
regards,
 Sergiy



 

