Re: [XEN RFC PATCH v5 3/5] xen/public: Introduce PV-IOMMU hypercall interface
- To: Teddy Astie <teddy.astie@xxxxxxxxxx>, <xen-devel@xxxxxxxxxxxxxxxxxxxx>
- From: Jason Andryuk <jason.andryuk@xxxxxxx>
- Date: Thu, 30 Jan 2025 15:17:01 -0500
- Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Jan Beulich <jbeulich@xxxxxxxx>, Julien Grall <julien@xxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Marek Marczykowski-Górecki <marmarek@xxxxxxxxxxxxxxxxxxxxxx>
- Delivery-date: Thu, 30 Jan 2025 20:17:23 +0000
- List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
Hi Teddy,
Thanks for working on this. I'm curious about your plans for this:
On 2025-01-21 11:13, Teddy Astie wrote:
+/**
+ * IOMMU_alloc_nested
+ * Create a nested IOMMU context (needs IOMMUCAP_nested).
+ *
+ * This context uses a platform-specific page table from the domain's address
+ * space, specified by pgtable_gfn, and uses it for nested translations.
+ *
+ * Explicit flushes need to be submitted with IOMMU_flush_nested after
+ * modifying the nested page table, to ensure coherency between the IOTLB
+ * and the nested page table.
+ *
+ * This context can be destroyed using IOMMU_free_context.
+ * This context cannot be modified using map_pages, unmap_pages.
+ */
+struct pv_iommu_alloc_nested {
+ /* OUT: allocated IOMMU context number */
+ uint16_t ctx_no;
+
+ /* IN: guest frame number of the nested page table */
+ uint64_aligned_t pgtable_gfn;
+
+ /* IN: nested mode flags */
+ uint64_aligned_t nested_flags;
+};
+typedef struct pv_iommu_alloc_nested pv_iommu_alloc_nested_t;
+DEFINE_XEN_GUEST_HANDLE(pv_iommu_alloc_nested_t);
Is this command intended to be used for GVA -> GPA translation? Would
you need some way to associate it with another IOMMU context for the
GPA -> HPA translation?
Maybe more broadly, what are your goals for enabling PV-IOMMU? The
examples in your blog post cover a domain restricting device access to
specific portions of the GPA space. Are you also interested in GVA
-> GPA? Does VFIO require GVA -> GPA?
And, sorry to bike shed, but ctx_no reads like "Context No" to me. I
think ctxid/ctx_id might be clearer. Others probably have their own
opinions :)
Thanks,
Jason