
Re: [PATCH v2] xen/arm: fix arm_iommu_map_page after f9f6b22ab



Hi Stewart,

On 15/07/2025 16:58, Stewart Hildebrand wrote:
On 7/14/25 18:47, Julien Grall wrote:
Hi Stewart,

On 14/07/2025 22:12, Stewart Hildebrand wrote:
On 7/12/25 06:08, Julien Grall wrote:
Hi Stefano,

On 11/07/2025 01:25, Stefano Stabellini wrote:
diff --git a/xen/drivers/passthrough/arm/iommu_helpers.c b/xen/drivers/passthrough/arm/iommu_helpers.c
index 5cb1987481..dae5fc0caa 100644
--- a/xen/drivers/passthrough/arm/iommu_helpers.c
+++ b/xen/drivers/passthrough/arm/iommu_helpers.c
@@ -36,17 +36,6 @@ int __must_check arm_iommu_map_page(struct domain *d, dfn_t dfn, mfn_t mfn,
 {
     p2m_type_t t;
 
-    /*
-     * Grant mappings can be used for DMA requests. The dev_bus_addr
-     * returned by the hypercall is the MFN (not the IPA). For device
-     * protected by an IOMMU, Xen needs to add a 1:1 mapping in the domain
-     * p2m to allow DMA request to work.
-     * This is only valid when the domain is directed mapped. Hence this
-     * function should only be used by gnttab code with gfn == mfn == dfn.
-     */
-    BUG_ON(!is_domain_direct_mapped(d));
-    BUG_ON(mfn_x(mfn) != dfn_x(dfn));
-

Shouldn't arm_iommu_unmap_page() also be updated? It would not result in a
crash, but we would not be able to remove the mapping.

f9f6b22abf1d didn't add any calls to iommu_unmap(). As this is still only
used for hwdom, and hwdom is not expected to be destroyed, the mapping is
not expected to be removed for now.

I had already gathered that from the Fixes tag when writing my previous answer.
Maybe I should have been clearer at that point. Even though iommu_unmap() is
not called today, it is meant to be the reverse of what was done by
iommu_map(). So it looks very odd to update one but not the other.

Furthermore, AFAIU, this patch is going a bit further than just fixing the bug
introduced by f9f6b22abf1d. At which point, we should properly
fix it in the same patch rather than hoping that someone else will remember
that this needs to be updated.

I'd like to suggest splitting this into two patches then, so we don't
let preparation for future work get in the way of fixing the reported
issue:

Patch #1 to fix the reported issue with a simple
s/is_domain_direct_mapped/domain_use_host_layout/ in both
arm_iommu_map_page() and arm_iommu_unmap_page().

Patch #2 to allow translated mappings in preparation for future work.

This sounds good to me.
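
To be concrete, patch #1 would then boil down to something along these lines
in arm_iommu_map_page() (only a sketch, assuming the existing BUG_ON()s are
otherwise kept as they are today):

-    BUG_ON(!is_domain_direct_mapped(d));
+    BUG_ON(!domain_use_host_layout(d));
     BUG_ON(mfn_x(mfn) != dfn_x(dfn));

together with the equivalent is_domain_direct_mapped() ->
domain_use_host_layout() substitution in the corresponding check in
arm_iommu_unmap_page().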



With that said, in the future when we expose vITS to domU, you'd be
right. In the xlnx downstream we have something like this:

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index ef8bd4b6ab33..2f5b79279ff9 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -133,7 +133,8 @@ static inline int p2m_remove_mapping(struct domain *d,
         mfn_t mfn_return = p2m_get_entry(p2m, gfn_add(start_gfn, i), &t, NULL,
                                          &cur_order, NULL);
 
-        if ( p2m_is_any_ram(t) &&
+        if ( !mfn_eq(mfn, INVALID_MFN) &&
+             p2m_is_any_ram(t) &&

I don't quite understand why you need to update this function. Can you clarify?

Since arm_iommu_unmap_page() doesn't have the mfn, we needed a way to
remove a p2m mapping without the mfn available. INVALID_MFN is a
sentinel/placeholder in lieu of the missing mfn.
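
(Roughly, the unmap side then ends up doing something like the following --
a sketch for illustration only, not the actual downstream code:

    guest_physmap_remove_page(d, _gfn(dfn_x(dfn)), INVALID_MFN, order);

hence p2m_remove_mapping() has to cope with INVALID_MFN.)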

Ah, I didn't spot that you changed the MFN passed to guest_physmap_remove_page() below. Hmmm... The code in p2m_remove_mapping() is checking the MFN to avoid any race. IIRC this is to close a race in the grant-table mapping.

So I am a bit uncomfortable with allowing the check to be bypassed when INVALID_MFN is passed. Looking at the code, I see the check is also gated on p2m_is_any_ram(). I would argue that none of the IOMMU mappings we are creating should be considered RAM (the grant mapping is arguable, but definitely not the doorbell). So I think it would be better to use a different p2m type for the IOMMU mappings.
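
Something along these lines, just to illustrate the idea (a sketch only;
p2m_iommu_doorbell is a made-up type name and the exact plumbing has not been
checked against the tree):

    /*
     * Sketch: tag the stage-2 entries created on behalf of the IOMMU
     * (e.g. the ITS doorbell) with a dedicated, non-RAM p2m type.
     * p2m_iommu_doorbell is a hypothetical name for illustration.
     */
    ret = guest_physmap_add_entry(d, gfn, mfn, 0, p2m_iommu_doorbell);

That way the p2m_is_any_ram() race check in p2m_remove_mapping() would simply
never match such entries, and there would be no need to bypass the MFN
comparison with INVALID_MFN.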

Cheers,

--
Julien Grall