
Re: [PATCH RFC] xen/swiotlb: avoid arch_sync_dma_* on per-device DMA memory



+Michal

On Wed, 15 Apr 2026, Peng Fan (OSS) wrote:
> From: Peng Fan <peng.fan@xxxxxxx>
> 
> On arm64, arch_sync_dma_for_{cpu,device}() assumes that the
> physical address passed in refers to normal RAM that is part of the
> kernel linear (direct) mapping, as it unconditionally derives a CPU
> virtual address via phys_to_virt().
> 
> With Xen swiotlb, devices may use per-device coherent DMA memory,
> such as reserved-memory regions described by 'shared-dma-pool',
> which are assigned to dev->dma_mem. These regions may be marked
> no-map in DT and therefore are not part of the kernel linear map.
> In such cases, pfn_valid() still returns true, but phys_to_virt()
> is not valid and cache maintenance via arch_sync_dma_* will fault.
> 
> Prevent this by excluding devices with a private DMA memory pool
> (dev->dma_mem) from the arch_sync_dma_* fast path, and always
> fall back to xen_dma_sync_* for those devices to avoid invalid
> phys_to_virt() conversions for no-map DMA memory while preserving the
> existing fast path for normal, linear-mapped RAM.

This might not work either: the Xen-side implementation is
xen/common/grant_table.c:_cache_flush.

Could you please check? From looking at the code,
page_get_owner_and_reference might return NULL for pages that are
part of reserved-memory regions marked as no-map, in which case the
Xen hypercall would return -EPERM.



> Signed-off-by: Peng Fan <peng.fan@xxxxxxx>
> ---
>  drivers/xen/swiotlb-xen.c | 17 +++++++++++++----
>  1 file changed, 13 insertions(+), 4 deletions(-)
> 
> diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
> index 2cbf2b588f5b20cfbf9e83a8339dc22092c9559a..b1445df99d9a8f1d18a83b8c413bada6e5579209 100644
> --- a/drivers/xen/swiotlb-xen.c
> +++ b/drivers/xen/swiotlb-xen.c
> @@ -195,6 +195,11 @@ xen_swiotlb_free_coherent(struct device *dev, size_t size, void *vaddr,
>  }
>  #endif /* CONFIG_X86 */
>  
> +static inline bool dev_has_private_dma_pool(struct device *dev)
> +{
> +     return dev && dev->dma_mem;
> +}
> +
>  /*
>   * Map a single buffer of the indicated size for DMA in streaming mode.  The
>   * physical address to use is returned.
> @@ -262,7 +267,8 @@ static dma_addr_t xen_swiotlb_map_phys(struct device *dev, phys_addr_t phys,
>  
>  done:
>       if (!dev_is_dma_coherent(dev) && !(attrs & DMA_ATTR_SKIP_CPU_SYNC)) {
> -             if (pfn_valid(PFN_DOWN(dma_to_phys(dev, dev_addr)))) {
> +             if (pfn_valid(PFN_DOWN(dma_to_phys(dev, dev_addr))) &&
> +                 !dev_has_private_dma_pool(dev)) {
>                       arch_sync_dma_for_device(phys, size, dir);
>                       arch_sync_dma_flush();
>               } else {
> @@ -289,7 +295,8 @@ static void xen_swiotlb_unmap_phys(struct device *hwdev, dma_addr_t dev_addr,
>       BUG_ON(dir == DMA_NONE);
>  
>       if (!dev_is_dma_coherent(hwdev) && !(attrs & DMA_ATTR_SKIP_CPU_SYNC)) {
> -             if (pfn_valid(PFN_DOWN(dma_to_phys(hwdev, dev_addr)))) {
> +             if (pfn_valid(PFN_DOWN(dma_to_phys(hwdev, dev_addr))) &&
> +                 !dev_has_private_dma_pool(hwdev)) {
>                       arch_sync_dma_for_cpu(paddr, size, dir);
>                       arch_sync_dma_flush();
>               } else {
> @@ -312,7 +319,8 @@ xen_swiotlb_sync_single_for_cpu(struct device *dev, dma_addr_t dma_addr,
>       struct io_tlb_pool *pool;
>  
>       if (!dev_is_dma_coherent(dev)) {
> -             if (pfn_valid(PFN_DOWN(dma_to_phys(dev, dma_addr)))) {
> +             if (pfn_valid(PFN_DOWN(dma_to_phys(dev, dma_addr))) &&
> +                 !dev_has_private_dma_pool(dev)) {
>                       arch_sync_dma_for_cpu(paddr, size, dir);
>                       arch_sync_dma_flush();
>               } else {
> @@ -337,7 +345,8 @@ xen_swiotlb_sync_single_for_device(struct device *dev, dma_addr_t dma_addr,
>               __swiotlb_sync_single_for_device(dev, paddr, size, dir, pool);
>  
>       if (!dev_is_dma_coherent(dev)) {
> -             if (pfn_valid(PFN_DOWN(dma_to_phys(dev, dma_addr)))) {
> +             if (pfn_valid(PFN_DOWN(dma_to_phys(dev, dma_addr))) &&
> +                 !dev_has_private_dma_pool(dev)) {
>                       arch_sync_dma_for_device(paddr, size, dir);
>                       arch_sync_dma_flush();
>               } else {
> 
> ---
> base-commit: 66672af7a095d89f082c5327f3b15bc2f93d558e
> change-id: 20260415-xen-swiotlb-34a198b6c1d6
> 
> Best regards,
> -- 
> Peng Fan <peng.fan@xxxxxxx>
> 



 

