Re: [PATCH v4 09/16] dma-mapping: handle MMIO flow in dma_map|unmap_page
- To: Leon Romanovsky <leon@xxxxxxxxxx>
- From: Jason Gunthorpe <jgg@xxxxxxxxxx>
- Date: Thu, 28 Aug 2025 12:17:30 -0300
- Cc: Marek Szyprowski <m.szyprowski@xxxxxxxxxxx>, Leon Romanovsky <leonro@xxxxxxxxxx>, Abdiel Janulgue <abdiel.janulgue@xxxxxxxxx>, Alexander Potapenko <glider@xxxxxxxxxx>, Alex Gaynor <alex.gaynor@xxxxxxxxx>, Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>, Christoph Hellwig <hch@xxxxxx>, Danilo Krummrich <dakr@xxxxxxxxxx>, iommu@xxxxxxxxxxxxxxx, Jason Wang <jasowang@xxxxxxxxxx>, Jens Axboe <axboe@xxxxxxxxx>, Joerg Roedel <joro@xxxxxxxxxx>, Jonathan Corbet <corbet@xxxxxxx>, Juergen Gross <jgross@xxxxxxxx>, kasan-dev@xxxxxxxxxxxxxxxx, Keith Busch <kbusch@xxxxxxxxxx>, linux-block@xxxxxxxxxxxxxxx, linux-doc@xxxxxxxxxxxxxxx, linux-kernel@xxxxxxxxxxxxxxx, linux-mm@xxxxxxxxx, linux-nvme@xxxxxxxxxxxxxxxxxxx, linuxppc-dev@xxxxxxxxxxxxxxxx, linux-trace-kernel@xxxxxxxxxxxxxxx, Madhavan Srinivasan <maddy@xxxxxxxxxxxxx>, Masami Hiramatsu <mhiramat@xxxxxxxxxx>, Michael Ellerman <mpe@xxxxxxxxxxxxxx>, "Michael S. Tsirkin" <mst@xxxxxxxxxx>, Miguel Ojeda <ojeda@xxxxxxxxxx>, Robin Murphy <robin.murphy@xxxxxxx>, rust-for-linux@xxxxxxxxxxxxxxx, Sagi Grimberg <sagi@xxxxxxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Steven Rostedt <rostedt@xxxxxxxxxxx>, virtualization@xxxxxxxxxxxxxxx, Will Deacon <will@xxxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxx
- Delivery-date: Thu, 28 Aug 2025 15:18:06 +0000
- List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
On Tue, Aug 19, 2025 at 08:36:53PM +0300, Leon Romanovsky wrote:
> From: Leon Romanovsky <leonro@xxxxxxxxxx>
>
> Extend base DMA page API to handle MMIO flow and follow
> existing dma_map_resource() implementation to rely on dma_map_direct()
> only to take DMA direct path.
I would reword this a little bit too:
dma-mapping: implement DMA_ATTR_MMIO for dma_(un)map_page_attrs()
Make dma_map_page_attrs() and dma_unmap_page_attrs() respect
DMA_ATTR_MMIO.
DMA_ATTR_MMIO makes the functions behave the same as dma_(un)map_resource():
- No swiotlb is possible
- Legacy dma_ops arches use ops->map_resource()
- No kmsan
- No arch_dma_map_phys_direct()
The prior patches have made the internal functions called here support
DMA_ATTR_MMIO.
This is also preparation for turning dma_map_resource() into an inline
wrapper calling dma_map_phys() with DMA_ATTR_MMIO to consolidate the flows.
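For that later consolidation I'd imagine something roughly like the below
(just a sketch on my side; it assumes dma_map_phys()/dma_unmap_phys() end up
with the same parameter list as the existing resource helpers, which the
rest of the series may still change):

	static inline dma_addr_t dma_map_resource(struct device *dev,
			phys_addr_t phys_addr, size_t size,
			enum dma_data_direction dir, unsigned long attrs)
	{
		/* MMIO phys_addr, so force the DMA_ATTR_MMIO flow */
		return dma_map_phys(dev, phys_addr, size, dir,
				    attrs | DMA_ATTR_MMIO);
	}

	static inline void dma_unmap_resource(struct device *dev,
			dma_addr_t dma_handle, size_t size,
			enum dma_data_direction dir, unsigned long attrs)
	{
		dma_unmap_phys(dev, dma_handle, size, dir,
			       attrs | DMA_ATTR_MMIO);
	}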
> @@ -166,14 +167,25 @@ dma_addr_t dma_map_page_attrs(struct device *dev, struct page *page,
>  		return DMA_MAPPING_ERROR;
>
>  	if (dma_map_direct(dev, ops) ||
> -	    arch_dma_map_phys_direct(dev, phys + size))
> +	    (!is_mmio && arch_dma_map_phys_direct(dev, phys + size)))
>  		addr = dma_direct_map_phys(dev, phys, size, dir, attrs);
PPC is the only user of arch_dma_map_phys_direct() and it looks like
it should be called on MMIO memory. Seems like another inconsistency
with map_resource. I'd leave it like the above though for this series.
>  	else if (use_dma_iommu(dev))
>  		addr = iommu_dma_map_phys(dev, phys, size, dir, attrs);
> -	else
> +	else if (is_mmio) {
> +		if (!ops->map_resource)
> +			return DMA_MAPPING_ERROR;
Probably written like:
	if (ops->map_resource)
		addr = ops->map_resource(dev, phys, size, dir, attrs);
	else
		addr = DMA_MAPPING_ERROR;
As I think part of the design here is to run the trace even on the
failure path?
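ie have the is_mmio branch leave addr set and fall through to the common
exit so the tracepoint still fires on failure - very roughly (sketch of my
reading only; the trace/debug helper names and their argument order are
assumed from how the series renames the existing _page helpers):

	else if (is_mmio) {
		if (ops->map_resource)
			addr = ops->map_resource(dev, phys, size, dir, attrs);
		else
			addr = DMA_MAPPING_ERROR;
	} else {
		/* existing ops->map_page() fallback stays as-is */
		...
	}

	/* reached even when addr == DMA_MAPPING_ERROR, so the failure
	 * still shows up in the tracepoint and dma-debug */
	trace_dma_map_phys(dev, phys, addr, size, dir, attrs);
	debug_dma_map_phys(dev, phys, size, dir, addr, attrs);

	return addr;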
Otherwise looks OK
Reviewed-by: Jason Gunthorpe <jgg@xxxxxxxxxx>
Jason