Re: [Xen-devel] PV drivers and zero copying
On Tue, 1 Aug 2017, Oleksandr Andrushchenko wrote:
> Hi, Stefano!
>
> On 07/31/2017 11:28 PM, Stefano Stabellini wrote:
> > On Mon, 31 Jul 2017, Oleksandr Andrushchenko wrote:
> > > 3 Sharing with page exchange (XENMEM_exchange)
> > > ==============================================
> > >
> > > This API was pointed to me by Stefano Stabellini as one of the possible
> > > ways to achieve zero copying and share physically contiguous buffers.
> > > It is used by x86 SWIOTLB code (xen_create_contiguous_region, [5]), but
> > > as per my understanding this API cannot be used on ARM as of now [6].
> > > Conclusion: not an option for ARM at the moment.
> >
> > Let me elaborate on this. The purpose of XENMEM_exchange is to exchange
> > a number of memory pages with an equal number of contiguous memory
> > pages, possibly even under 4G. The original purpose of the hypercall was
> > to get DMA-able memory.
>
> This is good to know.
>
> > So far, it has only been used by Dom0 on x86. Dom0 on ARM doesn't need
> > it because it is mapped 1:1 by default and device assignment is not
> > allowed without an IOMMU. However, it should work on ARM too, as the
> > implementation is all common code in Xen.
>
> Well, according to [6]:
> "Currently XENMEM_exchange is not supported on ARM because steal_page is
> left unimplemented.
>
> However, even if steal_page is implemented, the hypercall can't work for ARM
> because:
> - Direct mapped domain is not supported
> - ARM doesn't have an M2P and therefore usage of mfn_to_gmfn is invalid"
> And what I see at [7] is that it still returns EOPNOTSUPP.
> So, yes, the common code is usable for both ARM and x86, but the underlying
> support for ARM is still not there.
> Please correct me if I am wrong here.

Oops, I forgot about that! Implementing steal_page on ARM would not be a
problem, and direct mapped domains are not a concern in this scenario. The
issue is mfn_to_gmfn. However, we do not actually need mfn_to_gmfn to
implement xen/common/memory.c:memory_exchange, as Julien pointed out in
http://marc.info/?l=xen-devel&m=145037009127660.

Julien, Jan, two years have passed. Do you think we can find a way to make
that old series work for everybody?

> > Also, looking at the
> > implementation (xen/common/memory.c:memory_exchange) it would seem that
> > it can be called from a DomU too (but I have never tried).
>
> Good.
>
> > Thus, if you have a platform without IOMMU and you disabled the IOMMU
> > checks in Xen to assign a device to a DomU anyway, then you could use
> > this hypercall from DomU to get memory under 4G to be used for DMA with
> > this device.
>
> There is no real device assigned to DomU, but a PV frontend.
>
> > As far as I can tell XENMEM_exchange could help in the design of
> > zero-copy PV protocols only to address this specific use case:
> >
> > - you have a frontend in DomU and a backend in Dom0
> > - pages shared by DomU get mapped in Dom0 and potentially used for DMA
>
> Yes, this is crucial for zero copying in my case: DMA.
>
> > - the device has under 4G DMA restrictions
> >
> > Normally Dom0 maps a DomU page, then at the time of using the mapped
> > page for DMA it checks whether it is suitable for DMA (under 4G if the
> > device requires so). If it is not, Dom0 uses a bounce buffer borrowed
> > from the swiotlb. Obviously this introduces one or two memcpys.
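To make that cost concrete, here is an illustrative-only sketch of the
decision the backend's DMA-mapping layer ends up making for a grant-mapped
guest page. It is loosely modelled on the swiotlb behaviour described above;
the helper names, the fixed 4G limit and the single-page bounce pool are
assumptions for the example, not the real kernel API:

/*
 * Illustrative-only sketch: helper names and the 4G limit are assumptions,
 * loosely modelled on the swiotlb bounce-buffer behaviour described above.
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

typedef uint64_t dma_addr_t;

#define DEVICE_DMA_LIMIT (1ULL << 32)   /* assumed: device can only address 4G */

static uint8_t bounce_pool[4096];       /* stand-in for the swiotlb bounce pool */

/* Can the device reach this buffer directly? */
static bool device_can_reach(dma_addr_t bus_addr, size_t size)
{
        return bus_addr + size <= DEVICE_DMA_LIMIT;
}

/* Stand-in for bouncing: copy the data into memory the device can reach. */
static dma_addr_t bounce_buffer_map(const void *cpu_addr, size_t size)
{
        memcpy(bounce_pool, cpu_addr, size);    /* <-- the extra memcpy */
        return (dma_addr_t)(uintptr_t)bounce_pool;
}

/*
 * Map a grant-mapped guest page for a DMA transfer towards the device.
 * If the page sits above the device's DMA limit, the data takes a detour
 * through the bounce buffer (plus a second memcpy on completion for reads).
 */
dma_addr_t map_granted_page_for_dma(const void *cpu_addr,
                                    dma_addr_t bus_addr, size_t size)
{
        if (device_can_reach(bus_addr, size))
                return bus_addr;        /* zero-copy: DMA straight into the guest page */

        return bounce_buffer_map(cpu_addr, size);
}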
> > Instead, if DomU calls XENMEM_exchange to get memory under 4G, and
> > shares one of the pages with Dom0 via PV frontends, then Dom0 wouldn't
> > have to use a bounce buffer to do DMA to this page.
> >
> > Does it make sense?
>
> Yes, it does, thank you, but [6], [7] :(
>
> [7]
> https://xenbits.xen.org/gitweb/?p=xen.git;a=blob;f=xen/arch/arm/mm.c;h=411bab1ea9f7f14789a134056ebff9f68fd4a4c7;hb=a15516c0cf21d7ac84799f1e2e500b0bb22d2300#l1161
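For completeness, a minimal sketch of what the DomU side of that exchange
could look like, assuming the hypercall were wired up on ARM. Struct and
field names follow the Linux copy of the public Xen headers
(include/xen/interface/memory.h); the function itself is hypothetical, and
error handling plus the follow-up bookkeeping (p2m update on x86 PV,
re-granting the new frame to the backend) are left out:

/*
 * Hypothetical DomU-side helper (not existing kernel code): trade one
 * order-0 frame for a replacement frame below 4G via XENMEM_exchange, so
 * that the backend can later DMA into it without bouncing.
 */
#include <linux/errno.h>
#include <xen/interface/memory.h>
#include <asm/xen/hypercall.h>

static int exchange_one_frame_below_4g(xen_pfn_t *frame)
{
        xen_pfn_t in_frame = *frame;
        xen_pfn_t out_frame;
        struct xen_memory_exchange exchange = {
                .in = {
                        .nr_extents   = 1,
                        .extent_order = 0,
                        .domid        = DOMID_SELF,
                },
                .out = {
                        .nr_extents   = 1,
                        .extent_order = 0,
                        .address_bits = 32,     /* replacement must be below 4G */
                        .domid        = DOMID_SELF,
                },
        };
        int rc;

        set_xen_guest_handle(exchange.in.extent_start, &in_frame);
        set_xen_guest_handle(exchange.out.extent_start, &out_frame);

        rc = HYPERVISOR_memory_op(XENMEM_exchange, &exchange);
        if (rc)
                return rc;
        if (exchange.nr_exchanged != 1)
                return -ENOMEM;

        *frame = out_frame;     /* grant this frame to the backend instead */
        return 0;
}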