Re: [Xen-devel] Question about DMA on 1:1 mapping dom0 of arm64
On Mon, 2015-04-20 at 11:38 +0100, Stefano Stabellini wrote:
> On Mon, 20 Apr 2015, Ian Campbell wrote:
> > On Mon, 2015-04-20 at 10:58 +0100, Stefano Stabellini wrote:
> > > On Sat, 18 Apr 2015, Chen Baozi wrote:
> > > > On Fri, Apr 17, 2015 at 05:13:16PM +0100, Stefano Stabellini wrote:
> > > > > On Fri, 17 Apr 2015, Ian Campbell wrote:
> > > > > > On Fri, 2015-04-17 at 15:34 +0100, Stefano Stabellini wrote:
> > > > > > > > > If I set dom0_mem to a small value (e.g. 512M), which makes
> > > > > > > > > all physical memory of dom0 below 4G, everything goes fine.
> > > > > > > >
> > > > > > > > So you are getting allocated memory below 4G?
> > > > > > > >
> > > > > > > > Your message on IRC suggested you weren't; did you hack around
> > > > > > > > this?
> > > > > > > >
> > > > > > > > I think we have two options: either xen_swiotlb_init allocates
> > > > > > > > pages below 4GB (e.g. __GFP_DMA), or we do something to allow
> > > > > > > > xen_swiotlb_fixup to actually work even on a 1:1 dom0.
> > > > > > >
> > > > > > > I don't think that making xen_swiotlb_fixup work on ARM is a good
> > > > > > > idea: it would break the 1:1.
> > > > > >
> > > > > > This would actually work though, I think, because this is the
> > > > > > swiotlb, so we definitely have the opportunity to return the actual
> > > > > > DMA address whenever we use this buffer, and the device will use it
> > > > > > in the right places for sure.
> > > > >
> > > > > The code is pretty complex as is -- I would rather avoid adding more
> > > > > complexity to it. For example, we would need to bring back a mechanism
> > > > > to track dma address -> pseudo-physical address mappings on ARM, even
> > > > > though it would be far simpler, of course.
> > > > >
> > > > > Also, I think it makes sense to use the swiotlb buffer for its
> > > > > original purpose.
> > > > >
> > > > > If we could introduce a mechanism to get a lower-than-4G buffer in
> > > > > dom0, but matching the 1:1, I think it would make the maintenance
> > > > > much easier on the Linux side.
> > > >
> > > > +1
> > > >
> > > > Actually, we already have a mechanism on arm32 to populate at least
> > > > one bank of memory below 4G. Thus, the only thing we have to do on the
> > > > hypervisor side is to make arm32 and arm64 share the same code path in
> > > > allocate_memory_11(), removing the 'lowmem = is_32bit_domain(d)'
> > > > related conditions. If this is acceptable, the only thing we need to
> > > > do in the Linux kernel is to add the __GFP_DMA flag when allocating
> > > > pages for xen_io_tlb_start in xen_swiotlb_init.
> > >
> > > Please send out the Linux patch using __GFP_DMA and I'll queue it up.
> >
> > What happens with __GFP_DMA if no suitable memory is available (i.e. all
> > of RAM is >4GB)?
>
> __get_free_pages would fail and xen_swiotlb_init would try again with a
> smaller size and print a warning.
>
> If no RAM under 4G is available,

This is always going to be the case on e.g. X-Gene, where all RAM is >4G
(it starts at 128GB, IIRC). IOW, just doing it like this is going to break
on some arm64 platforms.

> xen_swiotlb_init will fail with an error. However, it is probably better
> to fail explicitly with an error message than to fail with a stack trace
> at some point down the line when DMA is actually done.

Ian.
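
The Linux-side change discussed above is small: have xen_swiotlb_init() request
its bounce buffer (xen_io_tlb_start) with __GFP_DMA, so that on a 1:1-mapped
dom0 the pages come from the below-4GB bank the hypervisor provides. A minimal
sketch of that idea, using a hypothetical helper name and a retry-with-smaller-
order loop paraphrased from the behaviour Stefano describes (not the literal
drivers/xen/swiotlb-xen.c code):

#include <linux/gfp.h>
#include <linux/printk.h>

/*
 * Hypothetical helper (name and shape are illustrative, not the upstream
 * code): allocate the swiotlb-xen bounce buffer with __GFP_DMA so that a
 * 1:1-mapped dom0 gets pages below 4GB, shrinking the order if the DMA
 * zone cannot satisfy the full request, as described in the thread.
 */
static void *xen_alloc_io_tlb(unsigned int order)
{
	void *va;

	for (;;) {
		va = (void *)__get_free_pages(__GFP_NOWARN | __GFP_DMA, order);
		if (va || order == 0)
			break;
		order--;	/* retry with a smaller bounce buffer */
	}

	if (!va)
		pr_err("swiotlb-xen: no memory below 4GB for the bounce buffer\n");

	return va;
}

If the loop bottoms out, failing here with a clear message is exactly the
explicit failure mode argued for above, rather than a stack trace when DMA is
first attempted.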
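
On the hypervisor side, the proposal is to stop gating the low-memory bank on
is_32bit_domain(d) in allocate_memory_11() (xen/arch/arm/domain_build.c), so an
arm64 dom0 is also guaranteed at least one bank below 4GB. A before/after sketch
of just that policy decision, with hypothetical function names (the real change
would be an edit inside allocate_memory_11() itself):

#include <stdbool.h>

/* Current policy, as quoted in the thread ('lowmem = is_32bit_domain(d)'):
 * only a 32-bit dom0 is guaranteed a RAM bank below 4GB. */
static bool lowmem_bank_current(bool dom0_is_32bit)
{
	return dom0_is_32bit;
}

/* Proposed policy: every 1:1-mapped dom0, arm64 included, gets at least one
 * bank below 4GB, so Linux can back its swiotlb with __GFP_DMA memory even
 * on boards such as X-Gene, where the bulk of RAM sits above 4GB. */
static bool lowmem_bank_proposed(bool dom0_is_32bit)
{
	(void)dom0_is_32bit;	/* the 32-bit condition is dropped */
	return true;
}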