Re: [Xen-devel] Question about DMA on 1:1 mapping dom0 of arm64
On Sat, 2015-04-18 at 17:23 +0800, Chen Baozi wrote:
> On Sat, Apr 18, 2015 at 05:08:58PM +0800, Chen Baozi wrote:
> > On Fri, Apr 17, 2015 at 05:13:16PM +0100, Stefano Stabellini wrote:
> > > On Fri, 17 Apr 2015, Ian Campbell wrote:
> > > > On Fri, 2015-04-17 at 15:34 +0100, Stefano Stabellini wrote:
> > > > > > > If I set dom0_mem to a small value (e.g. 512M), which makes all
> > > > > > > physical memory of dom0 below 4G, everything goes fine.
> > > > > >
> > > > > > So you are getting allocated memory below 4G?
> > > > > >
> > > > > > Your message on IRC suggested you weren't, did you hack around this?
> > > > > >
> > > > > > I think we have two options: either xen_swiotlb_init allocates pages
> > > > > > below 4GB (e.g. __GFP_DMA), or we do something to allow
> > > > > > xen_swiotlb_fixup to actually work even on a 1:1 dom0.
> > > > >
> > > > > I don't think that making xen_swiotlb_fixup work on ARM is a good
> > > > > idea: it would break the 1:1.
> > > >
> > > > This would actually work though, I think, because this is the swiotlb,
> > > > so we definitely have the opportunity to return the actual DMA address
> > > > whenever we use this buffer, and the device will use it in the right
> > > > places for sure.
> > >
> > > The code is pretty complex as is -- I would rather avoid adding more
> > > complexity to it. For example, we would need to bring back a mechanism
> > > to track dma address -> pseudo-physical address mappings on arm, even
> > > though it would be far simpler of course.
> > >
> > > Also, I think it makes sense to use the swiotlb buffer for its original
> > > purpose.
> > >
> > > If we could introduce a mechanism to get a lower-than-4G buffer in dom0,
> > > but matching the 1:1, I think it would make the maintenance much easier
> > > on the Linux side.
> >
> > +1
> >
> > Actually, we already have a mechanism on arm32 to populate at least
> > one bank of memory below 4G. Thus, the only thing we have to do on the
> > hypervisor side is to make arm32 and arm64 share the same path in
> > allocate_memory_11(), removing the 'lowmem = is_32bit_domain(d)' related
> > conditions. If this is acceptable,

Ah yes, I'd forgotten we already handled this for 32-bit, so enabling it
for 64-bit does indeed make sense.

However, what you do is unfortunately not quite correct, since on a 64-bit
system which has no RAM at all under 4GB it will now fail to populate the
first bank and error out.

I can think of two solutions off hand:

Either: initialise lowmem to (is_32bit_domain(d) || ram_exists_below_4g())
(which in practice is equivalent to ram_exists_below_4g(), since that is
always going to be true for a 32-bit system).

Or: allow the initial bank 0 filling to also fall back to non-lowmem (on
64-bit only), as the code for the subsequent banks already does.

The first option is tricky because I'm not sure what ram_exists_below_4g()
would look like inside, i.e. whether it can be queried from the page
allocator or whether we would need to make a note of this fact when
parsing/setting up memory early on.

The second one is tricky because it opens up the possibility of ploughing
on if lowmem can't be allocated, even if it is strictly required on the
platform.

I think on balance the first would be better.

> > the only thing we need to do in the Linux
> > kernel is to add the __GFP_DMA flag when allocating pages for
> > xen_io_tlb_start in xen_swiotlb_init.
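For illustration only, the kind of Linux-side change being suggested here
might look roughly like the snippet below. It is a sketch against the page
allocation done for xen_io_tlb_start in drivers/xen/swiotlb-xen.c, with the
retry/size-reduction loop and error handling left out; the helper name
xen_alloc_io_tlb() is made up for the example, and this is not the actual
patch:

#include <linux/gfp.h>
#include <linux/mm.h>

/*
 * Sketch: allocate the swiotlb bounce buffer from ZONE_DMA so that a
 * 1:1-mapped dom0 ends up with a buffer whose machine addresses are
 * below 4GB. On arm64, at the time of this thread, ZONE_DMA covers the
 * first 4GB, so devices limited to 32-bit DMA can use the buffer
 * without any further fixup of the mapping.
 */
static void *xen_alloc_io_tlb(unsigned int order)
{
        return (void *)__get_free_pages(GFP_KERNEL | __GFP_NOWARN | __GFP_DMA,
                                        order);
}

xen_io_tlb_start would then point at DMA-zone pages, and the rest of
xen_swiotlb_init() could in principle stay unchanged.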

> These are the hacks I'm using:
> 
> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index 40a5303..d83a2b1 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -264,7 +264,7 @@ static void allocate_memory_11(struct domain *d, struct kernel_info *kinfo)
>      unsigned int order = get_11_allocation_size(kinfo->unassigned_mem);
>      int i;
>  
> -    bool_t lowmem = is_32bit_domain(d);
> +    bool_t lowmem = true;
>      unsigned int bits;
>  
>      printk("Allocating 1:1 mappings totalling %ldMB for dom0:\n",
> @@ -279,7 +279,7 @@ static void allocate_memory_11(struct domain *d, struct kernel_info *kinfo)
>       */
>      while ( order >= min_low_order )
>      {
> -        for ( bits = order ; bits <= (lowmem ? 32 : PADDR_BITS); bits++ )
> +        for ( bits = order ; bits <= 32 ; bits++ )

I think leave this as is, even if lowmem ends up being const.

Ian.
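To make the first option above concrete, here is a rough sketch of what it
might look like in xen/arch/arm/domain_build.c, assuming the boot-time RAM
bank list in bootinfo.mem is an acceptable place to answer the "is there any
RAM below 4GB?" question (which is exactly the open point raised above).
ram_exists_below_4g() is a hypothetical helper and does not exist in the
tree:

/*
 * Hypothetical helper (sketch only): report whether any host RAM bank
 * recorded at boot time starts below the 4GB boundary.
 */
static bool_t ram_exists_below_4g(void)
{
    int i;

    for ( i = 0; i < bootinfo.mem.nr_banks; i++ )
        if ( bootinfo.mem.bank[i].start < ((paddr_t)1 << 32) )
            return 1;

    return 0;
}

/* In allocate_memory_11(), instead of hard-coding lowmem to true: */
bool_t lowmem = is_32bit_domain(d) || ram_exists_below_4g();

Computing lowmem this way would also keep the existing
(lowmem ? 32 : PADDR_BITS) loop bound meaningful, which is presumably why
the suggestion is to leave that for() bound as it is.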