RE: [PATCH 04/37] xen: introduce an arch helper for default dma zone status
Hi Jan,

> -----Original Message-----
> From: Jan Beulich <jbeulich@xxxxxxxx>
> Sent: 18 January 2022 22:16
> To: Wei Chen <Wei.Chen@xxxxxxx>
> Cc: Bertrand Marquis <Bertrand.Marquis@xxxxxxx>; xen-devel@xxxxxxxxxxxxxxxxxxxx; sstabellini@xxxxxxxxxx; julien@xxxxxxx
> Subject: Re: [PATCH 04/37] xen: introduce an arch helper for default dma zone status
>
> On 18.01.2022 10:20, Wei Chen wrote:
> >> From: Jan Beulich <jbeulich@xxxxxxxx>
> >> Sent: 18 January 2022 16:16
> >>
> >> On 18.01.2022 08:51, Wei Chen wrote:
> >>>> From: Jan Beulich <jbeulich@xxxxxxxx>
> >>>> Sent: 18 January 2022 0:11
> >>>> On 23.09.2021 14:02, Wei Chen wrote:
> >>>>> In the current code, when Xen is running on a multi-node NUMA
> >>>>> system, it sets dma_bitsize in end_boot_allocator to reserve
> >>>>> some low-address memory for DMA.
> >>>>>
> >>>>> There are some x86 implications in the current implementation,
> >>>>> because on x86 memory starts from 0. On a multi-node NUMA
> >>>>> system, a single node may contain the majority or all of the
> >>>>> DMA memory. x86 prefers to give out memory from non-local
> >>>>> allocations rather than exhausting the DMA memory ranges.
> >>>>> Hence x86 uses dma_bitsize to set aside some largely arbitrary
> >>>>> amount of memory for DMA memory ranges. Allocations from these
> >>>>> memory ranges happen only after all other nodes' memory is
> >>>>> exhausted.
> >>>>>
> >>>>> But these implications are not shared across all
> >>>>> architectures. For example, Arm doesn't have them. So in this
> >>>>> patch we introduce an arch_have_default_dmazone helper for an
> >>>>> architecture to determine whether it needs to set dma_bitsize
> >>>>> to reserve memory for DMA allocations.
> >>>>
> >>>> How would Arm guarantee availability of memory below a certain
> >>>> boundary for limited-capability devices? Or is there no need
> >>>> because there's an assumption that I/O for such devices would
> >>>> always pass through an IOMMU, lifting address size restrictions?
> >>>> (I guess in a !PV build on x86 we could also get rid of such a
> >>>> reservation.)
> >>>
> >>> On Arm, we can still have some devices with limited DMA
> >>> capability, and we also don't force all such devices to use an
> >>> IOMMU. These devices affect dma_bitsize. For example, the RPi
> >>> platform sets its dma_bitsize to 30. But on a multi-node NUMA
> >>> system, Arm doesn't have a default DMA zone. Multiple nodes are
> >>> not a constraint on dma_bitsize. Some previous discussions are
> >>> placed here [1].
> >>
> >> I'm afraid that doesn't give me more clues. For example, in the
> >> mail being replied to there I find "That means, only first 4GB
> >> memory can be used for DMA." Yet that's not an implication from
> >> setting dma_bitsize. DMA is fine to occur to any address. The
> >> special address range is being held back in case in particular
> >> Dom0 is in need of such a range to perform I/O to _some_ devices.
> >
> > I am sorry that my last reply hasn't given you more clues. On Arm,
> > only Dom0 can have DMA without an IOMMU. So when we allocate
> > memory for Dom0, we try to allocate memory under 4GB or in the
> > range indicated by dma_bitsize. I think these operations match
> > your Dom0 special address range description above. As we have
> > already allocated memory for DMA, I think we don't need a DMA zone
> > in page allocation. I am not sure whether that answers your
> > earlier question.
>
> I view all of this as flawed, or as a workaround at best. Xen
> shouldn't make assumptions on what Dom0 may need.
> Instead Dom0 should make arrangements such that it can do I/O
> to/from all devices of interest. This may involve arranging for
> address-restricted buffers. And for this to be possible, Xen would
> need to have available some suitable memory. I understand this is
> complicated by the fact that despite being HVM-like, due to the lack
> of an IOMMU in front of certain devices, address restrictions on
> Dom0 address space alone (i.e. without any Xen involvement) won't
> help ...
>

I agree with you that the current implementation is probably the best
kind of workaround. Do you have some suggestions for this patch to
address the above comments? Or should I just modify the commit log to
include some of the discussion above?

Thanks,
Wei Chen

> Jan
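For context, the helper under discussion gates the existing dma_bitsize fallback in Xen's boot allocator. A minimal sketch of that shape follows; the file locations and exact expressions are illustrative assumptions, not necessarily what the posted patch contains:

    /* x86 side (illustrative): keep the existing behaviour and reserve a
     * low DMA range on multi-node NUMA systems. */
    #define arch_have_default_dmazone() (num_online_nodes() > 1)

    /* Arm side (illustrative): Dom0 memory below 4GB (or per dma_bitsize)
     * is allocated up front, so no default DMA zone is wanted here. */
    #define arch_have_default_dmazone() (0)

    /* In end_boot_allocator() (illustrative): the multi-node check is
     * replaced by the per-arch helper. */
    if ( !dma_bitsize && arch_have_default_dmazone() )
        dma_bitsize = arch_get_dma_bitsize();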