 
	
RE: [PATCH 04/37] xen: introduce an arch helper for default dma zone status
 Hi Jan,
> -----Original Message-----
> From: Jan Beulich <jbeulich@xxxxxxxx>
> Sent: 18 January 2022 0:11
> To: Wei Chen <Wei.Chen@xxxxxxx>
> Cc: Bertrand Marquis <Bertrand.Marquis@xxxxxxx>; xen-
> devel@xxxxxxxxxxxxxxxxxxxx; sstabellini@xxxxxxxxxx; julien@xxxxxxx
> Subject: Re: [PATCH 04/37] xen: introduce an arch helper for default dma
> zone status
> 
> I realize this series has been pending for a long time, but I don't
> recall any indication that it would have been dropped. Hence as a
> first try, a few comments on this relatively simple change. I'm
> sorry it has taken so long to get to it.
> 
Thanks for reviewing this series and picking it up again. We are still
working on it and will send a new version soon.
> On 23.09.2021 14:02, Wei Chen wrote:
> > In current code, when Xen is running in a multiple nodes NUMA
> > system, it will set dma_bitsize in end_boot_allocator to reserve
> > some low address memory for DMA.
> >
> > There are some x86 implications in the current implementation,
> > because on x86 memory starts from 0. On a multi-node NUMA system,
> > if a single node contains the majority or all of the DMA memory,
> > x86 prefers to give out memory from non-local allocations rather
> > than exhausting the DMA memory ranges. Hence x86 uses dma_bitsize
> > to set aside some largely arbitrary amount of memory for DMA
> > memory ranges. The allocations from these memory ranges would
> > happen only after exhausting all other nodes' memory.
> >
> > But these implications are not shared across all architectures;
> > Arm, for example, does not have them. So in this patch, we
> > introduce an arch_have_default_dmazone helper that lets each arch
> > determine whether it needs to set dma_bitsize to reserve memory
> > for DMA allocations.
> 
> How would Arm guarantee availability of memory below a certain
> boundary for limited-capability devices? Or is there no need
> because there's an assumption that I/O for such devices would
> always pass through an IOMMU, lifting address size restrictions?
> (I guess in a !PV build on x86 we could also get rid of such a
> reservation.)
On Arm, we can still have devices with limited DMA capability, and we
don't force all such devices to go through an IOMMU. These devices do
affect dma_bitsize; the RPi platform, for example, sets its dma_bitsize
to 30. But on a multi-node NUMA system, Arm doesn't have a default
DMA zone: having multiple nodes is not a constraint on dma_bitsize.
Some previous discussion can be found here [1].
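Just to make the RPi number above concrete: dma_bitsize = 30 means the
most limited DMA masters can only address the low 1 GiB, so that is the
boundary the allocator tries to keep available for them. A tiny
standalone illustration of the arithmetic (not Xen code, just showing
what the value means):

    /* Standalone illustration of what a given dma_bitsize means:
     * it is the number of physical address bits the most limited
     * DMA master can drive, so 1 << dma_bitsize is the boundary
     * below which memory should be preserved for such devices. */
    #include <stdio.h>

    int main(void)
    {
        unsigned int dma_bitsize = 30;           /* the RPi value above */
        unsigned long long limit = 1ULL << dma_bitsize;

        /* dma_bitsize = 30 -> only the low 1 GiB is DMA-able. */
        printf("dma_bitsize %u -> DMA-able memory is below %llu MiB\n",
               dma_bitsize, limit >> 20);
        return 0;
    }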
> 
> > --- a/xen/arch/x86/numa.c
> > +++ b/xen/arch/x86/numa.c
> > @@ -371,6 +371,11 @@ unsigned int __init arch_get_dma_bitsize(void)
> >                   + PAGE_SHIFT, 32);
> >  }
> >
> > +unsigned int arch_have_default_dmazone(void)
> > +{
> > +    return ( num_online_nodes() > 1 ) ? 1 : 0;
> > +}
> 
> According to the expression and ...
> 
> > --- a/xen/common/page_alloc.c
> > +++ b/xen/common/page_alloc.c
> > @@ -1889,7 +1889,7 @@ void __init end_boot_allocator(void)
> >      }
> >      nr_bootmem_regions = 0;
> >
> > -    if ( !dma_bitsize && (num_online_nodes() > 1) )
> > +    if ( !dma_bitsize && arch_have_default_dmazone() )
> >          dma_bitsize = arch_get_dma_bitsize();
> 
> ... the use site, you mean the function to return boolean. Please
> indicate so by making it have a return type of "bool". Independent
> of that you don't need a conditional expression above, nor
> (malformed) use of parentheses. I further wonder whether ...
> 
I will fix them in the next version. But I am not very clear about
the comment "of that you don't need a conditional expression above".
Does the "above" refer to this line:
"return ( num_online_nodes() > 1 ) ? 1 : 0;"?
> > --- a/xen/include/asm-arm/numa.h
> > +++ b/xen/include/asm-arm/numa.h
> > @@ -25,6 +25,11 @@ extern mfn_t first_valid_mfn;
> >  #define node_start_pfn(nid) (mfn_x(first_valid_mfn))
> >  #define __node_distance(a, b) (20)
> >
> > +static inline unsigned int arch_have_default_dmazone(void)
> > +{
> > +    return 0;
> > +}
> 
> ... like this one, x86's couldn't be inline as well. If indeed
> it can't be, making it a macro may still be better (and avoid a
> further comment regarding the lack of __init).
Ok, that would be better. I will do it in the next version.
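Something like the below is what I have in mind for the next version
(only a sketch; the exact headers and final form may differ):

    /* In asm-arm/numa.h: Arm never wants a default DMA zone. */
    #define arch_have_default_dmazone()  false

    /* In the x86 header: reserve a DMA range only on multi-node
     * systems, matching the current num_online_nodes() check. */
    #define arch_have_default_dmazone()  (num_online_nodes() > 1)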
> 
> Jan
[1] https://lists.xenproject.org/archives/html/xen-devel/2021-08/msg00772.html
 
 