Re: [PATCH 6/9] xen/arm: introduce alloc_staticmem_pages and alloc_domstatic_pages
On 06.07.2021 07:58, Penny Zheng wrote:
>> From: Jan Beulich <jbeulich@xxxxxxxx>
>> Sent: Thursday, June 10, 2021 6:23 PM
>>
>> On 07.06.2021 04:43, Penny Zheng wrote:
>>> --- a/xen/common/page_alloc.c
>>> +++ b/xen/common/page_alloc.c
>>> @@ -1065,6 +1065,75 @@ static struct page_info *alloc_heap_pages(
>>>      return pg;
>>>  }
>>>
>>> +#ifdef CONFIG_STATIC_ALLOCATION
>>> +/*
>>> + * Allocate nr_mfns contiguous pages, starting at #smfn, of static memory.
>>> + * It is the equivalent of alloc_heap_pages for static memory */
>>> +static struct page_info *alloc_staticmem_pages(unsigned long nr_mfns,
>>> +                                               mfn_t smfn,
>>> +                                               unsigned int memflags)
>>> +{
>>> +    bool need_tlbflush = false;
>>> +    uint32_t tlbflush_timestamp = 0;
>>> +    unsigned long i;
>>> +    struct page_info *pg;
>>> +
>>> +    /* For now, it only supports allocating at specified address. */
>>> +    if ( !mfn_valid(smfn) || !nr_mfns )
>>> +    {
>>> +        printk(XENLOG_ERR
>>> +               "Invalid %lu static memory starting at %"PRI_mfn"\n",
>>
>> Reading a log containing e.g. "Invalid 0 static memory starting at ..."
>> I don't think I would recognize that the "0" is the count of pages.
>
> Sure. How about "try to allocate out of range page %"PRI_mfn"\n"?

This still doesn't convey _both_ parts of the if() that would cause the
log message to be issued.

>>> +               nr_mfns, mfn_x(smfn));
>>> +        return NULL;
>>> +    }
>>> +    pg = mfn_to_page(smfn);
>>> +
>>> +    for ( i = 0; i < nr_mfns; i++ )
>>> +    {
>>> +        /*
>>> +         * Reference count must continuously be zero for free pages
>>> +         * of static memory(PGC_reserved).
>>> +         */
>>> +        ASSERT(pg[i].count_info & PGC_reserved);
>>
>> What logic elsewhere guarantees that this will hold? ASSERT()s are to
>> verify that assumptions are met. But I don't think you can sensibly
>> assume the caller knows the range is reserved (and free), or else you
>> could get away without any allocation function.
>
> The caller shall only call alloc_staticmem_pages when it knows the
> range is reserved; for now, alloc_staticmem_pages is only called in
> the context of alloc_domstatic_pages.

If the caller knows the static ranges, this isn't really "allocation".
I.e. I then question the need for "allocating" in the first place.

>>> +        if ( (pg[i].count_info & ~PGC_reserved) != PGC_state_free )
>>> +        {
>>> +            printk(XENLOG_ERR
>>> +                   "Reference count must continuously be zero for free pages"
>>> +                   "pg[%lu] MFN %"PRI_mfn" c=%#lx t=%#x\n",
>>> +                   i, mfn_x(page_to_mfn(pg + i)),
>>> +                   pg[i].count_info, pg[i].tlbflush_timestamp);
>>> +            BUG();
>>> +        }
>>
>> The same applies here at least until proper locking gets added, which
>> I guess is happening in the next patch when really it would need to
>> happen right here.
>
> Ok, I will combine the two commits, and add locking here.
> I thought the content of this commit was a little bit too much, so I
> figured adding the proper lock could be a new patch. 😉
>
>> Furthermore I don't see why you don't fold the ASSERT() and the if into
>>
>>     if ( pg[i].count_info != (PGC_state_free | PGC_reserved) )
>>
>> After all PGC_reserved is not similar to PGC_need_scrub, which
>> alloc_heap_pages() masks out the way you also have it here.
>
> I understand that you prefer the if condition phrased as follows:
>     if ( pg[i].count_info != (PGC_state_free | PGC_reserved) )
> Agreed that PGC_reserved shall have the same position as PGC_state_free.
> Hmmm, as for why I don't fold the ASSERT(), do you mean
> ASSERT(pg[i].count_info == (PGC_state_free | PGC_reserved))?

No. By converting to the suggested construct the ASSERT() disappears by
way of folding _into_ the if().

>> As to the printk() - the extra verbosity compared to the original
>> isn't helpful or necessary imo. The message needs to be
>> distinguishable from the other one, yes, so it would better mention
>> "static" in some way.
>> But the prefix you have is too long for my taste (and lacks a
>> separating blank anyway).
>
> If you don't like the extra verbosity, maybe just
> "Static pg[%lu] MFN %"PRI_mfn" c=%#lx t=%#x.\n"?

Something along these lines, yes, but I wonder how difficult it is to
take the original message and insert "static" at a suitable place. Any
part you omit would again want justifying. Personally I'd go with

    "pg[%u] static MFN %"PRI_mfn" c=%#lx o=%u v=%#lx t=%#x\n"

unless any of the parts are provably pointless to log for static pages.

>>> @@ -2434,6 +2512,57 @@ struct page_info *alloc_domheap_pages(
>>>      return pg;
>>>  }
>>>
>>> +#ifdef CONFIG_STATIC_ALLOCATION
>>> +/*
>>> + * Allocate nr_mfns contiguous pages, starting at #smfn, of static memory,
>>> + * then assign them to one specific domain #d.
>>> + * It is the equivalent of alloc_domheap_pages for static memory.
>>> + */
>>> +struct page_info *alloc_domstatic_pages(
>>> +    struct domain *d, unsigned long nr_mfns, mfn_t smfn,
>>> +    unsigned int memflags)
>>> +{
>>> +    struct page_info *pg = NULL;
>>> +    unsigned long dma_size;
>>> +
>>> +    ASSERT(!in_irq());
>>> +
>>> +    if ( !dma_bitsize )
>>> +        memflags &= ~MEMF_no_dma;
>>> +    else
>>> +    {
>>> +        if ( (dma_bitsize - PAGE_SHIFT) > 0 )
>>> +        {
>>> +            dma_size = 1ul << (dma_bitsize - PAGE_SHIFT);
>>> +            /* Starting address shall meet the DMA limitation. */
>>> +            if ( mfn_x(smfn) < dma_size )
>>> +                return NULL;
>>
>> I think I did ask this on v1 already: Why the first page? Static
>> memory regions, unlike buddy allocator zones, can cross power-of-2
>> address boundaries. Hence it ought to be the last page that gets
>> checked for fitting address width restriction requirements.
>>
>> And then - is this necessary at all? Shouldn't "pre-defined by
>> configuration using physical address ranges" imply the memory
>> designated for a guest fits its DMA needs?
>
> Hmmm, in my understanding, this is the DMA restriction when using the
> buddy allocator:
>     else if ( (dma_zone = bits_to_zone(dma_bitsize)) < zone_hi )
>         pg = alloc_heap_pages(dma_zone + 1, zone_hi, order, memflags, d);
> dma_zone is restricting the starting buddy allocator zone, so I am
> thinking that here, it shall restrict the first page.
>
> imo, if we leave it to the user to define the ranges, the DMA
> restriction could also end up being missed?

Did you read my earlier reply? Again: The difference is that ordinary
allocations (buddies) can't cross zone boundaries. Hence it is
irrelevant if you check DMA properties on the first or last page - both
will have the same number of significant bits. The same is - afaict -
not true for static allocation ranges.

Of course, as expressed before, a question is whether DMA suitability
needs checking in the first place for static allocations: I'd view it
as mis-configuration if a domain was provided memory it can't really
use properly.

Jan