Re: [PATCH 1/3] mm: introduce xvmalloc() et al and use for grant table allocations
On 18.11.2020 20:10, Julien Grall wrote:
> On 06/11/2020 07:11, Jan Beulich wrote:
>> All of the array allocations in grant_table_init() can exceed a page's
>> worth of memory, which xmalloc()-based interfaces aren't really suitable
>> for after boot.
>
> I can see a few reasons why they are not suitable:
> - xmalloc() will try to use an order and then throw away memory. This is
>   pretty inefficient.

But addressing this inefficiency, while a nice side effect, is not the
goal here.

> - xmalloc() will allocate physically contiguous memory

This aspect matters here only indirectly: What we care about avoiding
are runtime allocations of non-zero order. The description is worded on
the assumption that during boot non-zero-order allocations can typically
be fulfilled and hence aren't a (general) problem.

> It would be good to clarify which one you refer to, because none of
> them is really a problem only after boot...

Given the above, I'm not sure in which way you would see this
clarified. Any suggestions welcome.

> One thing to be aware of, though, is that xv*() are going to be less
> efficient because they involve touching the page tables (at least until
> the work to map the direct map on demand is merged). In addition, on
> Arm, they will also use only 4K mappings (I have a TODO to fix that).
>
> So I think we will need to be careful when to use xmalloc() vs
> xvalloc(). It might be worth outlining that in the documentation of
> xv*().

The rule is quite simple, and the inefficiencies you mention shouldn't
matter imo: Post-boot there should not be any implicit allocations of
non-zero order. "Implicit" here means still permitting e.g. direct
alloc_domheap_pages() invocations, which make apparent at the call site
that the aspect of memory fragmentation was (hopefully) taken into
consideration. I'm actually inclined to suggest (down the road) having
_xmalloc() no longer fall back to multi-page allocations post-boot, but
instead return NULL.
If you think it is really needed, I can add something like "These
should be used in preference to xmalloc() et al whenever the size is
not known to be constrained to at most a single page" to the comment at
the top of the new header file.

Where the inefficiencies you mention would imo matter is in (future)
decisions whether to use vmalloc() et al vs xvmalloc() et al: If the
size _may_ be no more than a page, the latter may want preferring.

>> --- a/xen/common/vmap.c
>> +++ b/xen/common/vmap.c
>> @@ -7,6 +7,7 @@
>>  #include <xen/spinlock.h>
>>  #include <xen/types.h>
>>  #include <xen/vmap.h>
>> +#include <xen/xvmalloc.h>
>>  #include <asm/page.h>
>>
>>  static DEFINE_SPINLOCK(vm_lock);
>> @@ -299,11 +300,29 @@ void *vzalloc(size_t size)
>>      return p;
>>  }
>>
>> -void vfree(void *va)
>> +static void _vfree(const void *va, unsigned int pages, enum vmap_region type)
>
> I don't think "unsigned int" is sufficient to cover big sizes. AFAICT,
> this is not a new problem in this code and seems to be a latent issue
> so far.
>
> However, I feel that it is wrong to introduce a new set of allocation
> helpers that contain a flaw fixed in xm*alloc() recently (see commit
> cf38b4926e2b "xmalloc: guard against integer overflow").

For _xmalloc() we're talking about bytes (and the guarding you refer
to is actually orthogonal to the limiting done by the page allocator,
as follows from the description of that change). Here we're talking
about pages. I hope it will be decades until we have to consider
allocating 16Tb all in one chunk (and we'd need to have a large enough
vmap virtual address space set aside first). Also note that
- the entire vmap.c right now uses unsigned int for page counts, so it
  would be outright inconsistent to use unsigned long here,
- at least on x86, passing around 64-bit function arguments is slightly
  less efficient than 32-bit ones,
and hence I'd prefer to keep it like this.
>> --- /dev/null
>> +++ b/xen/include/xen/xvmalloc.h
>> @@ -0,0 +1,70 @@
>> +
>> +#ifndef __XVMALLOC_H__
>> +#define __XVMALLOC_H__
>> +
>> +#include <xen/cache.h>
>> +#include <xen/types.h>
>> +
>> +/*
>> + * Xen malloc/free-style interface.
>
> It would be useful to emphasise that they should only be used if the
> caller does *not* need physically contiguous memory.

Actually, first of all, I shouldn't have copied the comment without
editing. I've now made it

/*
 * Xen malloc/free-style interface preferable for allocations possibly
 * exceeding a page's worth of memory, as long as there's no need to have
 * physically contiguous memory allocated.
 */

albeit I'm not sure the "physically discontiguous" aspect really needs
pointing out, considering the 'v' in the function names.

Jan