Re: [PATCH v4 1/3] xen/riscv: introduce setup_mm()
On 08.11.2024 13:51, Oleksii Kurochko wrote:
> @@ -37,9 +42,9 @@ static inline void *maddr_to_virt(paddr_t ma)
>   */
>  static inline unsigned long virt_to_maddr(unsigned long va)
>  {
> -    if ((va >= DIRECTMAP_VIRT_START) &&
> +    if ((va >= directmap_virt_start) &&

Is this a valid / necessary change to make? Right now there looks to be
nothing immediately below the directmap, yet that would need guaranteeing
(e.g. by some BUILD_BUG_ON() or whatever else) if code builds upon that.

>          (va < (DIRECTMAP_VIRT_START + DIRECTMAP_SIZE)))
> -        return directmapoff_to_maddr(va - DIRECTMAP_VIRT_START);
> +        return directmapoff_to_maddr(va - directmap_virt_start);

FTAOD - no question about this part of the change.

> @@ -423,3 +429,140 @@ void * __init early_fdt_map(paddr_t fdt_paddr)
>  
>      return fdt_virt;
>  }
> +
> +vaddr_t __ro_after_init directmap_virt_start = DIRECTMAP_VIRT_START;
> +
> +struct page_info *__ro_after_init frametable_virt_start;

As for directmap_virt_start - perhaps better with initializer?

> +#ifndef CONFIG_RISCV_32
> +
> +/* Map a frame table to cover physical addresses ps through pe */
> +static void __init setup_frametable_mappings(paddr_t ps, paddr_t pe)
> +{
> +    paddr_t aligned_ps = ROUNDUP(ps, PAGE_SIZE);
> +    paddr_t aligned_pe = ROUNDDOWN(pe, PAGE_SIZE);
> +    unsigned long nr_mfns = PFN_DOWN(aligned_pe - aligned_ps);
> +    unsigned long frametable_size = nr_mfns * sizeof(*frame_table);
> +    mfn_t base_mfn;
> +
> +    if ( !frametable_virt_start )
> +        frametable_virt_start = frame_table - paddr_to_pfn(aligned_ps);

If you make this conditional, then you need an "else" (or something that's
effectively one) just like you have in setup_directmap_mappings(). Like for
the earlier assumption on ps being zero: Assumptions you make on how a
function is used want to at least be self-consistent. I.e. here either you
assume the function may be called more than once, or you don't.

> +static void __init setup_directmap_mappings(unsigned long base_mfn,
> +                                            unsigned long nr_mfns)
> +{
> +    static mfn_t __initdata directmap_mfn_start = INVALID_MFN_INITIALIZER;
> +
> +    unsigned long base_addr = mfn_to_maddr(_mfn(base_mfn));

Seeing this and ...

> +    unsigned long high_bits_mask = XEN_PT_LEVEL_MAP_MASK(HYP_PT_ROOT_LEVEL);
> +
> +    /* First call sets the directmap physical and virtual offset. */
> +    if ( mfn_eq(directmap_mfn_start, INVALID_MFN) )
> +    {
> +        directmap_mfn_start = _mfn(base_mfn);

... this (and more further down) - perhaps better to have the function take
mfn_t right away?

> +        /*
> +         * The base address may not be aligned to the second level
> +         * size in case of Sv39 (e.g. 1GB when using 4KB pages).
> +         * This would prevent superpage mappings for all the regions
> +         * because the virtual address and machine address should
> +         * both be suitably aligned.
> +         *
> +         * Prevent that by offsetting the start of the directmap virtual
> +         * address.
> +         */
> +        directmap_virt_start -=
> +            (base_addr & high_bits_mask) + (base_addr & ~high_bits_mask);

Isn't this the same as

        directmap_virt_start -= base_addr;

i.e. no different from what you had a few revisions back? I continue to
think that only the low bits matter for the offsetting.
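To spell the equivalence out (purely illustrative, not part of the patch):
the two masks select disjoint bits, so the addition cannot carry and the
sum is simply base_addr again:

    (base_addr & high_bits_mask) + (base_addr & ~high_bits_mask)
        == (base_addr & high_bits_mask) | (base_addr & ~high_bits_mask)
        == base_addr

If only the offset within a root-level superpage is meant to be compensated
for, the subtraction would presumably use the low bits alone, along the
lines of:

    directmap_virt_start -= base_addr & ~high_bits_mask;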
> +    }
> +
> +    if ( base_mfn < mfn_x(directmap_mfn_start) )
> +        panic("can't add directmap mapping at %#lx below directmap start %#lx\n",
> +              base_mfn, mfn_x(directmap_mfn_start));
> +
> +    if ( map_pages_to_xen((vaddr_t)mfn_to_virt(base_mfn),
> +                          _mfn(base_mfn), nr_mfns,
> +                          PAGE_HYPERVISOR_RW) )
> +        panic("Directmap mappings for [%#"PRIpaddr", %#"PRIpaddr") failed\n",
> +              mfn_to_maddr(_mfn(base_mfn)),
> +              mfn_to_maddr(_mfn(base_mfn + nr_mfns)));

Maybe worth also logging the error code?

> +void __init setup_mm(void)
> +{
> +    const struct membanks *banks = bootinfo_get_mem();
> +    paddr_t ram_start = INVALID_PADDR;
> +    paddr_t ram_end = 0;
> +    paddr_t ram_size = 0;
> +    unsigned int i;
> +
> +    /*
> +     * We need some memory to allocate the page-tables used for the directmap
> +     * mappings. But some regions may contain memory already allocated
> +     * for other uses (e.g. modules, reserved-memory...).
> +     *
> +     * For simplicity, add all the free regions in the boot allocator.
> +     */
> +    populate_boot_allocator();
> +
> +    for ( i = 0; i < banks->nr_banks; i++ )
> +    {
> +        const struct membank *bank = &banks->bank[i];
> +        paddr_t bank_start = ROUNDUP(bank->start, PAGE_SIZE);
> +        paddr_t bank_end = ROUNDDOWN(bank->start + bank->size, PAGE_SIZE);
> +        unsigned long bank_size = bank_end - bank_start;
> +
> +        ram_size += bank_size;

As before - you maintain ram_size here, ...

> +        ram_start = min(ram_start, bank_start);
> +        ram_end = max(ram_end, bank_end);
> +
> +        setup_directmap_mappings(PFN_DOWN(bank_start), PFN_DOWN(bank_size));
> +    }
> +
> +    setup_frametable_mappings(ram_start, ram_end);
> +    max_page = PFN_DOWN(ram_end);
> +}

... without ever using the value. Why?

Jan
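A minimal sketch of the error-code suggestion above (illustrative only, not
part of the patch, assuming map_pages_to_xen() returns 0 on success and a
negative error value on failure):

    /* Capture the return value so the panic message can include it. */
    int rc = map_pages_to_xen((vaddr_t)mfn_to_virt(base_mfn),
                              _mfn(base_mfn), nr_mfns,
                              PAGE_HYPERVISOR_RW);

    if ( rc )
        panic("Directmap mappings for [%#"PRIpaddr", %#"PRIpaddr") failed: %d\n",
              mfn_to_maddr(_mfn(base_mfn)),
              mfn_to_maddr(_mfn(base_mfn + nr_mfns)), rc);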