[PATCH 3/4] mm: Cleanup apply_to_pte_range() routine
Reverse 'create' vs 'mm == &init_mm' conditions and move page table
mask modification out of the atomic context.

Signed-off-by: Alexander Gordeev <agordeev@xxxxxxxxxxxxx>
---
 mm/memory.c | 28 +++++++++++++++++-----------
 1 file changed, 17 insertions(+), 11 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index fb7b8dc75167..00f253404db5 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2884,24 +2884,28 @@ static int apply_to_pte_range(struct mm_struct *mm, pmd_t *pmd,
 			      pte_fn_t fn, void *data, bool create,
 			      pgtbl_mod_mask *mask)
 {
+	int err = create ? -ENOMEM : -EINVAL;
 	pte_t *pte, *mapped_pte;
-	int err = 0;
 	spinlock_t *ptl;
 
-	if (create) {
-		mapped_pte = pte = (mm == &init_mm) ?
-			pte_alloc_kernel_track(pmd, addr, mask) :
-			pte_alloc_map_lock(mm, pmd, addr, &ptl);
+	if (mm == &init_mm) {
+		if (create)
+			pte = pte_alloc_kernel_track(pmd, addr, mask);
+		else
+			pte = pte_offset_kernel(pmd, addr);
 		if (!pte)
-			return -ENOMEM;
+			return err;
 	} else {
-		mapped_pte = pte = (mm == &init_mm) ?
-			pte_offset_kernel(pmd, addr) :
-			pte_offset_map_lock(mm, pmd, addr, &ptl);
+		if (create)
+			pte = pte_alloc_map_lock(mm, pmd, addr, &ptl);
+		else
+			pte = pte_offset_map_lock(mm, pmd, addr, &ptl);
 		if (!pte)
-			return -EINVAL;
+			return err;
+		mapped_pte = pte;
 	}
+	err = 0;
 
 	arch_enter_lazy_mmu_mode();
 
 	if (fn) {
@@ -2913,12 +2917,14 @@ static int apply_to_pte_range(struct mm_struct *mm, pmd_t *pmd,
 			}
 		} while (addr += PAGE_SIZE, addr != end);
 	}
-	*mask |= PGTBL_PTE_MODIFIED;
 
 	arch_leave_lazy_mmu_mode();
 
 	if (mm != &init_mm)
 		pte_unmap_unlock(mapped_pte, ptl);
+
+	*mask |= PGTBL_PTE_MODIFIED;
+
 	return err;
 }

-- 
2.45.2
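
For context: apply_to_pte_range() is the PTE-level step of the
apply_to_page_range() / apply_to_existing_page_range() walkers, which invoke a
pte_fn_t callback for every PTE in a range. Below is a minimal usage sketch of
that calling pattern; count_present_pte() and count_present_ptes() are made-up
names for illustration, while apply_to_existing_page_range(), pte_fn_t,
ptep_get() and pte_present() are existing kernel interfaces.

#include <linux/mm.h>
#include <linux/pgtable.h>

/* Hypothetical pte_fn_t callback: count PTEs that are currently present. */
static int count_present_pte(pte_t *pte, unsigned long addr, void *data)
{
	unsigned long *count = data;

	if (pte_present(ptep_get(pte)))
		(*count)++;

	return 0;	/* a non-zero return stops the walk with that error */
}

/* Hypothetical wrapper: walk only already-populated page tables. */
static unsigned long count_present_ptes(struct mm_struct *mm,
					unsigned long start, unsigned long size)
{
	unsigned long count = 0;

	/*
	 * apply_to_existing_page_range() reaches apply_to_pte_range() with
	 * create == false, so missing page tables are skipped rather than
	 * allocated; apply_to_page_range() would pass create == true.
	 */
	if (apply_to_existing_page_range(mm, start, size,
					 count_present_pte, &count))
		return 0;

	return count;
}

The callback runs with the PTE page table mapped and, for a user mm, under the
PTE spinlock taken by pte_offset_map_lock()/pte_alloc_map_lock(), so it must
not sleep.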