Re: [PATCH v7 17/19] xen/riscv: add support of page lookup by GFN
On 16.12.2025 17:55, Oleksii Kurochko wrote:
> --- a/xen/arch/riscv/p2m.c
> +++ b/xen/arch/riscv/p2m.c
> @@ -1057,3 +1057,187 @@ int map_regions_p2mt(struct domain *d,
>
> return rc;
> }
> +
> +/*
> + * p2m_get_entry() should always return the correct order value, even if an
> + * entry is not present (i.e. the GFN is outside the range):
> + * [p2m->lowest_mapped_gfn, p2m->max_mapped_gfn] (1)
> + *
> + * This ensures that callers of p2m_get_entry() can determine what range of
> + * address space would be altered by a corresponding p2m_set_entry().
> + * Also, it would help to avoid costly page walks for GFNs outside range (1).
> + *
> + * Therefore, this function returns true for GFNs outside range (1), and in
> + * that case the corresponding level is returned via the level_out argument.
> + * Otherwise, it returns false and p2m_get_entry() performs a page walk to
> + * find the proper entry.
> + */
> +static bool check_outside_boundary(const struct p2m_domain *p2m, gfn_t gfn,
> + gfn_t boundary, bool is_lower,
> + unsigned int *level_out)
> +{
> + unsigned int level = P2M_ROOT_LEVEL(p2m);
> + bool ret = false;
> +
> + ASSERT(p2m);
> +
> + if ( is_lower ? gfn_x(gfn) < gfn_x(boundary)
> + : gfn_x(gfn) > gfn_x(boundary) )
> + {
> + for ( ; level; level-- )
> + {
> + unsigned long mask = BIT(P2M_GFN_LEVEL_SHIFT(level), UL) - 1;
> + unsigned long masked_gfn;
> +
> + if ( is_lower )
> + masked_gfn = gfn_x(gfn) | mask;
> + else
> + masked_gfn = gfn_x(gfn) & ~mask;
> +
> + if ( is_lower ? masked_gfn < gfn_x(boundary)
> + : masked_gfn > gfn_x(boundary) )
> + break;
Having two is_lower conditionals here is imo unhelpful. The compiler would
likely manage to fold them, but

    if ( is_lower ? (gfn_x(gfn) | mask) < gfn_x(boundary)
                  : (gfn_x(gfn) & ~mask) > gfn_x(boundary) )
        break;

would also be clearer to the reader. I'm not going to insist, though.
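For completeness, with that folding applied the whole loop would then read
(behavior unchanged, merely condensed):

    for ( ; level; level-- )
    {
        unsigned long mask = BIT(P2M_GFN_LEVEL_SHIFT(level), UL) - 1;

        if ( is_lower ? (gfn_x(gfn) | mask) < gfn_x(boundary)
                      : (gfn_x(gfn) & ~mask) > gfn_x(boundary) )
            break;
    }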
> + }
> +
> + ret = true;
> + }
> +
> + if ( level_out )
> + *level_out = level;
> +
> + return ret;
> +}
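Just to spell out the intended use (the call sites aren't part of this
hunk): presumably p2m_get_entry() invokes this along the lines of

    if ( check_outside_boundary(p2m, gfn, p2m->lowest_mapped_gfn, true,
                                &level) ||
         check_outside_boundary(p2m, gfn, p2m->max_mapped_gfn, false,
                                &level) )
        goto out; /* GFN outside range (1); level gives the extent */

with "out" then returning INVALID_MFN and a page order derived from level?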
> +
> +/*
> + * Get the details of a given gfn.
> + *
> + * If the entry is present, the associated MFN, the p2m type of the mapping,
> + * and the page order of the mapping in the page table (i.e., it could be a
> + * superpage) will be returned.
> + *
> + * If the entry is not present, INVALID_MFN will be returned, page_order will
> + * be set according to the order of the invalid range, and the type will be
> + * p2m_invalid.
> + */
> +static mfn_t p2m_get_entry(struct p2m_domain *p2m, gfn_t gfn,
> + p2m_type_t *t,
> + unsigned int *page_order)
> +{
> + unsigned int level = 0;
> + pte_t entry, *table;
> + int rc;
> + mfn_t mfn = INVALID_MFN;
> + P2M_BUILD_LEVEL_OFFSETS(p2m, offsets, gfn_to_gaddr(gfn));
> +
> + ASSERT(p2m_is_locked(p2m));
> +
> + *t = p2m_invalid;
> +
> + if ( gfn_x(gfn) > (BIT(PADDR_BITS - PAGE_SHIFT + 1, UL) - 1) )
> + return mfn;
Since on all other error paths you set *page_order (as long as the pointer
is non-NULL), shouldn't you do so here as well (to the order corresponding
to the full [2nd-level] address space)?
Furthermore, is PADDR_BITS really the right basis? Don't things rather depend
on the number of levels the 2nd-level page tables have for the given guest?
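For illustration only (p2m_max_gfn_order() is a hypothetical helper naming
the order of the full space the guest's 2nd-level tables can map, not
existing code), the early exit might then become:

    if ( gfn_x(gfn) > (BIT(p2m_max_gfn_order(p2m), UL) - 1) )
    {
        if ( page_order )
            *page_order = p2m_max_gfn_order(p2m);

        return mfn;
    }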
Jan