Re: [PATCH v7 17/19] xen/riscv: add support of page lookup by GFN
- To: Jan Beulich <jbeulich@xxxxxxxx>
- From: Oleksii Kurochko <oleksii.kurochko@xxxxxxxxx>
- Date: Thu, 18 Dec 2025 18:25:47 +0100
- Cc: Alistair Francis <alistair.francis@xxxxxxx>, Bob Eshleman <bobbyeshleman@xxxxxxxxx>, Connor Davis <connojdavis@xxxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Anthony PERARD <anthony.perard@xxxxxxxxxx>, Michal Orzel <michal.orzel@xxxxxxx>, Julien Grall <julien@xxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxx
- Delivery-date: Thu, 18 Dec 2025 17:26:06 +0000
- List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
On 12/18/25 1:55 PM, Jan Beulich wrote:
On 16.12.2025 17:55, Oleksii Kurochko wrote:
--- a/xen/arch/riscv/p2m.c
+++ b/xen/arch/riscv/p2m.c
@@ -1057,3 +1057,187 @@ int map_regions_p2mt(struct domain *d,
return rc;
}
+
+/*
+ * p2m_get_entry() should always return the correct order value, even if an
+ * entry is not present (i.e. the GFN is outside the range):
+ * [p2m->lowest_mapped_gfn, p2m->max_mapped_gfn] (1)
+ *
+ * This ensures that callers of p2m_get_entry() can determine what range of
+ * address space would be altered by a corresponding p2m_set_entry().
+ * Also, it would help to avoid costly page walks for GFNs outside range (1).
+ *
+ * Therefore, this function returns true for GFNs outside range (1), and in
+ * that case the corresponding level is returned via the level_out argument.
+ * Otherwise, it returns false and p2m_get_entry() performs a page walk to
+ * find the proper entry.
+ */
+static bool check_outside_boundary(const struct p2m_domain *p2m, gfn_t gfn,
+ gfn_t boundary, bool is_lower,
+ unsigned int *level_out)
+{
+ unsigned int level = P2M_ROOT_LEVEL(p2m);
+ bool ret = false;
+
+ ASSERT(p2m);
+
+ if ( is_lower ? gfn_x(gfn) < gfn_x(boundary)
+ : gfn_x(gfn) > gfn_x(boundary) )
+ {
+ for ( ; level; level-- )
+ {
+ unsigned long mask = BIT(P2M_GFN_LEVEL_SHIFT(level), UL) - 1;
+ unsigned long masked_gfn;
+
+ if ( is_lower )
+ masked_gfn = gfn_x(gfn) | mask;
+ else
+ masked_gfn = gfn_x(gfn) & ~mask;
+
+ if ( is_lower ? masked_gfn < gfn_x(boundary)
+ : masked_gfn > gfn_x(boundary) )
+ break;
Having two is_lower conditionals here is imo unhelpful. Likely the compiler
would manage to fold them, but imo
if ( is_lower ? (gfn_x(gfn) | mask) < gfn_x(boundary)
: (gfn_x(gfn) & ~mask) > gfn_x(boundary) )
break;
would be more clear to the reader as well. I'm not going to insist, though.
Agreed, it would probably be better to drop masked_gfn and merge the is_lower
conditionals.
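Something along these lines (just an untested sketch of what I have in mind):

    for ( ; level; level-- )
    {
        unsigned long mask = BIT(P2M_GFN_LEVEL_SHIFT(level), UL) - 1;

        /*
         * Stop at the first level whose block containing |gfn| lies
         * entirely outside the boundary.
         */
        if ( is_lower ? (gfn_x(gfn) | mask) < gfn_x(boundary)
                      : (gfn_x(gfn) & ~mask) > gfn_x(boundary) )
            break;
    }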
+ }
+
+ ret = true;
+ }
+
+ if ( level_out )
+ *level_out = level;
+
+ return ret;
+}
+
+/*
+ * Get the details of a given gfn.
+ *
+ * If the entry is present, the associated MFN, the p2m type of the mapping,
+ * and the page order of the mapping in the page table (i.e., it could be a
+ * superpage) will be returned.
+ *
+ * If the entry is not present, INVALID_MFN will be returned, page_order will
+ * be set according to the order of the invalid range, and the type will be
+ * p2m_invalid.
+ */
+static mfn_t p2m_get_entry(struct p2m_domain *p2m, gfn_t gfn,
+ p2m_type_t *t,
+ unsigned int *page_order)
+{
+ unsigned int level = 0;
+ pte_t entry, *table;
+ int rc;
+ mfn_t mfn = INVALID_MFN;
+ P2M_BUILD_LEVEL_OFFSETS(p2m, offsets, gfn_to_gaddr(gfn));
+
+ ASSERT(p2m_is_locked(p2m));
+
+ *t = p2m_invalid;
+
+ if ( gfn_x(gfn) > (BIT(PADDR_BITS - PAGE_SHIFT + 1, UL) - 1) )
+ return mfn;
Since on all other error paths you set *page_order (as long as the pointer
is non-NULL), shouldn't you do so here as well (to the order corresponding
to the full [2nd-level] address space)?
That makes sense. I am wondering whether something like this would be fine:
@@ -1123,7 +1117,7 @@ static mfn_t p2m_get_entry(struct p2m_domain *p2m, gfn_t gfn,
p2m_type_t *t,
unsigned int *page_order)
{
- unsigned int level = 0;
+ unsigned int level = P2M_ROOT_LEVEL(p2m);
pte_t entry, *table;
int rc;
mfn_t mfn = INVALID_MFN;
@@ -1134,7 +1128,13 @@ static mfn_t p2m_get_entry(struct p2m_domain *p2m, gfn_t gfn,
*t = p2m_invalid;
if ( gfn_x(gfn) > (BIT(PADDR_BITS - PAGE_SHIFT + 1, UL) - 1) )
+ {
+ if ( page_order )
+ *page_order =
+ P2M_LEVEL_ORDER(level + 1) + P2M_ROOT_EXTRA_BITS(p2m, level);
+
return mfn;
+ }
if ( check_outside_boundary(p2m, gfn, p2m->lowest_mapped_gfn, true,
&level) )
@@ -1152,7 +1152,6 @@ static mfn_t p2m_get_entry(struct p2m_domain *p2m, gfn_t gfn,
if ( !table )
{
ASSERT_UNREACHABLE();
- level = P2M_ROOT_LEVEL(p2m);
goto out;
}
Or is it not the best option to define page_order based on a "non-existing" level?
Furthermore, is PADDR_BITS really the right basis? Don't things rather depend
on the number of levels the 2nd-level page tables have for the given guest?
I think you are right: it depends on the number of levels the 2nd-level page
tables have for the given guest.
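So probably something along these lines would be more correct (a rough, untested
sketch only; it assumes that P2M_LEVEL_ORDER(P2M_ROOT_LEVEL(p2m) + 1) together
with P2M_ROOT_EXTRA_BITS() really gives the order covered by the guest's
concatenated root tables, which I still need to double-check):

    unsigned int root_level = P2M_ROOT_LEVEL(p2m);
    /* Order (in 4K pages) covered by the guest's 2nd-level page tables. */
    unsigned int max_order = P2M_LEVEL_ORDER(root_level + 1) +
                             P2M_ROOT_EXTRA_BITS(p2m, root_level);

    if ( gfn_x(gfn) >= BIT(max_order, UL) )
    {
        if ( page_order )
            *page_order = max_order;

        return INVALID_MFN;
    }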
~ Oleksii