
Re: [PATCH v2 14/17] xen/riscv: implement p2m_next_level()


  • To: Oleksii Kurochko <oleksii.kurochko@xxxxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Wed, 16 Jul 2025 13:43:44 +0200
  • Cc: Alistair Francis <alistair.francis@xxxxxxx>, Bob Eshleman <bobbyeshleman@xxxxxxxxx>, Connor Davis <connojdavis@xxxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Anthony PERARD <anthony.perard@xxxxxxxxxx>, Michal Orzel <michal.orzel@xxxxxxx>, Julien Grall <julien@xxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxx
  • Delivery-date: Wed, 16 Jul 2025 11:44:03 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 16.07.2025 13:32, Oleksii Kurochko wrote:
> On 7/2/25 10:35 AM, Jan Beulich wrote:
>> On 10.06.2025 15:05, Oleksii Kurochko wrote:
>>> --- a/xen/arch/riscv/p2m.c
>>> +++ b/xen/arch/riscv/p2m.c
>>> @@ -387,6 +387,17 @@ static inline bool p2me_is_valid(struct p2m_domain *p2m, pte_t pte)
>>>       return p2m_type_radix_get(p2m, pte) != p2m_invalid;
>>>   }
>>>   
>>> +/*
>>> + * pte_is_* helpers check the valid bit set in the PTE, but we have to
>>> + * check p2m_type instead (look at the comment above p2me_is_valid()).
>>> + * Provide our own overlay to check the valid bit.
>>> + */
>>> +static inline bool p2me_is_mapping(struct p2m_domain *p2m, pte_t pte)
>>> +{
>>> +    return p2me_is_valid(p2m, pte) && (pte.pte & PTE_ACCESS_MASK);
>>> +}
>> Same question as on the earlier patch - does P2M type apply to intermediate
>> page tables at all? (Conceptually it shouldn't.)
> 
> It doesn't matter whether it is an intermediate page table or a leaf PTE
> pointing to a page — the PTE should be valid. Considering that in the
> current implementation it’s possible to have PTE.v = 0 but P2M.v = 1, it is
> better to check P2M.v instead of PTE.v.
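
To illustrate what is being described here: in the quoted patch, "P2M valid"
is derived from the type stored for the entry, while "PTE valid" is the
architectural V bit, so the two can in principle diverge (e.g. if Xen were to
clear V to intercept accesses while leaving the stored type intact). A
minimal sketch, reusing the helper from the quoted hunk and assuming a
PTE_VALID macro for the V bit:

    /* "P2M valid": derived from the stored type (from the quoted hunk). */
    static inline bool p2me_is_valid(struct p2m_domain *p2m, pte_t pte)
    {
        return p2m_type_radix_get(p2m, pte) != p2m_invalid;
    }

    /*
     * "PTE valid": the architectural V bit; PTE_VALID is an assumed name.
     * Nothing forces the two predicates to agree.
     */
    static inline bool pte_is_valid(pte_t pte)
    {
        return pte.pte & PTE_VALID;
    }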

I'm confused by this reply. If you want to name 2nd level page table entries
P2M - fine (but unhelpful). But then for any memory access there's only one
of the two involved: A PTE (Xen accesses) or a P2M (guest accesses). Hence
how can there be "PTE.v = 0 but P2M.v = 1"?

An intermediate page table entry is something Xen controls entirely. Hence
it has no (guest induced) type.
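
Put differently, a check for an intermediate entry can be expressed purely in
architectural terms, without consulting the type at all. A minimal sketch,
assuming the pte_is_valid() helper and PTE_ACCESS_MASK from the patch:

    /*
     * Sketch: a valid entry with R=W=X=0 is, architecturally, a pointer
     * to the next level of the page table -- no P2M type involved.
     */
    static inline bool pte_is_table(pte_t pte)
    {
        return pte_is_valid(pte) && !(pte.pte & PTE_ACCESS_MASK);
    }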

>>> @@ -492,6 +503,70 @@ static pte_t p2m_entry_from_mfn(struct p2m_domain *p2m, mfn_t mfn, p2m_type_t t,
>>>       return e;
>>>   }
>>>   
>>> +/* Generate table entry with correct attributes. */
>>> +static pte_t page_to_p2m_table(struct p2m_domain *p2m, struct page_info *page)
>>> +{
>>> +    /*
>>> +     * Since this function generates a table entry, according to "Encoding
>>> +     * of PTE R/W/X fields," the entry's r, w, and x fields must be set to 0
>>> +     * to point to the next level of the page table.
>>> +     * Therefore, to ensure that an entry is a page table entry,
>>> +     * `p2m_access_n2rwx` is passed to `mfn_to_p2m_entry()` as the access value,
>>> +     * which overrides whatever was passed as `p2m_type_t` and guarantees that
>>> +     * the entry is a page table entry by setting r = w = x = 0.
>>> +     */
>>> +    return p2m_entry_from_mfn(p2m, page_to_mfn(page), p2m_ram_rw, p2m_access_n2rwx);
>> Similarly P2M access shouldn't apply to intermediate page tables. (Moot
>> with that, but (ab)using p2m_access_n2rwx would also look wrong: You did
>> read what it means, didn't you?)
> 
> p2m_access_n2rwx was chosen not really because of the description mentioned
> near its declaration, but because it sets r=w=x=0, which RISC-V expects for
> a PTE that points to the next-level page table.
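
For reference, the "Encoding of PTE R/W/X fields" table from the RISC-V
privileged specification that both sides refer to reads:

    X W R  Meaning
    0 0 0  Pointer to next level of page table
    0 0 1  Read-only page
    0 1 0  Reserved
    0 1 1  Read-write page
    1 0 0  Execute-only page
    1 0 1  Read-execute page
    1 1 0  Reserved
    1 1 1  Read-write-execute page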
> 
> Generally, I agree that P2M access shouldn't be applied to intermediate
> page tables.
> 
> What I can suggest in this case is to use p2m_access_rwx instead of
> p2m_access_n2rwx,

No. p2m_access_* shouldn't come into play here at all. Period. Just like P2M
types shouldn't. As per above - intermediate page tables are Xen internal
constructs.

> which will ensure that the P2M access type isn't applied when
> p2m_entry_from_mfn() is called, and then, after calling
> p2m_entry_from_mfn(), simply set PTE.r,w,x = 0.
> So this function will look like:
>      /* Generate table entry with correct attributes. */
>      static pte_t page_to_p2m_table(struct p2m_domain *p2m, struct page_info *page)
>      {
>          /*
>           * p2m_ram_rw is chosen for a table entry as the p2m table should
>           * be valid from both the P2M and the hardware point of view.
>           *
>           * p2m_access_rwx is chosen so that no access restrictions are
>           * applied to a table entry.
>           */
>          pte_t pte = p2m_pte_from_mfn(p2m, page_to_mfn(page), _gfn(0),
>                                       p2m_ram_rw, p2m_access_rwx);
> 
>          /*
>           * Since this function generates a table entry, according to
>           * "Encoding of PTE R/W/X fields," the entry's r, w, and x fields
>           * must be set to 0 to point to the next level of the page table.
>           */
>          pte.pte &= ~PTE_ACCESS_MASK;
> 
>          return pte;
>      }
> 
> Does this make sense? Or would it be better to keep the current version of
> page_to_p2m_table() and just improve the comment explaining why
> p2m_access_n2rwx is used for a table entry?

No to both, as per above.
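
A sketch of the direction being argued for: build the table entry from the
MFN alone, with neither a p2m type nor a p2m_access_* value involved.
PTE_VALID and PTE_PPN_SHIFT are assumed names for the V bit and the PPN
field position, not the patch's actual helpers:

    /*
     * Sketch: an intermediate table entry only needs V=1, R=W=X=0 and the
     * PPN of the next-level table.  R/W/X stay zero simply by never being
     * set, so no access value needs to be "overridden".
     */
    static pte_t page_to_p2m_table(struct page_info *page)
    {
        pte_t pte = {
            .pte = (mfn_x(page_to_mfn(page)) << PTE_PPN_SHIFT) | PTE_VALID,
        };

        return pte;
    }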

>>> +static struct page_info *p2m_alloc_page(struct domain *d)
>>> +{
>>> +    struct page_info *pg;
>>> +
>>> +    /*
>>> +     * For hardware domain, there should be no limit in the number of pages that
>>> +     * can be allocated, so that the kernel may take advantage of the extended
>>> +     * regions. Hence, allocate p2m pages for hardware domains from heap.
>>> +     */
>>> +    if ( is_hardware_domain(d) )
>>> +    {
>>> +        pg = alloc_domheap_page(d, MEMF_no_owner);
>>> +        if ( pg == NULL )
>>> +            printk(XENLOG_G_ERR "Failed to allocate P2M pages for hwdom.\n");
>>> +    }
>> The comment looks to have been taken verbatim from Arm. Whatever "extended
>> regions" are, does the same concept even exist on RISC-V?
> 
> Initially, I missed that it’s used only for Arm. Since it was mentioned in
> doc/misc/xen-command-line.pandoc, I assumed it applied to all architectures.
> But now I see that it’s Arm-specific: "### ext_regions (Arm)".
> 
>>
>> Also, special casing Dom0 like this has benefits, but also comes with a
>> pitfall: If the system's out of memory, allocations will fail. A pre-
>> populated pool would avoid that (until exhausted, of course). If special-
>> casing of Dom0 is needed, I wonder whether ...
>>
>>> +    else
>>> +    {
>>> +        spin_lock(&d->arch.paging.lock);
>>> +        pg = page_list_remove_head(&d->arch.paging.p2m_freelist);
>>> +        spin_unlock(&d->arch.paging.lock);
>>> +    }
>> ... going this path but with a Dom0-only fallback to general allocation
>> wouldn't be the better route.
> 
> IIUC, then it should be something like:
>    static struct page_info *p2m_alloc_page(struct domain *d)
>    {
>        struct page_info *pg;
>
>        spin_lock(&d->arch.paging.lock);
>        pg = page_list_remove_head(&d->arch.paging.p2m_freelist);
>        spin_unlock(&d->arch.paging.lock);
>
>        if ( !pg && is_hardware_domain(d) )
>        {
>            /* Need to allocate more memory from domheap */
>            pg = alloc_domheap_page(d, MEMF_no_owner);
>            if ( pg == NULL )
>            {
>                printk(XENLOG_ERR "Failed to allocate pages.\n");
>                return pg;
>            }
>            ACCESS_ONCE(d->arch.paging.total_pages)++;
>            page_list_add_tail(pg, &d->arch.paging.freelist);
>        }
>
>        return pg;
>    }
> 
> And basically use d->arch.paging.freelist for both dom0less and dom0
> domains, with the only difference being that in the case of Dom0,
> d->arch.paging.freelist could be extended.
> 
> Do I understand your idea correctly?

Broadly yes, but not in the details. For example, I don't think such a
page allocated from the general heap would want appending to freelist.
Commentary and the like would also want tidying.
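
Folding that feedback in, the fallback page would be handed straight back
rather than routed through the pool; a sketch under those assumptions:

    static struct page_info *p2m_alloc_page(struct domain *d)
    {
        struct page_info *pg;

        /* Common case: take a page from the pre-populated P2M pool. */
        spin_lock(&d->arch.paging.lock);
        pg = page_list_remove_head(&d->arch.paging.p2m_freelist);
        spin_unlock(&d->arch.paging.lock);

        /*
         * Dom0-only fallback: if the pool is exhausted, allocate directly
         * from the domheap.  The page is used directly and is never
         * appended to the pool's freelist.
         */
        if ( !pg && is_hardware_domain(d) )
            pg = alloc_domheap_page(d, MEMF_no_owner);

        return pg;
    }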

And of course going forward, for split hardware and control domains the
latter may want similar treatment.

Jan
