
Re: [PATCH v2 13/17] xen/riscv: Implement p2m_entry_from_mfn() and support PBMT configuration


  • To: Oleksii Kurochko <oleksii.kurochko@xxxxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Mon, 28 Jul 2025 13:49:25 +0200
  • Cc: xen-devel@xxxxxxxxxxxxxxxxxxxx
  • Delivery-date: Mon, 28 Jul 2025 11:49:41 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 28.07.2025 13:37, Oleksii Kurochko wrote:
> 
> On 7/28/25 11:09 AM, Jan Beulich wrote:
>> On 28.07.2025 10:52, Oleksii Kurochko wrote:
>>> On 7/23/25 11:46 AM, Jan Beulich wrote:
>>>>> I assume that in this case I have to take some pages for an intermediate
>>>>> page table from the P2M pool's freelist and set the owner domain to NULL
>>>>> (pg->inuse.domain = NULL).
>>>>>
>>>>> Then it isn't clear why pg->list can't be re-used to link several pages
>>>>> for intermediate page table purposes + metadata. Is it because pg->list
>>>>> might not be empty? In that case it isn't clear whether I could use a
>>>>> page which already has pages threaded on it.
>>>> Actually looks like I was mis-remembering. Pages removed from freelist
>>>> indeed aren't put on any other list, so the linking fields are available
>>>> for use. I guess I had x86 shadow code in mind, where the linking fields
>>>> are further used.
>>> Perhaps I misunderstood you about "linking fields", but it seems like I
>>> can't reuse struct page_info->list, as it is used by page_list_add(), which
>>> is called by p2m_alloc_page() to allocate page(s) for an intermediate page
>>> table:
>>>     static inline void
>>>     page_list_add(struct page_info *page, struct page_list_head *head)
>>>     {
>>>         list_add(&page->list, head);
>>>     }
>>>
>>>     struct page_info *paging_alloc_page(struct domain *d)
>>>     {
>>>         struct page_info *pg;
>>>
>>>         spin_lock(&d->arch.paging.lock);
>>>         pg = page_list_remove_head(&d->arch.paging.freelist);
>>>         spin_unlock(&d->arch.paging.lock);
>>>
>>>         /* The freelist may be empty, so guard before touching pg->list. */
>>>         if ( pg )
>>>             INIT_LIST_HEAD(&pg->list);
>>>
>>>         return pg;
>>>     }
>>>
>>>     static struct page_info *p2m_alloc_page(struct domain *d)
>>>     {
>>>         struct page_info *pg = paging_alloc_page(d);
>>>
>>>         if ( pg )
>>>             page_list_add(pg, &p2m_get_hostp2m(d)->pages);
>>>
>>>         return pg;
>>>     }
>>>
>>> So I have to reuse another field from struct page_info. It seems like it
>>> wouldn't be an issue to add a new struct page_list_entry metadata_list to
>>> 'union v':
>>>       union {
>>>           /* Page is in use */
>>>           struct {
>>>               /* Owner of this page (NULL if page is anonymous). */
>>>               struct domain *domain;
>>>           } inuse;
>>>
>>>           /* Page is on a free list. */
>>>           struct {
>>>               /* Order-size of the free chunk this page is the head of. */
>>>               unsigned int order;
>>>           } free;
>>> +
>>> +       struct page_list_entry metadata_list;
>>>       } v;
>>>
>>> Am I missing something?
>> Well, you're doubling the size of that union then, aren't you? As was
>> mentioned quite some time ago, struct page_info needs quite a bit of care
>> when you mean to add new fields there. Question is whether for the purpose
>> here you actually need a doubly-linked list. A single pointer would be fine
>> to put there.
> 
> Agree, a single pointer will be more than enough.
> 
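> A minimal sketch of what that could look like (the 'md' member and its field
> name are hypothetical, purely for illustration; being a single pointer it
> keeps the union at its current size):
>
>     union {
>         /* Page is in use */
>         struct {
>             /* Owner of this page (NULL if page is anonymous). */
>             struct domain *domain;
>         } inuse;
>
>         /* Page is re-purposed as P2M metadata storage (no owner). */
>         struct {
>             /* Next page in a singly-linked chain of metadata pages. */
>             struct page_info *next;
>         } md;
>
>         /* Page is on a free list. */
>         struct {
>             /* Order-size of the free chunk this page is the head of. */
>             unsigned int order;
>         } free;
>     } v;
>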
> I'm wondering whether it is possible to do something about the case where
> someone tries to use:
>    #define page_get_owner(p)    (p)->v.inuse.domain
> for a page which was allocated for metadata storage. Shouldn't I have a
> separate list for such pages and a macro which checks whether a page is on
> this list?
> Similar to the list we have for p2m pages in struct p2m_domain:
>      ...
>      /* Pages used to construct the p2m */
>      struct page_list_head pages;
>      ...
> 
> Of course, such pages are allocated by alloc_domheap_page(d, MEMF_no_owner),
> so there is no owner. But if someone accidentally uses this macro for such
> pages, it will be an issue, as ->domain likely won't be NULL anymore.

It's the nature of using unions that such a risk exists. Take a look at x86's
structure, where several of the fields are re-purposed for shadow pages. It's
something similar you'd do here, in the end.
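
As a purely illustrative sketch (it assumes the hypothetical v.md member from
the snippet above and a page_is_p2m_metadata() predicate which doesn't exist
in the tree), the page_get_owner() concern could be caught with an assertion
along these lines:

    /*
     * Sketch only: guard the owner lookup for pages re-purposed as P2M
     * metadata.  The ASSERT() merely documents that page_get_owner() must
     * not be applied to such pages.
     */
    static inline struct domain *page_get_owner_checked(struct page_info *pg)
    {
        ASSERT(!page_is_p2m_metadata(pg));
        return page_get_owner(pg);
    }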

Jan
