Re: [PATCH v2 13/17] xen/riscv: Implement p2m_entry_from_mfn() and support PBMT configuration


  • To: Oleksii Kurochko <oleksii.kurochko@xxxxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Thu, 17 Jul 2025 12:25:30 +0200
  • Cc: Alistair Francis <alistair.francis@xxxxxxx>, Bob Eshleman <bobbyeshleman@xxxxxxxxx>, Connor Davis <connojdavis@xxxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Anthony PERARD <anthony.perard@xxxxxxxxxx>, Michal Orzel <michal.orzel@xxxxxxx>, Julien Grall <julien@xxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxx
  • Delivery-date: Thu, 17 Jul 2025 10:25:51 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 17.07.2025 10:56, Oleksii Kurochko wrote:
> On 7/16/25 6:18 PM, Jan Beulich wrote:
>> On 16.07.2025 18:07, Oleksii Kurochko wrote:
>>> On 7/16/25 1:31 PM, Jan Beulich wrote:
>>>> On 15.07.2025 16:47, Oleksii Kurochko wrote:
>>>>> On 7/1/25 5:08 PM, Jan Beulich wrote:
>>>>>> On 10.06.2025 15:05, Oleksii Kurochko wrote:
>>>>>>> --- a/xen/arch/riscv/p2m.c
>>>>>>> +++ b/xen/arch/riscv/p2m.c
>>>>>>> @@ -345,6 +345,26 @@ static pte_t *p2m_get_root_pointer(struct p2m_domain *p2m, gfn_t gfn)
>>>>>>>         return __map_domain_page(p2m->root + root_table_indx);
>>>>>>>     }
>>>>>>>     
>>>>>>> +static int p2m_type_radix_set(struct p2m_domain *p2m, pte_t pte, p2m_type_t t)
>>>>>> See comments on the earlier patch regarding naming.
>>>>>>
>>>>>>> +{
>>>>>>> +    int rc;
>>>>>>> +    gfn_t gfn = mfn_to_gfn(p2m->domain, mfn_from_pte(pte));
>>>>>> How does this work, when you record GFNs only for Xenheap pages?
>>>
>>>>> I think I don't understand what is an issue. Could you please provide
>>>>> some extra details?
>>>> Counter question: The mfn_to_gfn() you currently have is only a stub. It
>>>> only works for 1:1 mapped domains. Can you show me the eventual final
>>>> implementation of the function, making it possible to use it here?
>>> At the moment, I planned to support only 1:1 mapped domains, so this is the
>>> final implementation.
>> Isn't that an overly severe limitation?
> 
> I wouldn't say that it's a severe limitation, as it's just a matter of how
> mfn_to_gfn() is implemented. When non-1:1 mapped domains are supported,
> mfn_to_gfn() can be implemented differently, while the code where it's called
> will likely remain unchanged.
> 
> What I meant in my reply is that, for the current state and current
> limitations, this is the final implementation of mfn_to_gfn(). But that
> doesn't mean I don't see the value in, or the need for, non-1:1 mapped
> domains; it's just that this limitation simplifies development at the current
> stage of the RISC-V port.

Simplification is fine in some cases, but not supporting the "normal" way of
domain construction looks like a pretty odd restriction. I'm also curious how
you envision implementing mfn_to_gfn() then, suitable for generic use like the
one here. Imo, current limitation or not, you simply want to avoid use of that
function outside of the special gnttab case.
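
To make the concern concrete: the only shape such a stub can take is roughly
the following (a sketch inferred from this thread, not the actual patch code;
is_domain_direct_mapped() is borrowed from Arm and assumed to have a RISC-V
counterpart):

/*
 * Sketch of the 1:1 stub under discussion: GFN == MFN by construction,
 * so this cannot work for domains built the "normal" way.
 */
static inline gfn_t mfn_to_gfn(const struct domain *d, mfn_t mfn)
{
    ASSERT(is_domain_direct_mapped(d));
    return _gfn(mfn_x(mfn));
}

A generic implementation would need a reverse MFN->GFN lookup (on x86 that's
the M2P table), which RISC-V doesn't maintain at present.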

>>>>>> In this context (not sure if I asked before): With this use of a radix
>>>>>> tree, how do you intend to bound the amount of memory that a domain can
>>>>>> use, by making Xen insert very many entries?
>>>>> I didn't think about that. I assumed it would be enough to set the amount
>>>>> of memory a guest domain can use by specifying xen,domain-p2m-mem-mb in
>>>>> the DTS, or using some predefined value if xen,domain-p2m-mem-mb isn't
>>>>> explicitly set.
>>>> Which would require these allocations to come from that pool.
>>> Yes, and it is true only for non-hardware domains with the current
>>> implementation.
>> ???
> 
> I meant that the pool is currently used only for non-hardware domains.

And how does this matter here? The memory required for the radix tree doesn't
come from that pool anyway.
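
For reference, the pattern at issue is essentially the one Arm uses for its
mem-access settings; a minimal sketch (the p2m_type field name is hypothetical
here) looks like:

/*
 * Sketch: record a type for a GFN in a per-domain radix tree. Any
 * intermediate node radix_tree_insert() needs is xmalloc()ed from the
 * general Xen heap, i.e. outside the domain's p2m pool.
 */
static int p2m_set_type(struct p2m_domain *p2m, gfn_t gfn, p2m_type_t t)
{
    return radix_tree_insert(&p2m->p2m_type, gfn_x(gfn),
                             radix_tree_int_to_ptr(t));
}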

>>>>> Also, it seems this would just lead to the issue you mentioned earlier:
>>>>> when the memory runs out, domain_crash() will be called or the PTE will
>>>>> be zapped.
>>>> Or one domain exhausting memory would cause another domain to fail. A
>>>> domain impacting just itself may be tolerable. But a domain affecting
>>>> other domains isn't.
>>> But it seems like this issue could happen in any implementation. It can
>>> only be avoided if every domain type (hardware, control, guest) has
>>> nothing but a pre-populated pool, with no ability to extend it or to
>>> allocate extra pages from the domheap at runtime. Otherwise, if extra page
>>> allocations are allowed, we can't really do anything about this issue.
>> But that's why I brought this up: You simply have to. Or, as indicated, the
>> moment you mark Xen security-supported on RISC-V, there will be an XSA 
>> needed.
> 
> Why isn't it an XSA for other architectures? At least Arm should then have
> such an XSA.

Does Arm use a radix tree for storing types? It uses one for mem-access, but
it's not clear to me whether that's actually a supported feature.

> I don't understand why x86 wouldn't have the same issue. Memory is a limited
> and shared resource, so if one of the domains uses too much memory, then it
> could happen that other domains won't have enough memory for their
> purposes...

The question is whether allocations are bounded. With this use of a radix tree,
you give domains a way to have Xen allocate pretty much arbitrary amounts of
memory to populate that tree. That unbounded-ness is the problem, not memory
allocations in general.
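
Purely as an illustration of what "bounded" would mean (all field names here
are hypothetical, not a proposal for the final design): the insert path would
have to account entries against a per-domain budget and fail gracefully rather
than allocate further, e.g.:

/*
 * Hypothetical sketch: refuse to grow the tree beyond a per-domain
 * budget rather than letting the guest drive unbounded allocations.
 */
static int p2m_set_type_bounded(struct p2m_domain *p2m, gfn_t gfn,
                                p2m_type_t t)
{
    int rc;

    if ( p2m->type_entries >= p2m->max_type_entries )
        return -ENOMEM; /* for the caller to handle, not domain_crash() */

    rc = radix_tree_insert(&p2m->p2m_type, gfn_x(gfn),
                           radix_tree_int_to_ptr(t));
    if ( !rc )
        p2m->type_entries++;

    return rc;
}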

Jan