Re: [PATCH v2 13/17] xen/riscv: Implement p2m_entry_from_mfn() and support PBMT configuration
On 16.07.2025 18:07, Oleksii Kurochko wrote:
> On 7/16/25 1:31 PM, Jan Beulich wrote:
>> On 15.07.2025 16:47, Oleksii Kurochko wrote:
>>> On 7/1/25 5:08 PM, Jan Beulich wrote:
>>>> On 10.06.2025 15:05, Oleksii Kurochko wrote:
>>>>> --- a/xen/arch/riscv/p2m.c
>>>>> +++ b/xen/arch/riscv/p2m.c
>>>>> @@ -345,6 +345,26 @@ static pte_t *p2m_get_root_pointer(struct p2m_domain *p2m, gfn_t gfn)
>>>>>      return __map_domain_page(p2m->root + root_table_indx);
>>>>>  }
>>>>>
>>>>> +static int p2m_type_radix_set(struct p2m_domain *p2m, pte_t pte, p2m_type_t t)
>>>> See comments on the earlier patch regarding naming.
>>>>
>>>>> +{
>>>>> +    int rc;
>>>>> +    gfn_t gfn = mfn_to_gfn(p2m->domain, mfn_from_pte(pte));
>>>> How does this work, when you record GFNs only for Xenheap pages?
>
>
>>> I think I don't understand what the issue is. Could you please provide
>>> some extra details?
>> Counter question: The mfn_to_gfn() you currently have is only a stub. It only
>> works for 1:1 mapped domains. Can you show me the eventual final
>> implementation of the function, making it possible to use it here?
>
> At the moment, I planned to support only 1:1 mapped domains, so this is the
> final implementation.
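> For reference, the stub follows the Arm pattern for direct-mapped domains
> (a sketch only; the helper names are taken from Arm's implementation):
>
>     static inline gfn_t mfn_to_gfn(const struct domain *d, mfn_t mfn)
>     {
>         /* Valid only while every domain is mapped 1:1 (gfn == mfn). */
>         ASSERT(is_domain_direct_mapped(d));
>         return _gfn(mfn_x(mfn));
>     }
>
> Anything beyond direct-mapped domains would need a reverse (MFN -> GFN)
> lookup, which indeed isn't available here.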
Isn't that an overly severe limitation?
>>>> In this context (not sure if I asked before): With this use of a radix tree,
>>>> how do you intend to bound the amount of memory that a domain can use, by
>>>> making Xen insert very many entries?
>>> I didn’t think about that. I assumed it would be enough to set the amount of
>>> memory a guest domain can use by specifying xen,domain-p2m-mem-mb in the DTS,
>>> or using some predefined value if xen,domain-p2m-mem-mb isn’t explicitly set.
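>>> For example, in a dom0less device tree the pool could be sized per domain
>>> (illustrative value only):
>>>
>>>     domU1 {
>>>         compatible = "xen,domain";
>>>         ...
>>>         xen,domain-p2m-mem-mb = <16>; /* 16 MiB P2M pool */
>>>     };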
>> Which would require these allocations to come from that pool.
>
> Yes, and that is true only for non-hardware domains with the current
> implementation.
???
>>> Also, it seems this would just lead to the issue you mentioned earlier: when
>>> the memory runs out, domain_crash() will be called or the PTE will be zapped.
>> Or one domain exhausting memory would cause another domain to fail. A domain
>> impacting just itself may be tolerable. But a domain affecting other domains
>> isn't.
>
> But it seems like this issue could happen in any implementation. It can only
> be avoided if every domain type (hardware, control, guest) has a pre-populated
> pool, without the ability to extend it or allocate extra pages from the
> domheap at runtime. Otherwise, if extra page allocation is allowed, we can't
> really do anything about this issue.
But that's why I brought this up: You simply have to. Or, as indicated, the
moment you mark Xen security-supported on RISC-V, there will be an XSA needed.
This is the kind of thing you need to consider up front. Or at least mark with
a prominent FIXME annotation. All of which would need resolving before even
considering marking the code as supported.
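To give an idea of the pattern I mean (a sketch only; the field names
paging.lock and paging.p2m_freelist are borrowed from the Arm side):

    static struct page_info *p2m_alloc_page(struct domain *d)
    {
        struct page_info *pg;

        spin_lock(&d->arch.paging.lock);
        pg = page_list_remove_head(&d->arch.paging.p2m_freelist);
        spin_unlock(&d->arch.paging.lock);

        /*
         * NULL means the domain's pre-allocated P2M pool is exhausted;
         * the caller then fails the mapping with -ENOMEM instead of
         * calling domain_crash() or touching the global heap.
         */
        return pg;
    }

With the radix-tree insertions also paid for out of that pool, one domain
can no longer exhaust memory on behalf of another.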
Jan