Re: [PATCH v2 13/17] xen/riscv: Implement p2m_entry_from_mfn() and support PBMT configuration
On 7/16/25 1:31 PM, Jan Beulich wrote:
> On 15.07.2025 16:47, Oleksii Kurochko wrote:
>> On 7/1/25 5:08 PM, Jan Beulich wrote:
>>> On 10.06.2025 15:05, Oleksii Kurochko wrote:
>>>> --- a/xen/arch/riscv/p2m.c
>>>> +++ b/xen/arch/riscv/p2m.c
>>>> @@ -345,6 +345,26 @@ static pte_t *p2m_get_root_pointer(struct p2m_domain *p2m, gfn_t gfn)
>>>>      return __map_domain_page(p2m->root + root_table_indx);
>>>>  }
>>>>
>>>> +static int p2m_type_radix_set(struct p2m_domain *p2m, pte_t pte, p2m_type_t t)
>>>
>>> See comments on the earlier patch regarding naming.
>>>
>>>> +{
>>>> +    int rc;
>>>> +    gfn_t gfn = mfn_to_gfn(p2m->domain, mfn_from_pte(pte));
>>>
>>> How does this work, when you record GFNs only for Xenheap pages?
>>
>> I think I don't understand what the issue is. Could you please provide
>> some extra details?
>
> Counter question: The mfn_to_gfn() you currently have is only a stub. It
> only works for 1:1 mapped domains. Can you show me the eventual final
> implementation of the function, making it possible to use it here?

At the moment, I planned to support only 1:1 mapped domains, so that is the
final implementation. I think I understand your initial question now: since
we currently have only Xenheap pages, and GFNs are stored for such pages,
it is easy to recover the GFN for an MFN, and hence easy to implement
mfn_to_gfn() for Xenheap pages.

> Having such stubs, not even annotated in any way, is imo a problem:
> people may think they're fine to use when really they aren't.

Then it would be more correct to pass the GFN through an argument, as you
suggested earlier (and I've already added such an argument). My initial
suggestion, that it is up to the implementation of mfn_to_gfn() to support
any type of page, was incorrect.

>>>> +static pte_t p2m_entry_from_mfn(struct p2m_domain *p2m, mfn_t mfn, p2m_type_t t, p2m_access_t a)
>>>> +{
>>>> +    pte_t e = (pte_t) { 1 };
>>>
>>> What's the 1 doing here?
>>
>> Set the valid bit of the PTE to 1.
>
> But something like this isn't to be done using a plain, unannotated
> literal number. Aiui you mean PTE_VALID here.

Yes, I will use PTE_VALID instead.

>>>> +    switch ( t )
>>>> +    {
>>>> +    case p2m_mmio_direct_dev:
>>>> +        e.pte |= PTE_PBMT_IO;
>>>> +        break;
>>>> +
>>>> +    default:
>>>> +        break;
>>>> +    }
>>>> +
>>>> +    p2m_set_permission(&e, t, a);
>>>> +
>>>> +    ASSERT(!(mfn_to_maddr(mfn) & ~PADDR_MASK));
>>>> +
>>>> +    pte_set_mfn(&e, mfn);
>>>
>>> Based on how things work on x86 (and how I would have expected them to
>>> also work on Arm), may I suggest that you set the MFN ahead of the
>>> permissions, so that the permissions setting function can use the MFN
>>> for e.g. a lookup in mmio_ro_ranges.
>>
>> Sure, just a note that on Arm, the MFN is set last.
>
> That's apparently because they (still) don't have mmio_ro_ranges.
> That's only a latent issue (I hope) while they still don't have PCI
> support.
>
>>>> +    BUG_ON(p2m_type_radix_set(p2m, e, t));
>>>
>>> I'm not convinced of this error handling here either. Radix tree
>>> insertion _can_ fail, e.g. when there's no memory left. This must not
>>> bring down Xen, or we'll have an XSA right away. You could zap the
>>> PTE, or if need be you could crash the offending domain.
>>
>> IIUC what "zap the PTE" means, then I would do it this way:
>>     if ( p2m_set_type(p2m, e, t) )
>>         e.pte = 0;
>> But then it will lead to an MMU failure; how is that expected to be
>> handled? There's no guarantee that, at the moment of handling this
>> exception, enough memory will be available to set a type for the PTE,
>> and it also isn't clear how the exception handler would detect that it
>> only needs to retry setting a type. Or should we just call
>> domain_crash()? In that case, it seems more reasonable to call
>> domain_crash() immediately in p2m_pte_from_mfn().
>
> As said - crashing the domain in such an event is an option. The
> question here is whether to do so right away, or whether to defer that
> in the hope that the PTE may not actually be accessed (before being
> rewritten).
>
>>> In this context (not sure if I asked before): With this use of a radix
>>> tree, how do you intend to bound the amount of memory that a domain
>>> can use, by making Xen insert very many entries?
>>
>> I didn't think about that. I assumed it would be enough to set the
>> amount of memory a guest domain can use by specifying
>> xen,domain-p2m-mem-mb in the DTS, or using some predefined value if
>> xen,domain-p2m-mem-mb isn't explicitly set.
>
> Which would require these allocations to come from that pool.

Yes, and with the current implementation that is true only for
non-hardware domains.

>> Also, it seems this would just lead to the issue you mentioned earlier:
>> when the memory runs out, domain_crash() will be called or the PTE will
>> be zapped.
>
> Or one domain exhausting memory would cause another domain to fail. A
> domain impacting just itself may be tolerable.
> But a domain affecting other domains isn't.

But it seems this issue could happen in any implementation. It can be
avoided only if every domain type (hardware, control, guest) uses nothing
but a pre-populated pool, with no ability to extend it or to allocate
extra pages from the domheap at runtime. Otherwise, if extra page
allocation is allowed, we can't really do anything about this issue.

~ Oleksii