Re: [PATCH v1 3/6] xen/riscv: construct the P2M pages pool for guests
On 09.05.2025 17:57, Oleksii Kurochko wrote:
> Implement p2m_set_allocation() to construct the p2m pages pool for guests
> based on the required number of pages.
>
> This is implemented by:
> - Adding a `struct paging_domain` which contains a freelist, a
>   counter variable and a spinlock to `struct arch_domain` to
>   indicate the free p2m pages and the total number of p2m pages in
>   the p2m pages pool.
> - Adding a helper `p2m_set_allocation` to set the p2m pages pool
>   size. This helper should be called before allocating memory for
>   a guest and is called from domain_p2m_set_allocation(), which is
>   part of the common dom0less code.
>
> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@xxxxxxxxx>
As already indicated in reply to patch 2, I expect this pool will want
introducing ahead of that.
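For reference, going by the field names used further down
(d->arch.paging.lock, ->p2m_freelist, ->p2m_total_pages), I'd expect the new
structure to look roughly like the sketch below; the exact layout and field
comments are a guess, not quoted from the patch:

    struct paging_domain {
        spinlock_t lock;
        /* Free pages from the pool, to be used for P2M tables. */
        struct page_list_head p2m_freelist;
        /* Total number of pages in the P2M pool. */
        unsigned long p2m_total_pages;
    };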
> --- a/xen/arch/riscv/p2m.c
> +++ b/xen/arch/riscv/p2m.c
> @@ -1,4 +1,12 @@
> #include <xen/domain_page.h>
> +/*
> + * general_preempt_check() from xen/sched.h uses
> + * local_events_need_delivery(), but the latter is declared in <asm/event.h>.
> + * Hence <xen/event.h> needs to be included here before xen/sched.h.
> + *
> + * Shouldn't xen/event.h be included in <xen/sched.h>?
> + */
> +#include <xen/event.h>
The question doesn't belong here; such could be put in the post-commit-
message area. And the answer depends on what dependency you found missing.
> @@ -166,3 +176,60 @@ int p2m_init(struct domain *d)
>
>      return 0;
>  }
> +
> +/*
> + * Set the pool of pages to the required number of pages.
> + * Returns 0 for success, non-zero for failure.
> + * Call with d->arch.paging.lock held.
> + */
> +int p2m_set_allocation(struct domain *d, unsigned long pages, bool *preempted)
> +{
> +    struct page_info *pg;
> +
> +    ASSERT(spin_is_locked(&d->arch.paging.lock));
> +
> +    for ( ; ; )
> +    {
> +        if ( d->arch.paging.p2m_total_pages < pages )
> +        {
> +            /* Need to allocate more memory from domheap */
> +            pg = alloc_domheap_page(d, MEMF_no_owner);
> +            if ( pg == NULL )
> +            {
> +                printk(XENLOG_ERR "Failed to allocate P2M pages.\n");
> +                return -ENOMEM;
> +            }
> +            ACCESS_ONCE(d->arch.paging.p2m_total_pages) =
> +                d->arch.paging.p2m_total_pages + 1;
Looks like you copied this from Arm, but this code is bogus: Using
ACCESS_ONCE() just on the lhs is pretty pointless. Once also used on the
rhs the expression can easily become
ACCESS_ONCE(d->arch.paging.p2m_total_pages) += 1;
or even
ACCESS_ONCE(d->arch.paging.p2m_total_pages)++;
.
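(Spelled out in full, with ACCESS_ONCE() on both sides, the statement would read

    ACCESS_ONCE(d->arch.paging.p2m_total_pages) =
        ACCESS_ONCE(d->arch.paging.p2m_total_pages) + 1;

which then collapses naturally into either of the shorter forms above; the same
holds for the decrement in the freeing path further down.)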
> +            page_list_add_tail(pg, &d->arch.paging.p2m_freelist);
> +        }
> +        else if ( d->arch.paging.p2m_total_pages > pages )
> +        {
> +            /* Need to return memory to domheap */
> +            pg = page_list_remove_head(&d->arch.paging.p2m_freelist);
> +            if( pg )
> +            {
> +                ACCESS_ONCE(d->arch.paging.p2m_total_pages) =
> +                    d->arch.paging.p2m_total_pages - 1;
Same here then, obviously.
> +                free_domheap_page(pg);
> +            }
> +            else
> +            {
> +                printk(XENLOG_ERR
> +                       "Failed to free P2M pages, P2M freelist is empty.\n");
> +                return -ENOMEM;
> +            }
> +        }
> +        else
> +            break;
> +
> +        /* Check to see if we need to yield and try again */
> +        if ( preempted && general_preempt_check() )
While it's this way on both Arm and x86, I wonder how useful it is
to check on every iteration, especially when freeing pages back to the
buddy allocator.
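If the per-iteration cost ever turns out to matter, one purely illustrative
option would be to rate-limit the check, e.g. probing only once every 512
pages; the *preempted / -ERESTART handling below mirrors the Arm
implementation and is an assumption about the part of the hunk not quoted
above:

    unsigned long processed = 0;

    for ( ; ; )
    {
        /* ... grow or shrink the pool by one page, as in the hunk above ... */

        /* Check for preemption only once every 512 pages processed. */
        if ( preempted && !(++processed & 0x1ff) && general_preempt_check() )
        {
            *preempted = true;
            return -ERESTART;
        }
    }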
Jan