Re: [PATCH v2 1/4] xen/arm: Alloc hypervisor reserved pages as magic pages for Dom0less DomUs
Hi Julien,

On 5/11/2024 4:46 PM, Julien Grall wrote:
> Hi Henry,
>
> On 11/05/2024 01:56, Henry Wang wrote:
>> There are use cases (for example using the PV driver) in Dom0less setups
>> that require Dom0less DomUs to start immediately with Dom0, but to
>> initialize XenStore later, after Dom0 has successfully booted and called
>> the init-dom0less application.
>>
>> An error message can be seen from the init-dom0less application on
>> 1:1 direct-mapped domains:
>> ```
>> Allocating magic pages
>> memory.c:238:d0v0 mfn 0x39000 doesn't belong to d1
>> Error on alloc magic pages
>> ```
>>
>> "Magic pages" is the terminology used in the toolstack for the reserved
>> pages that give the VM access to virtual platform capabilities. Currently
>> the magic pages for Dom0less DomUs are populated by the init-dom0less app
>> through populate_physmap(), and populate_physmap() automatically assumes
>> gfn == mfn for 1:1 direct-mapped domains. This cannot be true for the
>> magic pages that are allocated later from the init-dom0less application
>> executed in Dom0.
>>
>> For domains using statically allocated memory but not 1:1 direct-mapped,
>> a similar error "failed to retrieve a reserved page" can be seen, as the
>> reserved memory list is empty at that time.
>>
>> To solve the above issue, this commit allocates the hypervisor reserved
>> pages (currently used as the magic pages) for Arm Dom0less DomUs at
>> domain construction time. The base address/PFN of the region will be
>> noted and communicated to the init-dom0less application in Dom0.
>>
>> Reported-by: Alec Kwapis <alec.kwapis@xxxxxxxxxxxxx>
>> Suggested-by: Daniel P. Smith <dpsmith@xxxxxxxxxxxxxxxxxxxx>
>> Signed-off-by: Henry Wang <xin.wang2@xxxxxxx>
>> ---
>> v2:
>> - Reword the commit msg to explain what a "magic page" is and use the
>>   generic terminology "hypervisor reserved pages" in the commit msg. (Daniel)
>> - Also move the offset definitions of the magic pages. (Michal)
>> - Extract the magic page allocation logic to a function. (Michal)
>> ---
>>  tools/libs/guest/xg_dom_arm.c |  6 ------
>>  xen/arch/arm/dom0less-build.c | 32 ++++++++++++++++++++++++++++++++
>>  xen/include/public/arch-arm.h |  6 ++++++
>>  3 files changed, 38 insertions(+), 6 deletions(-)
>>
>> diff --git a/tools/libs/guest/xg_dom_arm.c b/tools/libs/guest/xg_dom_arm.c
>> index 2fd8ee7ad4..8c579d7576 100644
>> --- a/tools/libs/guest/xg_dom_arm.c
>> +++ b/tools/libs/guest/xg_dom_arm.c
>> @@ -25,12 +25,6 @@
>>
>>  #include "xg_private.h"
>>
>> -#define NR_MAGIC_PAGES 4
>> -#define CONSOLE_PFN_OFFSET 0
>> -#define XENSTORE_PFN_OFFSET 1
>> -#define MEMACCESS_PFN_OFFSET 2
>> -#define VUART_PFN_OFFSET 3
>> -
>>  #define LPAE_SHIFT 9
>>
>>  #define PFN_4K_SHIFT  (0)
>>
>> diff --git a/xen/arch/arm/dom0less-build.c b/xen/arch/arm/dom0less-build.c
>> index 74f053c242..4b96ddd9ce 100644
>> --- a/xen/arch/arm/dom0less-build.c
>> +++ b/xen/arch/arm/dom0less-build.c
>> @@ -739,6 +739,34 @@ static int __init alloc_xenstore_evtchn(struct domain *d)
>>      return 0;
>>  }
>>
>> +static int __init alloc_magic_pages(struct domain *d)
>> +{
>> +    struct page_info *magic_pg;
>> +    mfn_t mfn;
>> +    gfn_t gfn;
>> +    int rc;
>> +
>> +    d->max_pages += NR_MAGIC_PAGES;
>
> Here you bump d->max_pages by NR_MAGIC_PAGES but...
>
>> +    magic_pg = alloc_domheap_pages(d, get_order_from_pages(NR_MAGIC_PAGES), 0);
>
> ... here you will allocate using a power-of-two order, which may end up
> failing as there is nothing guaranteeing that NR_MAGIC_PAGES is suitably
> aligned. For now NR_MAGIC_PAGES seems suitably aligned, so a
> BUILD_BUG_ON() would be ok.

Great catch! I will add:

BUILD_BUG_ON(NR_MAGIC_PAGES & (NR_MAGIC_PAGES - 1));

Thanks.
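For illustration, here is a minimal sketch of where such a check could sit in alloc_magic_pages(); the placement and the explanatory comment are assumptions on my part, not part of the posted patch:

```
/*
 * Sketch only (assumed placement): the proposed BUILD_BUG_ON() near the
 * top of alloc_magic_pages(); the rest of the function follows the patch.
 */
static int __init alloc_magic_pages(struct domain *d)
{
    struct page_info *magic_pg;

    /*
     * alloc_domheap_pages() hands out 2^order pages, while d->max_pages
     * is only bumped by NR_MAGIC_PAGES.  A non-power-of-two value would
     * make the rounded-up allocation exceed the limit, so fail the build
     * instead of failing at run time.
     */
    BUILD_BUG_ON(NR_MAGIC_PAGES & (NR_MAGIC_PAGES - 1));

    d->max_pages += NR_MAGIC_PAGES;
    magic_pg = alloc_domheap_pages(d, get_order_from_pages(NR_MAGIC_PAGES), 0);
    if ( magic_pg == NULL )
        return -ENOMEM;

    /* ... mapping into the guest physmap continues as in the patch ... */
    return 0;
}
```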
>> +    if ( magic_pg == NULL )
>> +        return -ENOMEM;
>> +
>> +    mfn = page_to_mfn(magic_pg);
>> +    if ( !is_domain_direct_mapped(d) )
>> +        gfn = gaddr_to_gfn(GUEST_MAGIC_BASE);
>> +    else
>> +        gfn = gaddr_to_gfn(mfn_to_maddr(mfn));
>
> Allocating the magic pages contiguously is only necessary for
> direct-mapped domains. For the others it might be preferable to allocate
> page by page. That said, NR_MAGIC_PAGES is not big enough to matter, so
> it would be okay.
>
>> +
>> +    rc = guest_physmap_add_pages(d, gfn, mfn, NR_MAGIC_PAGES);
>> +    if ( rc )
>> +    {
>> +        free_domheap_pages(magic_pg, get_order_from_pages(NR_MAGIC_PAGES));
>> +        return rc;
>> +    }
>> +
>> +    return 0;
>> +}
>> +
>>  static int __init construct_domU(struct domain *d,
>>                                   const struct dt_device_node *node)
>>  {
>> @@ -840,6 +868,10 @@ static int __init construct_domU(struct domain *d,
>>          if ( rc < 0 )
>>              return rc;
>>          d->arch.hvm.params[HVM_PARAM_STORE_PFN] = ~0ULL;
>> +
>> +        rc = alloc_magic_pages(d);
>> +        if ( rc < 0 )
>> +            return rc;
>
> This will only be allocated if xenstore is enabled. But I don't think
> some of the magic pages really require xenstore to work. In the future
> we may need some more fine-grained choice (see my comment in patch #2 as
> well).

Sorry, but it seems that by the time I am writing this reply, I haven't
received the email with the patch #2 comment. I will reply to both
together when I see it.

>>      }
>>
>>      return rc;
>> diff --git a/xen/include/public/arch-arm.h b/xen/include/public/arch-arm.h
>> index 289af81bd6..186520d01f 100644
>> --- a/xen/include/public/arch-arm.h
>> +++ b/xen/include/public/arch-arm.h
>> @@ -476,6 +476,12 @@ typedef uint64_t xen_callback_t;
>>  #define GUEST_MAGIC_BASE  xen_mk_ullong(0x39000000)
>>  #define GUEST_MAGIC_SIZE  xen_mk_ullong(0x01000000)
>>
>> +#define NR_MAGIC_PAGES 4
>
> This name and the others below are far too generic to be part of the
> shared header. For NR_MAGIC_PAGES, can you explain why GUEST_MAGIC_SIZE
> cannot be used? Is it because it is "too" big?

Yes, I don't really want to allocate the whole 16MB when we merely need 4
pages.

> For the rest below...
>
>> +#define CONSOLE_PFN_OFFSET 0
>> +#define XENSTORE_PFN_OFFSET 1
>> +#define MEMACCESS_PFN_OFFSET 2
>> +#define VUART_PFN_OFFSET 3
>
> ... the order in which we allocate the magic pages is purely an internal
> decision of the toolstack. The rest of the system needs to access
> HVM_PARAM_* instead. So it doesn't feel like they should be part of the
> shared headers.
>
> If there is a strong reason to include them, then I think they should
> all be prefixed with GUEST_MAGIC_*.

One of the benefits, as Michal pointed out in the comments for v1 [1],
would be that this would also allow init-dom0less.h not to re-define
XENSTORE_PFN_OFFSET. I will rename them as suggested if both of you are
ok with moving them to the arch header.

[1] https://lore.kernel.org/xen-devel/5d3ead96-5079-4fa3-b5fd-4d9803c251b6@xxxxxxx/

Kind regards,
Henry
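As a rough sketch of the rename being discussed, the shared header entries could end up looking like the block below; the exact spellings are only illustrative guesses and nothing here was agreed in the thread:

```
/*
 * Illustrative only: hypothetical GUEST_MAGIC_*-prefixed names for
 * xen/include/public/arch-arm.h.  The actual names would be settled in a
 * later revision of the series.
 */
#define GUEST_MAGIC_BASE                   xen_mk_ullong(0x39000000)
#define GUEST_MAGIC_SIZE                   xen_mk_ullong(0x01000000)

#define GUEST_MAGIC_NR_PAGES               4
#define GUEST_MAGIC_CONSOLE_PFN_OFFSET     0
#define GUEST_MAGIC_XENSTORE_PFN_OFFSET    1
#define GUEST_MAGIC_MEMACCESS_PFN_OFFSET   2
#define GUEST_MAGIC_VUART_PFN_OFFSET       3
```

With the offsets exported under a single prefix, init-dom0less could include the shared header and drop its local XENSTORE_PFN_OFFSET definition, which is the benefit Michal pointed out in the v1 review.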