Re: [PATCH v4 0/5] DOMCTL-based guest magic region allocation for 1:1 domUs
On Thu, 18 Apr 2024, Daniel P. Smith wrote:
> On 4/9/24 00:53, Henry Wang wrote:
> > An error message can be seen from the init-dom0less application on
> > direct-mapped 1:1 domains:
> > ```
> > Allocating magic pages
> > memory.c:238:d0v0 mfn 0x39000 doesn't belong to d1
> > Error on alloc magic pages
> > ```
> >
> > This is because populate_physmap() automatically assumes gfn == mfn
> > for direct-mapped domains. This cannot be true for the magic pages
> > that are allocated later for 1:1 Dom0less DomUs from the init-dom0less
> > helper application executed in Dom0. For domains using statically
> > allocated memory but not 1:1 direct-mapped, a similar error, "failed to
> > retrieve a reserved page", can be seen, as the reserved memory list
> > is empty at that time.
> >
> > This series tries to fix this issue using a DOMCTL-based approach,
> > because for 1:1 direct-mapped domUs we need to avoid the RAM regions
> > and inform the toolstack about the region found by the hypervisor for
> > mapping the magic pages. Patch 1 introduces a new DOMCTL to get the
> > guest memory map, currently only used for the magic page regions.
> > Patch 2 generalizes the extended-region-finding logic so that it can
> > be reused for other use cases, such as finding 1:1 domU magic regions.
> > Patch 3 uses the same approach as finding the extended regions to find
> > the guest magic page regions for direct-mapped DomUs. Patch 4 avoids
> > hardcoding all base addresses of the guest magic region in the
> > init-dom0less application by consuming the newly introduced DOMCTL.
> > Patch 5 is a simple patch to clean up some code duplication in xc.
>
> Hey Henry,
>
> To help provide some perspective, these issues are not experienced with
> hyperlaunch. This is because we understood early on that you cannot move a
> lightweight version of the toolstack into hypervisor init without providing
> a mechanism to communicate what it did to the runtime control plane.
> We evaluated the possible mechanisms, including introducing a new hypercall
> op, and ultimately settled on using hypfs. The primary reason is that this
> information is static data that, while informative later, is only necessary
> for the control plane to understand the state of the system. As a result,
> hyperlaunch is able to allocate any and all special pages required as part
> of domain construction and communicate their addresses to the control
> plane. As for XSM, hypfs is already protected, and at this time we do not
> see any domain-builder information needing to be restricted separately from
> the data already present in hypfs.
>
> I would like to suggest that instead of continuing down this path, you
> might consider adopting the hyperlaunch usage of hypfs, and then adjust
> dom0less domain construction to allocate the special pages at construction
> time. The original hyperlaunch series includes a patch that provides the
> helper app for the xenstore announcement, and I can provide you with
> updated versions if that would be helpful.

I also think that the new DOMCTL is not needed and that the dom0less domain
builder should allocate the magic pages. On ARM, we already allocate
HVM_PARAM_CALLBACK_IRQ during the dom0less domain build and set
HVM_PARAM_STORE_PFN to ~0ULL. I think it would only be natural to extend that
code to also allocate the magic pages and set HVM_PARAM_STORE_PFN (and the
other params) correctly. Done that way, it is simpler and consistent with the
HVM_PARAM_CALLBACK_IRQ allocation, and we don't even need hypfs. Currently we
do not enable hypfs in our safety-certifiability configuration.