Re: Xen/ARM API issue (page size)
Hi,

I will answer the two e-mails at the same time as my answer will be
similar :).

On 08/07/2021 02:05, Andrew Cooper wrote:
> On 08/07/2021 01:32, Elliott Mitchell wrote:
>> Hopefully I'm not about to show the limits of my knowledge...
>>
>> Quite a few values passed to Xen via hypercalls include a page
>> number. This makes sense as that maps to the hardware. Problem is,
>> I cannot help but notice aarch64 allows for 4KB, 16KB and 64KB pages.
>
> Yes - page size is a known error through the ABI, seeing as Xen
> started on x86 and 4k was the only size considered at the time.
>
> 32bit frame numbers were all the rage between the Pentium 2 (1997)
> and the advent of 64bit systems (~2006), because they let you
> efficiently reference up to 16T of physical memory, rather than
> being limited to 4G if you used byte addresses instead.
>
> It will be addressed in the ABIv2 design (if I ever get enough time
> to write everything down and make a start).

IIRC, ABIv2 will only focus on the interface between the hypervisor
and the guests. However, I think we will also need to update the PV
protocol so two domains agree on the page granularity used.

>> I don't know how flexible aarch64 is. I don't know whether an
>> aarch64 core can support multiple page sizes. My tentative reading
>> of the information seemed to suggest a typical aarch64 core /could/
>> allow multiple page sizes.

The Arm architecture allows the hypervisor and the kernel to choose
their own granularity. IOW, a kernel may use 4KB while the hypervisor
uses 64KB.

Most arm64 cores support all the page granularities. That said, this
is not a requirement from the Arm Arm, so it may be possible to have
cores only supporting a subset of the page granularities.

>> What happens if a system (and Xen) is set up to support 64KB pages,
>> but a particular domain has been built strictly with 4KB page
>> support?

If the processor only supports 64KB, then you would not be able to
boot a 4KB kernel there.

Today the hypercall ABI uses the same page granularity as the
hypervisor. IOW, the domain would need to break its pages into 4KB
chunks to talk to the hypervisor.
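To make that concrete, the guest side ends up doing something along
these lines (a rough sketch only; issue_hypercall_on_frame() stands in
for whichever per-frame hypercall is being made, and the sizes are
hard-coded for the example):

    #include <stdint.h>

    #define XEN_PAGE_SHIFT   12                     /* ABI frames are 4KB */
    #define XEN_PAGE_SIZE    (1UL << XEN_PAGE_SHIFT)
    #define GUEST_PAGE_SIZE  (64UL * 1024)          /* e.g. a 64KB kernel */

    /* Hypothetical wrapper around a real hypercall taking one 4KB frame. */
    extern int issue_hypercall_on_frame(uint64_t gfn);

    /* Apply a per-frame hypercall to every 4KB chunk of one guest page. */
    static int hypercall_on_guest_page(uint64_t guest_page_addr)
    {
        uint64_t addr;

        for (addr = guest_page_addr;
             addr < guest_page_addr + GUEST_PAGE_SIZE;
             addr += XEN_PAGE_SIZE) {
            int rc = issue_hypercall_on_frame(addr >> XEN_PAGE_SHIFT);

            if (rc)
                return rc; /* NB: earlier chunks are not rolled back. */
        }

        return 0;
    }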
>> What if a particular domain wanted to use 64KB pages (4KB being too
>> granular), but Xen was set to use 4KB pages?

FWIW, this is how Linux with 64KB/16KB page granularity is able to
run on current Xen.

>> What if a system had two domains which were set for different page
>> sizes, but the two needed to interact?

They would need to agree on the page granularity used. At the moment,
this is implicitly fixed to 4KB.

> I'm afraid I'll have to defer to the arm folk to answer this, but my
> understanding is that it should be possible to support guests
> compiled with, and using, different page sizes (given a suitable
> ABI).
>
>> Then you have things like VCPUOP_register_vcpu_info. The structure
>> is set up as mfn and offset. With the /actual/ page size being used
>> there, it is troublesome. Several places might work better if pure
>> 64-bit addresses were used, but with alignment requirements
>> specified.
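For reference, the structure in question looks like this in Xen's
public headers (quoting from memory, so check
xen/include/public/vcpu.h for the authoritative version):

    struct vcpu_register_vcpu_info {
        uint64_t mfn;    /* mfn of page to place vcpu_info */
        uint32_t offset; /* offset within page */
        uint32_t rsvd;   /* unused */
    };

Note the mfn field: it is a frame number, so its meaning silently
depends on the page granularity in use.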
> The way to fix size problems is to mandate that all addresses in the
> ABI are full byte addresses, not frame numbers. When alignment is
> required, and it frequently is, it is fine to use the lower bits for
> metadata.
>
> Critically, what this means is that you don't need separate API/ABIs
> based on page size. e.g. "please balloon out this page" operates "on
> the alignment the guest is using", rather than needing separate ops
> for 4k/2M/1G (to list the x86 page sizes only).
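As an illustration of that encoding (a sketch only, not an existing or
proposed Xen ABI; all names are made up):

    #include <stdint.h>

    #define ADDR_ALIGN  4096ULL            /* required alignment      */
    #define META_MASK   (ADDR_ALIGN - 1)   /* low bits carry metadata */

    /* Pack an aligned byte address and small metadata into one value. */
    static uint64_t encode(uint64_t byte_addr, uint64_t meta)
    {
        /* byte_addr must be ADDR_ALIGN-aligned; meta must fit META_MASK. */
        return (byte_addr & ~META_MASK) | (meta & META_MASK);
    }

    static uint64_t decode_addr(uint64_t v) { return v & ~META_MASK; }
    static uint64_t decode_meta(uint64_t v) { return v & META_MASK; }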
I think the full address is not sufficient here. The stage-2
page-table (aka EPT on x86) is using the page granularity of the
hypervisor. So for anything that requires a change in the P2M, the
domain needs to make sure the address is aligned to the page
granularity of the hypervisor.
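In other words, a guest would still need a check along these lines
before any operation that modifies the P2M (again just a sketch;
today there is no interface to query Xen's granularity, it is
implicitly 4KB):

    #include <stdbool.h>
    #include <stdint.h>

    #define XEN_GRANULARITY  (4UL * 1024)  /* hypervisor page granularity */

    /* Base and size must both be multiples of the hypervisor's pages. */
    static bool p2m_op_address_ok(uint64_t addr, uint64_t size)
    {
        return !(addr & (XEN_GRANULARITY - 1)) &&
               !(size & (XEN_GRANULARITY - 1));
    }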
Cheers,

-- 
Julien Grall