[Xen-devel] [PATCH v4 0/9] libxc: support building large pv-domains
The Xen hypervisor supports starting a dom0 with large memory (up to
the TB range) by not including the initrd and p2m list in the initial
kernel mapping. The p2m list in particular can grow larger than the
virtual space available in the initial mapping.

The kernel being started indicates its support of each feature via
ELF notes.
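To put a rough number on the first point (a back-of-the-envelope
sketch, not part of the series): a 64 bit pv-domU needs one 8-byte
p2m entry per 4kB page, so the 900GB test case below already implies
a p2m list of about 1.8GB:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        /* Rough p2m sizing for a 64 bit pv-domU: one 8-byte
         * machine-frame entry per 4kB guest page. */
        uint64_t domain_size = 900ULL << 30;      /* 900 GB */
        uint64_t pages       = domain_size >> 12; /* 4kB pages */
        uint64_t p2m_bytes   = pages * sizeof(uint64_t);

        /* Prints "p2m list: 1800 MB" -- too large to squeeze into
         * the virtual space of the initial mapping alongside the
         * kernel, initrd and page tables. */
        printf("p2m list: %llu MB\n",
               (unsigned long long)(p2m_bytes >> 20));
        return 0;
    }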
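The two relevant note numbers are XEN_ELFNOTE_MOD_START_PFN (unmapped
initrd) and XEN_ELFNOTE_INIT_P2M (p2m list outside the initial
mapping), both defined in xen/include/public/elfnote.h. Linux emits
them from assembly in its Xen startup code; purely as an
illustration, an equivalent hand-rolled note written in C could look
like the sketch below (the struct is just the generic ELF note
layout, not an existing API):

    #include <stdint.h>

    /* From xen/include/public/elfnote.h. */
    #define XEN_ELFNOTE_MOD_START_PFN 16

    /* Generic ELF note layout with a 4-byte descriptor. */
    struct xen_elfnote {
        uint32_t namesz;   /* length of name, including the NUL */
        uint32_t descsz;   /* length of the descriptor */
        uint32_t type;     /* the Xen ELF note number */
        char     name[4];  /* "Xen", NUL-padded to 4 bytes */
        uint32_t desc;     /* note payload */
    };

    /* Advertise that the initrd may be left out of the initial
     * kernel mapping (the payload is a boolean). */
    static const struct xen_elfnote mod_start_pfn_note
        __attribute__((used, section(".note.Xen"), aligned(4))) = {
        .namesz = 4,
        .descsz = 4,
        .type   = XEN_ELFNOTE_MOD_START_PFN,
        .name   = "Xen",
        .desc   = 1,
    };

On the consuming side, libelf parses these notes into elf_dom_parms;
patch 2 of the series adds the flag for the unmapped initrd there, so
the domain builder can key its placement decisions off the parsed
flags.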
This series enables the domain builder in libxc to do the same as the
hypervisor, which makes it possible to start huge pv-domUs via xl.

An unmapped initrd is supported for both 64 and 32 bit domains;
omitting the p2m list from the initial kernel mapping is possible for
64 bit domains only.

Tested with:
- 32 bit domU (kernel not supporting unmapped initrd)
- 32 bit domU (kernel supporting unmapped initrd)
- 1 GB 64 bit domU (kernel supporting unmapped initrd, not p2m)
- 1 GB 64 bit domU (kernel supporting unmapped initrd and p2m)
- 900 GB 64 bit domU (kernel supporting unmapped initrd and p2m)
- HVM domU

Changes in v4:
- updated patch 1 as suggested by Wei Liu (comment and variable name)
- modified the comment in patch 6 as suggested by Wei Liu
- reworked patch 8, reducing its line count by nearly 100
- added some additional plausibility checks to patch 8 as suggested
  by Wei Liu
- renamed round_pg() to round_pg_up() in patch 9 as suggested by
  Wei Liu

Changes in v3:
- rebased the complete series to new staging (hvm builder patches by
  Roger Pau Monne)
- removed old patch 1 as it broke the stubdom build
- introduced new patch 1 to make allocation of guest memory clearer
  regarding virtual/physical memory allocation (requested by Ian
  Campbell)
- changed the name of the flag indicating support of an unmapped
  initrd in patch 2 (requested by Ian Campbell)
- introduced new patches 3, 4 and 5 ("rename domain builder
  count_pgtables to alloc_pgtables", "introduce domain builder
  architecture specific data", "use domain builder architecture
  private data for x86 pv domains") to assist the later page table
  work
- don't fiddle with the initrd virtual address in patch 6 (was patch
  3 in v2); add explicit initrd parameters for start_info in struct
  xc_dom_image instead (requested by Ian Campbell)
- introduced new patch 8 ("rework of domain builder's page table
  handler") to be able to use common helpers for the unmapped p2m
  list (requested by Ian Campbell)
- use the now known segment size in pages for the p2m list in patch 9
  (was patch 5 in v2) instead of fiddling with the segment end
  address (requested by Ian Campbell)
- split alloc_p2m_list() in patch 9 (was patch 5 in v2) into 32/64
  bit variants (requested by Ian Campbell)

Changes in v2:
- patch 2 has been removed as it had already been applied
- introduced new patch 2 as suggested by Ian Campbell: add a flag
  indicating support of an unmapped initrd to the parsed ELF data of
  the elf_dom_parms structure
- updated the description of patch 3 as requested by Ian Campbell

Juergen Gross (9):
  libxc: reorganize domain builder guest memory allocator
  xen: add generic flag to elf_dom_parms indicating support of
    unmapped initrd
  libxc: rename domain builder count_pgtables to alloc_pgtables
  libxc: introduce domain builder architecture specific data
  libxc: use domain builder architecture private data for x86 pv
    domains
  libxc: create unmapped initrd in domain builder if supported
  libxc: split p2m allocation in domain builder from other magic
    pages
  libxc: rework of domain builder's page table handler
  libxc: create p2m list outside of kernel mapping if supported

 stubdom/grub/kexec.c               |  12 +-
 tools/libxc/include/xc_dom.h       |  34 +--
 tools/libxc/xc_dom_arm.c           |   6 +-
 tools/libxc/xc_dom_core.c          | 180 ++++++++----
 tools/libxc/xc_dom_x86.c           | 563 +++++++++++++++++++++----------------
 tools/libxc/xg_private.h           |  39 +--
 xen/arch/x86/domain_build.c        |   4 +-
 xen/common/libelf/libelf-dominfo.c |   3 +
 xen/include/xen/libelf.h           |   1 +
 9 files changed, 490 insertions(+), 352 deletions(-)

-- 
2.1.4

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel