[Xen-devel] [PATCH] Make ballooning work with maxmem > mem (i386 version)
Trying to start a guest with maxmem > mem and then balloon up to a value greater than mem currently fails. This has already been discovered (a patch was sent some days ago) for x86_64; i386 suffers from the same problem. This patch fixes it.

--
Glauber de Oliveira Costa
Red Hat Inc.
"Free as in Freedom"

# HG changeset patch
# User Glauber de Oliveira Costa <gcosta@xxxxxxxxxx>
# Node ID 2e2e6a3f9b2d0ad21c666dfbdc075f918e0bc7be
# Parent  20204db0891b0b7c10959822e3283656c3600500
[LINUX] Extend physical mapping to maxmem instead of mem - i386 version

Since the physical mapping currently only reaches the initial
reservation, we are unable to balloon up beyond mem (even when
maxmem > mem) in any situation. This has already been found to be
broken, and fixed, in x86_64.

Signed-off-by: Glauber de Oliveira Costa <gcosta@xxxxxxxxxx>

diff -r 20204db0891b -r 2e2e6a3f9b2d linux-2.6-xen-sparse/include/asm-i386/mach-xen/setup_arch_post.h
--- a/linux-2.6-xen-sparse/include/asm-i386/mach-xen/setup_arch_post.h	Thu Nov  2 18:52:04 2006
+++ b/linux-2.6-xen-sparse/include/asm-i386/mach-xen/setup_arch_post.h	Fri Nov 10 13:14:14 2006
@@ -13,6 +13,7 @@
 {
 	int rc;
 	struct xen_memory_map memmap;
+	unsigned long arg = DOMID_SELF;
 	/*
 	 * This is rather large for a stack variable but this early in
 	 * the boot process we know we have plenty slack space.
@@ -26,6 +27,11 @@
 	if ( rc == -ENOSYS ) {
 		memmap.nr_entries = 1;
 		map[0].addr = 0ULL;
+		rc = HYPERVISOR_memory_op(XENMEM_maximum_reservation, &arg);
+		if ( rc < 0 )
+			map[0].size = PFN_PHYS(xen_start_info->nr_pages);
+		else
+			map[0].size = PFN_PHYS(rc);
 		map[0].size = PFN_PHYS(xen_start_info->nr_pages);
 		/* 8MB slack (to balance backend allocations). */
 		map[0].size += 8ULL << 20;

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel