[Xen-devel] [PATCH] linux-2.6.18/balloon: don't crash in HVM-with-PoD guests
989:a7781c0a3b9a ("xen/balloon: fix balloon driver accounting for
HVM-with-PoD case") was almost entirely broken: the BUG_ON() there
triggers as soon as there is any meaningful amount of excess memory.
Re-implement the logic, assuming that XENMEM_get_pod_target will at
some point be allowed for a domain to query on itself. Basing the
calculation on just num_physpages results in significantly too much
memory getting ballooned out when there is memory beyond the 4G
boundary. Using what recent upstream's get_num_physpages() returns is
not an alternative either, because that value is too small (even if
not as small as totalram_pages), resulting in not enough pages getting
ballooned out.
Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
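
For context: the hunk below consumes "rc" and "pod_target", which are
set up earlier in balloon_init() and not visible in this excerpt. A
minimal sketch of what that query looks like, assuming the struct and
hypercall definitions from Xen's public/memory.h; the exact names and
placement in the full patch may differ:

	/* Illustrative only: ask Xen for this domain's PoD state.
	 * Until the hypervisor allows a domain to query itself, this
	 * fails, hence the num_physpages fallback in the hunk below. */
	struct xen_pod_target pod_target = { .domid = DOMID_SELF };
	int rc = HYPERVISOR_memory_op(XENMEM_get_pod_target, &pod_target);

	/* On success, the pages actually backing the guest are the
	 * total allocation plus outstanding PoD entries, minus pages
	 * parked in the PoD cache (see the first "+" line below). */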
--- a/drivers/xen/balloon/balloon.c
+++ b/drivers/xen/balloon/balloon.c
@@ -537,17 +537,11 @@ static int __init balloon_init(void)
 	 * extent of 1. When start_extent > nr_extents (>= in newer Xen), we
 	 * simply get start_extent returned.
 	 */
-	totalram_bias = HYPERVISOR_memory_op(rc != -ENOSYS && rc != 1
-		? XENMEM_maximum_reservation : XENMEM_current_reservation,
-		&pod_target.domid);
-	if ((long)totalram_bias != -ENOSYS) {
-		BUG_ON(totalram_bias < totalram_pages);
-		bs.current_pages = totalram_bias;
-		totalram_bias -= totalram_pages;
-	} else {
-		totalram_bias = 0;
-		bs.current_pages = totalram_pages;
-	}
+	bs.current_pages = pod_target.tot_pages + pod_target.pod_entries
+			 - pod_target.pod_cache_pages;
+	if (rc || bs.current_pages > num_physpages)
+		bs.current_pages = num_physpages;
+	totalram_bias = bs.current_pages - totalram_pages;
 #endif
 	bs.target_pages = bs.current_pages;
 	bs.balloon_low = 0;
Attachment:
xenlinux-balloon-HVM-PoD.patch