[Xen-devel] [PATCH] libxc: fix claim mode when creating HVM guest
The original code is wrong because:
* claim mode wants to know the total number of pages needed, while the
original code passes only the additional number of pages needed.
* if PoD is enabled, memory will already have been allocated by the
time we try to claim it.
So the fix is to:
* move the claim before the actual memory allocation.
* pass the right number of pages to the hypervisor.
The "right number of pages" is the number of pages of target memory
minus VGA_HOLE_SIZE, regardless of whether PoD is enabled (see the
sketch below).
This fixes bug #32.
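For illustration, a minimal sketch of the corrected ordering, condensed
from the diff below (error paths and the rest of setup_guest() omitted):

    /* Claim the whole allocation up front, before any memory is
     * allocated on behalf of the guest, so that failure is reported
     * early.  The claim covers target memory minus the VGA hole,
     * whether or not PoD is in use. */
    if ( claim_enabled )
    {
        rc = xc_domain_claim_pages(xch, dom, target_pages - VGA_HOLE_SIZE);
        if ( rc != 0 )
            goto error_out;
    }

    /* Only now may memory be allocated: xc_domain_set_pod_target()
     * already allocates on behalf of the guest, and the populate
     * loop below does so in the non-PoD case. */
    if ( pod_mode )
        rc = xc_domain_set_pod_target(xch, dom,
                                      target_pages - VGA_HOLE_SIZE,
                                      NULL, NULL, NULL);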
Signed-off-by: Wei Liu <wei.liu2@xxxxxxxxxx>
Cc: Konrad Wilk <konrad.wilk@xxxxxxxxxx>
Cc: George Dunlap <george.dunlap@xxxxxxxxxxxxx>
Cc: Ian Campbell <ian.campbell@xxxxxxxxxx>
Cc: Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
---
WRT the 4.4 release: this patch should be accepted, otherwise PoD + claim
mode is completely broken. If this patch is deemed too complicated, we
should flip the switch to disable claim mode by default for 4.4.
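(Background for reviewers: a claim is only a reservation, not an
allocation. Once the guest's memory has actually been populated, the
outstanding claim is dropped by claiming zero pages; a sketch of that
cancellation pattern, which to my understanding already happens later
in setup_guest():)

    /* After population has succeeded, release whatever is left of the
     * reservation; claiming zero pages cancels an outstanding claim. */
    xc_domain_claim_pages(xch, dom, 0 /* cancels the claim */);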
---
tools/libxc/xc_hvm_build_x86.c | 36 +++++++++++++++++++++++-------------
1 file changed, 23 insertions(+), 13 deletions(-)
diff --git a/tools/libxc/xc_hvm_build_x86.c b/tools/libxc/xc_hvm_build_x86.c
index 77bd365..dd3b522 100644
--- a/tools/libxc/xc_hvm_build_x86.c
+++ b/tools/libxc/xc_hvm_build_x86.c
@@ -49,6 +49,8 @@
 #define NR_SPECIAL_PAGES 8
 #define special_pfn(x) (0xff000u - NR_SPECIAL_PAGES + (x))
 
+#define VGA_HOLE_SIZE (0x20)
+
 static int modules_init(struct xc_hvm_build_args *args,
                         uint64_t vend, struct elf_binary *elf,
                         uint64_t *mstart_out, uint64_t *mend_out)
@@ -302,14 +304,31 @@ static int setup_guest(xc_interface *xch,
     for ( i = mmio_start >> PAGE_SHIFT; i < nr_pages; i++ )
         page_array[i] += mmio_size >> PAGE_SHIFT;
 
+    /*
+     * Try to claim pages for early warning of insufficient memory available.
+     * This should go before xc_domain_set_pod_target, because that function
+     * actually allocates memory for the guest. Claiming after memory has been
+     * allocated is pointless.
+     */
+    if ( claim_enabled ) {
+        rc = xc_domain_claim_pages(xch, dom, target_pages - VGA_HOLE_SIZE);
+        if ( rc != 0 )
+        {
+            PERROR("Could not allocate memory for HVM guest as we cannot claim memory!");
+            goto error_out;
+        }
+    }
+
     if ( pod_mode )
     {
         /*
-         * Subtract 0x20 from target_pages for the VGA "hole". Xen will
-         * adjust the PoD cache size so that domain tot_pages will be
-         * target_pages - 0x20 after this call.
+         * Subtract VGA_HOLE_SIZE from target_pages for the VGA
+         * "hole". Xen will adjust the PoD cache size so that domain
+         * tot_pages will be target_pages - VGA_HOLE_SIZE after
+         * this call.
          */
-        rc = xc_domain_set_pod_target(xch, dom, target_pages - 0x20,
+        rc = xc_domain_set_pod_target(xch, dom,
+                                      target_pages - VGA_HOLE_SIZE,
                                       NULL, NULL, NULL);
         if ( rc != 0 )
         {
@@ -333,15 +352,6 @@ static int setup_guest(xc_interface *xch,
     cur_pages = 0xc0;
     stat_normal_pages = 0xc0;
 
-    /* try to claim pages for early warning of insufficient memory available */
-    if ( claim_enabled ) {
-        rc = xc_domain_claim_pages(xch, dom, nr_pages - cur_pages);
-        if ( rc != 0 )
-        {
-            PERROR("Could not allocate memory for HVM guest as we cannot claim memory!");
-            goto error_out;
-        }
-    }
     while ( (rc == 0) && (nr_pages > cur_pages) )
     {
         /* Clip count to maximum 1GB extent. */
--
1.7.10.4