
Re: [Xen-devel] [RFC PATCH v2 1/1] Add pci_hole_min_size



On 03/12/2014 11:07 AM, Slutz, Donald Christopher wrote:

diff --git a/tools/libxl/libxl_dm.c b/tools/libxl/libxl_dm.c
index 5c06dfa..72842aa 100644
--- a/tools/libxl/libxl_dm.c
+++ b/tools/libxl/libxl_dm.c
@@ -656,6 +656,21 @@ static char ** libxl__build_device_model_args_new(libxl__gc *gc,
           } else {
               flexarray_append(dm_args, "xenfv");
           }
+        if (b_info->u.hvm.pci_hole_min_size) {
+            unsigned long long max_ram_below_4g = (1ULL << 32) -
+                b_info->u.hvm.pci_hole_min_size;
+
+            if (max_ram_below_4g > 0xF0000000ULL)
Is this '>' or '<'?

'>' is right.  This is the current value (I had trouble getting at the include
file that defines it, so I hard-coded it.)


Then pci_hole_min_size is too small, not too big, isn't it?

-boris



+            {
+                LIBXL__LOG(ctx, LIBXL__LOG_WARNING,
+                           "pci_hole_min_size too big => max_ram_below_4g=%llu > %llu (new adjusted value)\n",
+                           max_ram_below_4g, 0xF0000000ULL);
+                max_ram_below_4g = 0xF0000000ULL;
Do you need to adjust pci_hole_min_size as well?

The limiting in hvmloader/pci.c appears to be missing.  I think the
auto-correction of bad values needs to be done where they are used.

Will add more in next version.

     -Don Slutz


-boris

+            }
+            flexarray_append_pair(dm_args, "-global",
+                           GCSPRINTF("pc-memory-layout.max-ram-below-4g=%llu",
+                                     max_ram_below_4g));
+        }
           for (i = 0; b_info->extra_hvm && b_info->extra_hvm[i] != NULL; i++)
               flexarray_append(dm_args, b_info->extra_hvm[i]);
           break;



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

