
Re: [Xen-devel] [RFC][PATCH 10/13] tools: extend XENMEM_set_memory_map



On 2015/5/15 15:32, Jan Beulich wrote:
On 15.05.15 at 09:09, <tiejun.chen@xxxxxxxxx> wrote:
On 2015/5/15 14:16, Jan Beulich wrote:
On 15.05.15 at 04:57, <tiejun.chen@xxxxxxxxx> wrote:
On 2015/4/20 21:51, Jan Beulich wrote:
On 10.04.15 at 11:22, <tiejun.chen@xxxxxxxxx> wrote:
--- a/tools/libxl/libxl_dom.c
+++ b/tools/libxl/libxl_dom.c
@@ -787,6 +787,70 @@ out:
        return rc;
    }

+static int libxl__domain_construct_memmap(libxl_ctx *ctx,
+                                          libxl_domain_config *d_config,
+                                          uint32_t domid,
+                                          struct xc_hvm_build_args *args,
+                                          int num_pcidevs,
+                                          libxl_device_pci *pcidevs)
+{
+    unsigned int nr = 0, i;
+    /* We always own at least one lowmem entry. */
+    unsigned int e820_entries = 1;
+    uint64_t highmem_end = 0, highmem_size = args->mem_size - args->lowmem_size;
+    struct e820entry *e820 = NULL;
+
+    /* Add all rdm entries. */
+    e820_entries += d_config->num_rdms;
+
+    /* If we should have a highmem range. */
+    if (highmem_size)
+    {
+        highmem_end = (1ull<<32) + highmem_size;
+        e820_entries++;
+    }
+
+    e820 = malloc(sizeof(struct e820entry) * e820_entries);
+    if (!e820) {
+        return -1;
+    }
+
+    /* Low memory */
+    e820[nr].addr = 0x100000;
+    e820[nr].size = args->lowmem_size - 0x100000;
+    e820[nr].type = E820_RAM;

If you really mean it to be this lax (not covering the low 1Mb), then
you need to explain why in a comment (and the consuming side
should also have a similar explanation then).


Okay, something like this may be needed here:

/*
 * Low RAM starts at least from 1M to make sure all standard regions
 * of the PC memory map, like BIOS, VGA memory-mapped I/O and vgabios,
 * have enough space.
 */
#define GUEST_LOW_MEM_START_DEFAULT 0x100000

But this only states a generic fact; it doesn't explain why you can
lump together all the different things below 1Mb into a single E820
entry.

I'm not sure if I'm misleading you. The different things below 1M are
not lumped into a single entry. Here we just construct these mappings:

#1. [1M, lowmem_end]
#2. [RDM]
#3. [4G, highmem_end]

The regions below 1M are still constructed as multiple e820 entries
by hvmloader. Nothing is changed there.

Then _that_ is what the comment needs to say.
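[For illustration, the comment Jan is asking for might read roughly as
follows; this is a sketch of possible wording, not text from the actual
patch:]

    /*
     * Low memory: a single RAM entry covering [1M, lowmem_end).
     * Memory below 1M (real-mode IVT, BIOS, VGA hole, etc.) is
     * deliberately left out here: hvmloader constructs its own,
     * finer-grained e820 entries for that range, and nothing in
     * this function changes that.
     */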

+    nr++;
+
+    /* RDM mapping */
+    for (i = 0; i < d_config->num_rdms; i++) {
+        /* Skip any rdm entry that has been flagged as invalid. */
+        if (d_config->rdms[i].flag == LIBXL_RDM_RESERVE_FLAG_INVALID)
+            continue;
+
+        e820[nr].addr = d_config->rdms[i].start;
+        e820[nr].size = d_config->rdms[i].size;
+        e820[nr].type = E820_RESERVED;
+        nr++;
+    }

Is this guaranteed not to produce overlapping entries?


Right. I would add this at the beginning:

       if (e820_entries >= E820MAX) {
           LOG(ERROR, "Oops! Too many entries in the memory map!");
           return -1;
       }

That would be a protection against too many entries, but not against
overlapping ones.
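[As an aside, if one did want to verify the absence of overlaps
mechanically, a generic check could look like the sketch below. The
function names `cmp_e820` and `e820_has_overlap` are made up for
illustration; the `struct e820entry` fields match the patch above.
Note that the check sorts the map in place.]

    #include <stdlib.h>

    /* Order entries by start address (helper for qsort()). */
    static int cmp_e820(const void *a, const void *b)
    {
        const struct e820entry *ea = a, *eb = b;

        if (ea->addr < eb->addr)
            return -1;
        return ea->addr > eb->addr;
    }

    /* Return 1 if any two entries in e820[0..nr) overlap, 0 otherwise. */
    static int e820_has_overlap(struct e820entry *e820, unsigned int nr)
    {
        unsigned int i;

        qsort(e820, nr, sizeof(*e820), cmp_e820);
        for (i = 1; i < nr; i++)
            if (e820[i - 1].addr + e820[i - 1].size > e820[i].addr)
                return 1;
        return 0;
    }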


Are you referring to these mappings?

#1. [1M, lowmem_end]
#2. [RDM]
#3. [4G, highmem_end]

Before we call this function, we have already finished handling RDM
conflicts according to our policy, so there is no overlap here.

That would be fine then. Note what I had asked: "Is this guaranteed
not to produce overlapping entries?" I.e. if it is guaranteed (which
afaict isn't obvious from the code itself), then please again say why
in a brief comment.


Sorry for the unclear reply earlier.

I will summarize the current reply as a comment.
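
[A sketch of what such a comment might say; the wording here is
assumed, not taken from the eventual patch:]

    /*
     * The memory map here consists of [1M, lowmem_end), the RDM
     * (reserved device memory) regions, and [4G, highmem_end).
     * Any conflict between RAM and RDM regions has already been
     * resolved by the RDM policy before this function is called,
     * so the entries constructed below cannot overlap.
     */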

Thanks
Tiejun

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
