
Re: [Xen-devel] [PATCH 2/6] xl: Implement XENMEM_claim_pages support via 'claim_mode' global config



On Thu, Mar 28, 2013 at 04:39:57PM +0000, Ian Jackson wrote:
> Konrad Rzeszutek Wilk writes ("[PATCH 2/6] xl: Implement XENMEM_claim_pages 
> support via 'claim_mode' global config"):
> > The XENMEM_claim_pages hypercall operates per domain and it should be
> > used system wide. As such this patch introduces a global configuration
> > option 'claim_mode' that by default is disabled.
> 
> This mostly looks good to me.
> 
> > +=item B<claim_mode=BOOLEAN>
> > +
> > +If this option is enabled then when a guest is created there will be an
> > +guarantee that there is memory available for the guest. This is an 
> > particularly
> 
> Sorry to be picky, but can I ask you to wrap this document to 70-75
> characters ?  At this width (looks like exactly 80) it inevitably
> generates wrap damage when a patch or quoted code is shown in an
> 80-column window.

Of course.
> 
> > +Note that to enable tmem type guest, one need to provide C<tmem> on the
> > +Xen hypervisor argument and as well on the Linux kernel command line.
> 
> "to enable tmem type guest" - shouldn't that be "guests" ?  And "one
> needs" ?

Yes. Fixed.
> 
> > +    if (!xlu_cfg_get_long (config, "claim_mode", &l, 0))
> > +        global_claim_mode = 1;
> 
> This should set global_claim_mode to something depending on l, not 1,
> I think ?

Yes.
> 
> And perhaps global_claim_mode should be a libxl_defbool, so that we
> can inherit the libxl default more directly ?  At the moment the

Yes.

The one oddity is that I had to use libxl_defbool_setdefault in
parse_global_config - otherwise with the last patch
(xl: 'xl info' print outstanding claims if enabled (claim_mode=1 in xl.conf))
we would have read the uninitialized value and the assert would
have been triggered.

This is because libxl__domain_create_info_setdefault does not get
called when 'xl info' is called.


> default is set in xl and again in libxl, which I think is not ideal.
> It's better to try to set the default only in one place.

Right. Done.
> 
> Also I think you need to call libxl_defbool_setdefault somewhere in
> libxl, probably in libxl__domain_create_info_setdefault.

I did it in libxl__domain_build_info_setdefault since it is part of the
b_info structure. But I am wondering if that could be removed, as we
also do it in parse_global_config?

> 
> Is there some reason why this variable is called "global_claim_mode"
> and not just "claim_mode" ?  Other globals in xl aren't marked in this
> way.

Brainfart on my side. Fixed.

Is this OK with you? Should I remove the libxl_defbool_setdefault
in libxl__domain_create_info_setdefault?

commit 2e11cfaa12aef6ef857a77dd63b256efb24fc99d
Author: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
Date:   Mon Mar 18 16:23:39 2013 -0400

    xl: Implement XENMEM_claim_pages support via 'claim_mode' global config
    
    The XENMEM_claim_pages hypercall operates per domain and it should be
    used system wide. As such this patch introduces a global configuration
    option 'claim_mode' that by default is disabled.
    
    If this option is enabled then when a guest is created there will be a
    guarantee that there is memory available for the guest. This is a
    particularly acute problem on hosts with memory over-provisioned guests
    that use tmem and have self-balloon enabled (which is the default option
    for them). The self-balloon mechanism can deflate/inflate the balloon
    quickly and the amount of free memory (which 'xl info' can show) is stale
    the moment it is printed. When claim is enabled a reservation for the
    amount of memory ('memory' in guest config) is set, which is then reduced
    as the domain's memory is populated and eventually reaches zero.
    
    If the reservation cannot be met the guest creation fails immediately
    instead of taking seconds/minutes (depending on the size of the guest)
    while the guest is populated.
    
    Note that to enable tmem type guests, one needs to provide 'tmem' as an
    argument to the Xen hypervisor as well as on the Linux kernel command line.
    
    There are two boolean options:
    
    (0) No claim is made. Memory population during guest creation will be
    attempted as normal and may fail due to memory exhaustion.
    
    (1) Normal memory and the freeable pool of ephemeral pages (tmem) are
    used when calculating whether there is enough free memory to launch a
    guest. This guarantees immediate feedback on whether the guest can be
    launched in the face of memory exhaustion (which can otherwise take a
    long time to discover when launching very large guests), even when
    launching several guests in parallel.
    
    [v1: Removed own claim_mode type, using just bool, improved docs, all per
    Ian's suggestion]
    [v2: Updated the comments]
    [v3: Rebase on top 733b9c524dbc2bec318bfc3588ed1652455d30ec (xl: add vif.default.script)]
    [v4: Fixed up comments]
    [v5: s/global_claim_mode/claim_mode/]
    [v6: Ian Jackson's feedback: use libxl_defbool, better comments, etc]
    Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>

diff --git a/docs/man/xl.conf.pod.5 b/docs/man/xl.conf.pod.5
index 7b9fcac..08c7120 100644
--- a/docs/man/xl.conf.pod.5
+++ b/docs/man/xl.conf.pod.5
@@ -108,6 +108,47 @@ Configures the name of the first block device to be used for temporary
 block device allocations by the toolstack.
 The default choice is "xvda".
 
+=item B<claim_mode=BOOLEAN>
+
+If this option is enabled then when a guest is created there will be a
+guarantee that there is memory available for the guest. This is a
+particularly acute problem on hosts with memory over-provisioned guests
+that use tmem and have self-balloon enabled (which is the default
+option). The self-balloon mechanism can deflate/inflate the balloon
+quickly and the amount of free memory (which C<xl info> can show) is
+stale the moment it is printed. When claim is enabled a reservation for
+the amount of memory (see 'memory' in xl.cfg(5)) is set, which is then
+reduced as the domain's memory is populated and eventually reaches zero.
+
+If the reservation cannot be met the guest creation fails immediately
+instead of taking seconds/minutes (depending on the size of the guest)
+while the guest is populated.
+
+Note that to enable tmem type guests, one needs to provide C<tmem> as an
+argument to the Xen hypervisor as well as on the Linux kernel command line.
+
+Note that the claim call is not attempted if the C<superpages> option is
+used in the guest config (see xl.cfg(5)).
+
+Default: C<0>
+
+=over 4
+
+=item C<0>
+
+No claim is made. Memory population during guest creation will be
+attempted as normal and may fail due to memory exhaustion.
+
+=item C<1>
+
+Normal memory and the freeable pool of ephemeral pages (tmem) are used
+when calculating whether there is enough free memory to launch a guest.
+This guarantees immediate feedback on whether the guest can be launched
+in the face of memory exhaustion (which can otherwise take a long time
+to discover when launching very large guests).
+
+=back
+
 =back
 
 =head1 SEE ALSO
diff --git a/tools/examples/xl.conf b/tools/examples/xl.conf
index b0caa32..f386bb9 100644
--- a/tools/examples/xl.conf
+++ b/tools/examples/xl.conf
@@ -26,3 +26,9 @@
 
 # default bridge device to use with vif-bridge hotplug scripts
 #vif.default.bridge="xenbr0"
+
+# Reserve a claim of memory when launching a guest. This guarantees
+# immediate feedback on whether the guest can be launched in the face of
+# memory exhaustion (which can otherwise take a long time to discover
+# when launching huge guests). See xl.conf(5) for details.
+#claim_mode=0
diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
index 030aa86..c4ad58b 100644
--- a/tools/libxl/libxl.h
+++ b/tools/libxl/libxl.h
@@ -579,7 +579,6 @@ int libxl_wait_for_free_memory(libxl_ctx *ctx, uint32_t domid, uint32_t memory_k
 /* wait for the memory target of a domain to be reached */
 int libxl_wait_for_memory_target(libxl_ctx *ctx, uint32_t domid, int wait_secs);
 
-
 int libxl_vncviewer_exec(libxl_ctx *ctx, uint32_t domid, int autopass);
 int libxl_console_exec(libxl_ctx *ctx, uint32_t domid, int cons_num, libxl_console_type type);
 /* libxl_primary_console_exec finds the domid and console number
diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
index 30a4507..ae72f21 100644
--- a/tools/libxl/libxl_create.c
+++ b/tools/libxl/libxl_create.c
@@ -196,6 +196,8 @@ int libxl__domain_build_info_setdefault(libxl__gc *gc,
     if (b_info->target_memkb == LIBXL_MEMKB_DEFAULT)
         b_info->target_memkb = b_info->max_memkb;
 
+    libxl_defbool_setdefault(&b_info->claim_mode, false);
+
     libxl_defbool_setdefault(&b_info->localtime, false);
 
     libxl_defbool_setdefault(&b_info->disable_migrate, false);
diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
index 2dd429f..92a6628 100644
--- a/tools/libxl/libxl_dom.c
+++ b/tools/libxl/libxl_dom.c
@@ -371,6 +371,7 @@ int libxl__build_pv(libxl__gc *gc, uint32_t domid,
     dom->console_domid = state->console_domid;
     dom->xenstore_evtchn = state->store_port;
     dom->xenstore_domid = state->store_domid;
+    dom->claim_enabled = libxl_defbool_val(info->claim_mode);
 
     if ( (ret = xc_dom_boot_xen_init(dom, ctx->xch, domid)) != 0 ) {
         LOGE(ERROR, "xc_dom_boot_xen_init failed");
@@ -605,7 +606,7 @@ int libxl__build_hvm(libxl__gc *gc, uint32_t domid,
      */
     args.mem_size = (uint64_t)(info->max_memkb - info->video_memkb) << 10;
     args.mem_target = (uint64_t)(info->target_memkb - info->video_memkb) << 10;
-
+    args.claim_enabled = libxl_defbool_val(info->claim_mode);
     if (libxl__domain_firmware(gc, info, &args)) {
         LOG(ERROR, "initializing domain firmware failed");
         goto out;
diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
index f3c212b..0f1f118 100644
--- a/tools/libxl/libxl_types.idl
+++ b/tools/libxl/libxl_types.idl
@@ -293,7 +293,7 @@ libxl_domain_build_info = Struct("domain_build_info",[
     ("ioports",          Array(libxl_ioport_range, "num_ioports")),
     ("irqs",             Array(uint32, "num_irqs")),
     ("iomem",            Array(libxl_iomem_range, "num_iomem")),
-
+    ("claim_mode",          libxl_defbool),
     ("u", KeyedUnion(None, libxl_domain_type, "type",
                 [("hvm", Struct(None, [("firmware",         string),
                                        ("bios",             libxl_bios_type),
diff --git a/tools/libxl/xl.c b/tools/libxl/xl.c
index 4c598db..211facd 100644
--- a/tools/libxl/xl.c
+++ b/tools/libxl/xl.c
@@ -45,6 +45,7 @@ char *default_vifscript = NULL;
 char *default_bridge = NULL;
 char *default_gatewaydev = NULL;
 enum output_format default_output_format = OUTPUT_FORMAT_JSON;
+libxl_defbool claim_mode;
 
 static xentoollog_level minmsglevel = XTL_PROGRESS;
 
@@ -134,6 +135,10 @@ static void parse_global_config(const char *configfile,
     }
     if (!xlu_cfg_get_string (config, "blkdev_start", &buf, 0))
         blkdev_start = strdup(buf);
+
+    libxl_defbool_setdefault(&claim_mode, false);
+    (void)xlu_cfg_get_defbool (config, "claim_mode", &claim_mode, 0);
+
     xlu_cfg_destroy(config);
 }
 
diff --git a/tools/libxl/xl.h b/tools/libxl/xl.h
index b881f92..4c5e5d1 100644
--- a/tools/libxl/xl.h
+++ b/tools/libxl/xl.h
@@ -145,6 +145,7 @@ int xl_child_pid(xlchildnum); /* returns 0 if child struct is not in use */
 extern int autoballoon;
 extern int run_hotplug_scripts;
 extern int dryrun_only;
+extern libxl_defbool claim_mode;
 extern char *lockfile;
 extern char *default_vifscript;
 extern char *default_bridge;
diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
index 2d40f8f..c8b0a99 100644
--- a/tools/libxl/xl_cmdimpl.c
+++ b/tools/libxl/xl_cmdimpl.c
@@ -757,6 +757,8 @@ static void parse_config_data(const char *config_source,
     if (!xlu_cfg_get_long (config, "maxmem", &l, 0))
         b_info->max_memkb = l * 1024;
 
+    b_info->claim_mode = claim_mode;
+
     if (xlu_cfg_get_string (config, "on_poweroff", &buf, 0))
         buf = "destroy";
     if (!parse_action_on_shutdown(buf, &d_config->on_poweroff)) {

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel