
Re: [Xen-devel] Claim mode and HVM PoD interact badly



On Fri, 2014-01-10 at 09:58 -0500, Konrad Rzeszutek Wilk wrote:
> On Fri, Jan 10, 2014 at 11:59:42AM +0000, Ian Campbell wrote:
> > create ^
> > owner Wei Liu <wei.liu2@xxxxxxxxxx>
> > thanks
> > 
> > On Fri, 2014-01-10 at 11:56 +0000, Wei Liu wrote:
> > > When I have the following configuration in an HVM config file:
> > >   memory=128
> > >   maxmem=256
> > > and have claim_mode=1 in /etc/xen/xl.conf, xl create fails with
> > > 
> > > xc: error: Could not allocate memory for HVM guest as we cannot claim memory! (22 = Invalid argument): Internal error
> > > libxl: error: libxl_dom.c:647:libxl__build_hvm: hvm building failed
> > > libxl: error: libxl_create.c:1000:domcreate_rebuild_done: cannot (re-)build domain: -3
> > > libxl: error: libxl_dm.c:1467:kill_device_model: unable to find device model pid in /local/domain/82/image/device-model-pid
> > > libxl: error: libxl.c:1425:libxl__destroy_domid: libxl__destroy_device_model failed for 82
> > > 
> > > With claim_mode=0, I can successfully create an HVM guest.
> > 
> > Is it trying to claim 256M instead of 128M? (although the likelihood
> 
> No. 128MB actually.
> 
> > that you only have 128-255M free is quite low, or are you
> > autoballooning?)
> 
> This patch fixes it for me. It basically sets the amount of pages
> claimed to the 'memory' (PoD target) amount instead of 'maxmem' when
> PoD is in use.
> 
> I don't know PoD very well,

Me neither; this might have to wait for George to get back.

We should also consider flipping the default claim setting to off in xl
for 4.4, since that is likely to be a lower-impact change than fixing
the issue (and one which we all understand!).
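
(For anyone hitting this in the meantime, the per-host workaround is the
same knob Wei toggled, i.e. in /etc/xen/xl.conf:

    claim_mode=0

which simply disables the up-front claim.)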

>  and this claim is only valid during the
> allocation of the guest's memory - so the 'target_pages' value might be
> the wrong one. However, looking at the hypervisor's
> 'p2m_pod_set_mem_target' I see this comment:
> 
>  *     B <T': Set the PoD cache size equal to the number of outstanding PoD
>  *   entries.  The balloon driver will deflate the balloon to give back
>  *   the remainder of the ram to the guest OS.
> 
> Which implies to me that we _need_ the 'maxmem' amount of memory at boot time.
> And then it is the responsibility of the balloon driver to give the memory
> back (and this is where 'static-max' et al come into play, to tell the
> back (and this is where the 'static-max' et al come in play to tell the
> balloon driver to balloon out).

PoD exists purely so that we don't need the 'maxmem' amount of memory at
boot time. It is there to let the guest get booted far enough to load
the balloon driver, which then gives the memory back.

It's basically a boot-time zero-page sharing mechanism AIUI.
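
To put numbers on Wei's config (my arithmetic, assuming 4KiB pages and
the mapping I believe libxl uses, i.e. mem_size from maxmem= and
mem_target from memory=, which become nr_pages and target_pages in
xc_hvm_build_x86.c):

    memory=128  ->  target_pages = 128MiB / 4KiB = 0x8000  (32768 pages)
    maxmem=256  ->  nr_pages     = 256MiB / 4KiB = 0x10000 (65536 pages)
    PoD mode, since target_pages < nr_pages

So the guest should only ever have ~target_pages of real RAM populated
at boot, and claiming nr_pages - cur_pages reserves ~maxmem of host
memory, which PoD exists precisely to avoid allocating.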

> diff --git a/tools/libxc/xc_hvm_build_x86.c b/tools/libxc/xc_hvm_build_x86.c
> index 77bd365..65e9577 100644
> --- a/tools/libxc/xc_hvm_build_x86.c
> +++ b/tools/libxc/xc_hvm_build_x86.c
> @@ -335,7 +335,12 @@ static int setup_guest(xc_interface *xch,
>  
>      /* try to claim pages for early warning of insufficient memory available */
>      if ( claim_enabled ) {
> -        rc = xc_domain_claim_pages(xch, dom, nr_pages - cur_pages);
> +        unsigned long nr = nr_pages - cur_pages;
> +
> +        if ( pod_mode )
> +            nr = target_pages - 0x20;

0x20?
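
(My guess, unverified: it's the 0x20-page VGA hole at 0xA0000-0xC0000,
which the builder skips when populating, so those pages never get
allocated and shouldn't be claimed. If so, it deserves a named constant
and a comment.)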

> +
> +        rc = xc_domain_claim_pages(xch, dom, nr);
>          if ( rc != 0 )
>          {
>              PERROR("Could not allocate memory for HVM guest as we cannot claim memory!");


