
Re: [Xen-devel] [PATCH v2 for-4.5] libxl: account for romfile memory



On Tue, 25 Nov 2014, Ian Campbell wrote:
> On Tue, 2014-11-25 at 12:43 +0000, Stefano Stabellini wrote:
> > Account for the extra memory needed for the rom files of any emulated nics:
> > QEMU uses xc_domain_populate_physmap_exact to allocate the memory for
> > each of them. Assume 256K each.
> 
> I suppose this will have to do for 4.5. Can we do something better in
> the future -- like figuring out a way for "not-really-RAM" allocations
> like this, which are made by the toolstack and happen to be backed by
> RAM, not to count against the guest, or something along those lines.
> 
> > 
> > This patch fixes a QEMU abort() when more than 4 emulated nics are
> > assigned to a VM.
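
(For reference on the "more than 4" figure: libxl already gives the
domain LIBXL_MAXMEM_CONSTANT, i.e. 1024K, of headroom above the memory
target, so four 256K ROMs happen to fit in that slack and it is
presumably the fifth populate call that pushes the allocation over
maxmem.)
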
> 
> Are you also going to fix qemu to fail gracefully if it cannot deploy
> option roms? abort() seems a bit extreme.
> 
> > Signed-off-by: Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx>
> > CC: Don Slutz <dslutz@xxxxxxxxxxx>
> > CC: hanyandong <hanyandong@xxxxxxxxx>
> > CC: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
> > CC: Ian Campbell <Ian.Campbell@xxxxxxxxxx>
> > CC: Wei Liu <wei.liu2@xxxxxxxxxx>
> 
> You missed Ian J. I've added him.

Actually Wei suggested a better alternative: I could call
xc_domain_setmaxmem directly from QEMU. That makes much more sense.
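
Roughly what I have in mind on the QEMU side, right before the option
ROM pages are populated (just a sketch; xc_handle, domid and rom_size
are illustrative names rather than the actual QEMU variables):

    /* Raise the domain's maxmem by the ROM size (bytes -> KiB) so the
     * xc_domain_populate_physmap_exact() call that follows has room. */
    xc_dominfo_t info;

    if (xc_domain_getinfo(xc_handle, domid, 1, &info) != 1 ||
        info.domid != domid)
        return -1;
    if (xc_domain_setmaxmem(xc_handle, domid,
                            info.max_memkb + (rom_size >> 10)) < 0)
        return -1;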


> > 
> > ---
> > 
> > Changes in v2:
> > - remove double return statement;
> > - check for return errors;
> > - check for overflows.
> > ---
> >  tools/libxl/libxl.c          |   53 +++++++++++++++++++++++++++++++++++++-----
> >  tools/libxl/libxl_dom.c      |    8 +++++--
> >  tools/libxl/libxl_internal.h |    7 ++++++
> >  3 files changed, 60 insertions(+), 8 deletions(-)
> > 
> > diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
> > index de23fec..2cdb768 100644
> > --- a/tools/libxl/libxl.c
> > +++ b/tools/libxl/libxl.c
> > @@ -4527,13 +4527,40 @@ out:
> >  
> >  /******************************************************************************/
> >  
> > +int libxl__get_rom_memory_kb(libxl__gc *gc, uint32_t domid, libxl_domain_config *d_config)
> > +{
> > +    int i, romsize, rc;
> > +    libxl_domain_config local_d_config;
> > +    libxl_ctx *ctx = libxl__gc_owner(gc);
> > +
> > +    if (d_config == NULL) {
> > +        libxl_domain_config_init(&local_d_config);
> > +        rc = libxl_retrieve_domain_configuration(ctx, domid, &local_d_config);
> > +        if (rc < 0)
> > +            return rc;
> > +        d_config = &local_d_config;
> > +    }
> 
> Perhaps we could store the answer to this function in XS when we build
> the domain and simply read it back and account for it in the places
> which use it?
> 
> Apart from being rather costly, reparsing the json every time is going
> to behave a bit strangely if NICs are plugged/unplugged at runtime and
> ballooning is going on.
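
For reference, the build-time half of that could look roughly like the
below (the xenstore path is made up and the helper names are from
memory, so treat it purely as a sketch):

    /* at domain build time, once romsize has been computed, record it
     * so later callers do not have to reparse the domain config */
    rc = libxl__xs_write(gc, XBT_NULL,
                         libxl__sprintf(gc, "%s/memory/rom-overhead-kb",
                                        libxl__xs_get_dompath(gc, domid)),
                         "%d", romsize);
    if (rc) goto out;
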
> 
> > +    if (d_config->c_info.type == LIBXL_DOMAIN_TYPE_PV)
> > +        return 0;
> > +
> > +    for (i = 0, romsize = 0;
> > +         i < d_config->num_nics && romsize < INT_MAX;
> 
> I don't think that romsize < INT_MAX is useful except in the case that
> romsize += results in exactly romsize == INT_MAX. If you actually
> overflow then romsize becomes negative, which still satisfies the
> condition (and in any case you are into undefined behaviour territory
> there anyhow, I think).
> 
> Given that INT_MAX is a boat load of ROMs I'd be inclined to just limit
> it to INT_MAX/2 or /4 or something.
> 
> Or you could do romsize < (INT_MAX - LIBXL_ROMSIZE_KB) I suppose.
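
One way to keep the check well-defined would be to move it into the
loop body, roughly like this (untested sketch, reusing the per-NIC
LIBXL_ROMSIZE_KB constant from the patch; returning ERROR_INVAL is just
one option):

    for (i = 0, romsize = 0; i < d_config->num_nics; i++) {
        /* refuse to accumulate past INT_MAX instead of overflowing */
        if (romsize > INT_MAX - LIBXL_ROMSIZE_KB)
            return ERROR_INVAL;
        romsize += LIBXL_ROMSIZE_KB;
    }
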
> 
> > +    rc = xc_domain_setmaxmem(ctx->xch, domid,
> > +                             max_memkb + LIBXL_MAXMEM_CONSTANT
> > +                             + romsize);
> 
> Seems like we ought to have a helper to return the memory overheads,
> which would be the constant + the romsize starting from now...
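
Something along these lines is probably what that helper would look
like (the name and the error handling here are just illustrative):

    /* hypothetical helper: non-RAM overhead to add on top of max_memkb */
    int libxl__get_memory_overhead_kb(libxl__gc *gc, uint32_t domid,
                                      libxl_domain_config *d_config)
    {
        int romsize = libxl__get_rom_memory_kb(gc, domid, d_config);

        if (romsize < 0)
            return romsize; /* propagate the error */
        return LIBXL_MAXMEM_CONSTANT + romsize;
    }

with the callers then doing xc_domain_setmaxmem(ctx->xch, domid,
max_memkb + overhead).
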
> 
> Ian.
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

