
Re: [Xen-devel] [PATCH 10/28] libxl: only free cpupoolinfo if necessary in libxl_list_cpupool



[Adding Juergen]

On mer, 2013-09-18 at 15:37 +1200, Matthew Daley wrote:
> Coverity-ID: 1055291
> Signed-off-by: Matthew Daley <mattjd@xxxxxxxxx>
> ---
>  tools/libxl/libxl.c |    3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
> index ca24ca3..eeaaee8 100644
> --- a/tools/libxl/libxl.c
> +++ b/tools/libxl/libxl.c
> @@ -650,7 +650,8 @@ libxl_cpupoolinfo * libxl_list_cpupool(libxl_ctx *ctx, int *nb_pool_out)
>          tmp = realloc(ptr, (i + 1) * sizeof(libxl_cpupoolinfo));
>          if (!tmp) {
>              LIBXL__LOG_ERRNO(ctx, LIBXL__LOG_ERROR, "allocating cpupool info");
> -            libxl_cpupoolinfo_list_free(ptr, i);
> +            if (ptr)
> +                libxl_cpupoolinfo_list_free(ptr, i);
>
I'm less confident about libxl_cpupoolinfo_list_* than about the
topology and NUMA info handling in libxl, but I suspect this is a
situation similar to the one I described in my reply to 12/28 ("libxl:
only free cputopology if it was allocated in libxl__get_numa_candidate").

That is to say, this does not look necessary to me:
libxl_cpupoolinfo_list_free() seems to cope just fine with a
non-allocated ptr, and adding the guard deviates from the usual libxl
exit-path/error-handling style.
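
For reference, here is a rough sketch of why the check looks redundant
to me. This is only my assumption about the shape of the IDL-generated
helper, not the actual code, and example_cpupoolinfo_list_free is just
an illustrative name:

    /* Sketch only (assumes <libxl.h> and <stdlib.h>): if the generated
     * list-free helper has the usual shape, i.e. dispose each element
     * and then free the array, then calling it with list == NULL and
     * nr == 0 is already safe: the loop body never runs and free(NULL)
     * is a no-op. */
    void example_cpupoolinfo_list_free(libxl_cpupoolinfo *list, int nr)
    {
        int j;

        for (j = 0; j < nr; j++)
            libxl_cpupoolinfo_dispose(&list[j]);
        free(list);
    }

If the real helper does not behave like that (e.g., it touches list
unconditionally), then the patch is of course the right fix and I'm
happy to stand corrected.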

Thoughts?

Regards,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)

