
Re: [Xen-devel] [PATCH] tools: Fix wild memory allocations from c/s 250f0b4 and 85d78b4



On 05/18/2015 08:57 AM, Andrew Cooper wrote:
These changesets cause the respective libxc functions to unconditionally
dereference their max_cpus/nodes parameters as part of the initial memory
allocations.  Obtaining the correct number of cpus/nodes from Xen will then
fail, as the guest handles will not be NULL.
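
As a minimal, self-contained sketch of the failure mode (hypothetical
names, not the actual libxc code): the callee sizes its bounce buffer
from the count parameter before handling the NULL-buffer "just report
the count" case.

#include <stdio.h>
#include <stdlib.h>

/* Hypothetical stand-in for the affected libxc functions: the count
 * parameter is dereferenced for a bounce-buffer allocation before the
 * NULL-buffer "report the count" case is considered. */
static int topoinfo_buggy(unsigned *max_cpus, void *buffer)
{
    /* Wild allocation: *max_cpus may be uninitialised garbage. */
    void *bounce = calloc(*max_cpus, 16);

    if ( bounce == NULL )
        return -1;

    if ( buffer == NULL )
        *max_cpus = 8;      /* pretend Xen reported the count */

    /* ... hypercall via the now non-NULL bounce handle ... */
    free(bounce);
    return 0;
}

int main(void)
{
    unsigned max_cpus;      /* uninitialised, as in the broken callers */

    /* Undefined behaviour: garbage in max_cpus drives the allocation. */
    if ( topoinfo_buggy(&max_cpus, NULL) == 0 )
        printf("count: %u\n", max_cpus);

    return 0;
}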

Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
CC: Ian Campbell <Ian.Campbell@xxxxxxxxxx>
CC: Ian Jackson <Ian.Jackson@xxxxxxxxxxxxx>
CC: Wei Liu <wei.liu2@xxxxxxxxxx>
CC: Boris Ostrovsky <boris.ostrovsky@xxxxxxxxxx>

---
Spotted by XenServer's Coverity run.
---
  tools/libxl/libxl.c               |    4 ++--
  tools/misc/xenpm.c                |    4 ++--
  tools/python/xen/lowlevel/xc/xc.c |    4 ++--
  3 files changed, 6 insertions(+), 6 deletions(-)

The xenpm bug is already fixed (commit b315cd9cce5b6da7ca89b2d7bad3fb01e7716044 in the staging tree).

I am not sure I understand why Coverity complains about the other spots. For example, in libxl_get_cpu_topology(), num_cpus can be left uninitialized only if xc_cputopoinfo(ctx->xch, &num_cpus, NULL) fails, in which case we go to 'GC_FREE; return ret;', so it is never used.
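
To make that concrete, a stripped-down, compilable sketch of that
control flow (xc_cputopoinfo() replaced by a stub; names and sizes are
illustrative, not the real libxl code):

#include <stdio.h>
#include <stdlib.h>

/* Stub standing in for xc_cputopoinfo(): with a NULL buffer it reports
 * the CPU count; a nonzero return means failure. */
static int xc_cputopoinfo_stub(unsigned *num_cpus, void *buffer)
{
    if ( buffer == NULL )
    {
        *num_cpus = 8;      /* pretend Xen reported 8 CPUs */
        return 0;
    }
    return -1;
}

int main(void)
{
    unsigned num_cpus = 0;  /* the patch's initialisation */

    if ( xc_cputopoinfo_stub(&num_cpus, NULL) )
    {
        /* Failure path, mirroring 'GC_FREE; return ret;' in libxl:
         * num_cpus is never read before we bail out. */
        fprintf(stderr, "Unable to determine number of CPUs\n");
        return 1;
    }

    /* Success path: num_cpus was written by the callee before use. */
    void *cputopo = calloc(num_cpus, 16);
    printf("allocated topology buffer for %u CPUs\n", num_cpus);
    free(cputopo);
    return 0;
}

If that reading is right, the '= 0' there is harmless and mainly
placates the analyser.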


-boris



diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
index a6eb2df..295877b 100644
--- a/tools/libxl/libxl.c
+++ b/tools/libxl/libxl.c
@@ -5105,7 +5105,7 @@ libxl_cputopology *libxl_get_cpu_topology(libxl_ctx *ctx, int *nb_cpu_out)
      xc_cputopo_t *cputopo;
      libxl_cputopology *ret = NULL;
      int i;
-    unsigned num_cpus;
+    unsigned num_cpus = 0;

      /* Setting buffer to NULL makes the call return number of CPUs */
      if (xc_cputopoinfo(ctx->xch, &num_cpus, NULL))
@@ -5191,7 +5191,7 @@ libxl_numainfo *libxl_get_numainfo(libxl_ctx *ctx, int *nr)
      uint32_t *distance;
      libxl_numainfo *ret = NULL;
      int i, j;
-    unsigned num_nodes;
+    unsigned num_nodes = 0;

      if (xc_numainfo(ctx->xch, &num_nodes, NULL, NULL)) {
          LOGE(ERROR, "Unable to determine number of nodes");
diff --git a/tools/misc/xenpm.c b/tools/misc/xenpm.c
index fe2c001..2f9bd8e 100644
--- a/tools/misc/xenpm.c
+++ b/tools/misc/xenpm.c
@@ -356,7 +356,7 @@ static void signal_int_handler(int signo)
      struct timeval tv;
      int cx_cap = 0, px_cap = 0;
      xc_cputopo_t *cputopo = NULL;
-    unsigned max_cpus;
+    unsigned max_cpus = 0;

      if ( xc_cputopoinfo(xc_handle, &max_cpus, NULL) != 0 )
      {
@@ -961,7 +961,7 @@ void scaling_governor_func(int argc, char *argv[])
  void cpu_topology_func(int argc, char *argv[])
  {
      xc_cputopo_t *cputopo = NULL;
-    unsigned max_cpus;
+    unsigned max_cpus = 0;
      int i, rc;

      if ( xc_cputopoinfo(xc_handle, &max_cpus, NULL) != 0 )
diff --git a/tools/python/xen/lowlevel/xc/xc.c b/tools/python/xen/lowlevel/xc/xc.c
index fbd93db..c77e15b 100644
--- a/tools/python/xen/lowlevel/xc/xc.c
+++ b/tools/python/xen/lowlevel/xc/xc.c
@@ -1221,7 +1221,7 @@ static PyObject *pyxc_getcpuinfo(XcObject *self, PyObject *args, PyObject *kwds)
  static PyObject *pyxc_topologyinfo(XcObject *self)
  {
      xc_cputopo_t *cputopo = NULL;
-    unsigned i, num_cpus;
+    unsigned i, num_cpus = 0;
      PyObject *ret_obj = NULL;
      PyObject *cpu_to_core_obj, *cpu_to_socket_obj, *cpu_to_node_obj;
@@ -1293,7 +1293,7 @@ static PyObject *pyxc_topologyinfo(XcObject *self)
  static PyObject *pyxc_numainfo(XcObject *self)
  {
-    unsigned i, j, num_nodes;
+    unsigned i, j, num_nodes = 0;
      uint64_t free_heap;
      PyObject *ret_obj = NULL, *node_to_node_dist_list_obj;
      PyObject *node_to_memsize_obj, *node_to_memfree_obj;

