Re: [Xen-devel] [PATCH] tools/libxc: Fix error checking for xc_get_{cpu,node}map_size() callers
On 12/12/2013 14:56, Dario Faggioli wrote:
> On gio, 2013-12-12 at 14:24 +0000, Ian Campbell wrote:
>> On Wed, 2013-12-11 at 15:47 +0000, Andrew Cooper wrote:
>>> c/s 2e82c18cd850592ae9a1f682eb93965a868b5f2f changed the error returns
>>> of xc_get_{cpu,node}map_size() to now include returning -1.  This
>>> invalidated the error checks from callers, which expected 0 to be the
>>> only error case.
>> I don't think 0 is a valid error value any more.  Neither xc_get_max_cpus
>> nor xc_get_max_nodes can return 0, and the map_size functions will round
>> to 1 or more.
>>
> Yep, I confirm that, after that changeset, neither
> xc_get_max_{cpus,nodes}() nor xc_get_{cpu,node}map_size() return 0 as an
> error anymore.

Zero might not be "the error condition" any more, but it is certainly an
error from any of these functions (and possible, as
xc_get_max_{cpus,nodes}() is capable of returning 0 if Xen hands back -1
for physinfo.max_{cpu,node}_id).

>
>> So these could all be "< 0" tests I think.
>>
> Indeed.
>
> Anyway, looks like, while I fixed the callers of the xc_get_max_xx
> things in that very commit, I didn't do the same for the xc_get_*map_xx
> ones.  Weird, as ISTR doing so too... :-/
>
> Anyway, thanks to Coverity for catching this and to Andrew for the patch.
>
> I'll reply to v2 (if you're posting it) but, with the conditions
> converted to "< 0", this can have my:
>
> Reviewed-by: Dario Faggioli <dario.faggioli@xxxxxxxxxx>
>
> Regards,
> Dario
>

The xc_{cpu,node}map_alloc() checks must strictly remain "<= 0" to avoid
the issue where calloc(1, 0) returns a non-NULL pointer.

Currently, I am of the opinion that the patch is better as is than
changing some of the checks to strictly "< 0".

~Andrew
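
To make the distinction concrete, here is a minimal, self-contained sketch
of the two checks being discussed.  It is not the libxc code: get_max_ids(),
get_map_size() and map_alloc() are hypothetical stand-ins for
xc_get_max_{cpus,nodes}(), xc_get_{cpu,node}map_size() and
xc_{cpu,node}map_alloc(), and the rounding is only illustrative.  The point
is that callers of the map_size function only need a "< 0" test once -1 is
the error value, while the alloc helper keeps a "<= 0" test, because
calloc(1, 0) may legally return a non-NULL pointer and would otherwise let a
zero-sized map look like success.

/*
 * Self-contained sketch -- not the actual libxc sources.  The names
 * below are stand-ins for the xc_* functions discussed in the thread.
 */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Stand-in for xc_get_max_{cpus,nodes}(): max id + 1, or -1 on error.
 * In principle this could yield 0 if the hypervisor handed back -1 as
 * the max id. */
static int get_max_ids(void)
{
    int max_id = 3;              /* pretend Xen reported a max id of 3 */
    return max_id + 1;
}

/* Stand-in for xc_get_{cpu,node}map_size(): bytes needed for the
 * bitmap, or -1 on error. */
static int get_map_size(void)
{
    int ids = get_max_ids();

    if (ids < 0)
        return -1;

    return (ids + 7) / 8;        /* illustrative round-up to whole bytes */
}

/* Stand-in for xc_{cpu,node}map_alloc().  The "<= 0" check matters
 * here: calloc(1, 0) may return a non-NULL pointer, so a size of 0
 * must be rejected before the allocation. */
static uint8_t *map_alloc(void)
{
    int sz = get_map_size();

    if (sz <= 0)
        return NULL;

    return calloc(1, sz);
}

int main(void)
{
    /* A caller of get_map_size() now only needs the "< 0" test ... */
    int sz = get_map_size();
    if (sz < 0) {
        fprintf(stderr, "get_map_size() failed\n");
        return 1;
    }

    /* ... while a caller of map_alloc() simply tests for NULL. */
    uint8_t *map = map_alloc();
    if (!map) {
        fprintf(stderr, "map_alloc() failed\n");
        return 1;
    }

    printf("allocated a %d-byte map\n", sz);
    free(map);
    return 0;
}

Built with a plain C compiler (e.g. gcc -Wall sketch.c), this prints the
map size; changing max_id to -1 in get_max_ids() exercises both error
paths.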