Re: [Xen-devel] [PATCH] Trivial fix for latent bug in page_alloc.c
On Wed, 2005-01-19 at 16:45 +0000, David Hopwood wrote:
> Rusty Russell wrote:
> > --- xen/common/page_alloc.c.~1~  2005-01-11 15:35:54.000000000 +1100
> > +++ xen/common/page_alloc.c      2005-01-19 14:51:47.000000000 +1100
> > @@ -203,10 +203,8 @@
> >  #define NR_ZONES 2
> >
> >  /* Up to 2^10 pages can be allocated at once. */
> > -#define MIN_ORDER 0
> >  #define MAX_ORDER 10
> > -#define NR_ORDERS (MAX_ORDER - MIN_ORDER + 1)
> > -static struct list_head heap[NR_ZONES][NR_ORDERS];
> > +static struct list_head heap[NR_ZONES][MAX_ORDER];
>
> This patch is broken because it replaces NR_ORDERS with MAX_ORDER,
> not with MAX_ORDER+1.

Yes, I should have fixed the comment as well. The +1 version went into
CVS though, which I feel is un-C-like, but I don't feel strongly about it.

> > @@ -251,17 +249,18 @@
> >      int i;
> >      struct pfn_info *pg;
> >
> > -    if ( unlikely(order < MIN_ORDER) || unlikely(order > MAX_ORDER) )
> > +    ASSERT(order >= 0);
> > +    if ( unlikely(order >= MAX_ORDER) )
> >          return NULL;
>
> Why is it valid to change 'return NULL' to a failed ASSERT?

Because no one should be handing a negative number there? In CVS it's
now unsigned anyway.

> Also changing > to >= is wrong.

Well, it's consistent with the rest of the patch.

(Sorry for the slow reply; I'm reading the mailing list with some latency.)

Thanks,
Rusty.
--
A bad analogy is like a leaky screwdriver
  -- Richard Braakman
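[Editorial note: the off-by-one being debated above can be shown in a small
standalone sketch. This is not the Xen code; `order_is_valid` and the `int`
free-list array are hypothetical stand-ins used only to illustrate why an
array of MAX_ORDER entries, or a check of `order >= MAX_ORDER`, excludes the
largest valid order.]

```c
#include <assert.h>
#include <stddef.h>

/* Sketch of the bounds issue, not actual Xen code. If MAX_ORDER is the
 * largest allocation order (up to 2^MAX_ORDER pages at once), then the
 * valid orders are 0..MAX_ORDER inclusive: MAX_ORDER + 1 distinct values.
 */
#define MAX_ORDER 10

/* One free-list head per order. Sizing this [MAX_ORDER] instead of
 * [MAX_ORDER + 1] would leave no slot for order == MAX_ORDER. */
static int heap[MAX_ORDER + 1];

/* Hypothetical helper: an order is valid iff order <= MAX_ORDER.
 * Rejecting order >= MAX_ORDER, as the posted patch did, would wrongly
 * refuse the maximum 2^10-page allocation. */
static int order_is_valid(unsigned int order)
{
    return order <= MAX_ORDER;
}

static size_t heap_slots(void)
{
    return sizeof(heap) / sizeof(heap[0]);
}
```

With the `<=` check, order 0 and order MAX_ORDER are both accepted and
MAX_ORDER + 1 is rejected, matching the MAX_ORDER + 1 array size that went
into CVS.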