[Xen-devel] [PATCH] Trivial fix for latent bug in page_alloc.c
I was reading through page_alloc.c, and the "find smallest order" loop assumes MIN_ORDER is 0. The easiest fix is to get rid of MIN_ORDER (and hence NR_ORDERS) and simplify the code.

Rusty.

--- xen/common/page_alloc.c.~1~	2005-01-11 15:35:54.000000000 +1100
+++ xen/common/page_alloc.c	2005-01-19 14:51:47.000000000 +1100
@@ -203,10 +203,8 @@
 #define NR_ZONES 2
 
 /* Up to 2^10 pages can be allocated at once. */
-#define MIN_ORDER 0
 #define MAX_ORDER 10
-#define NR_ORDERS (MAX_ORDER - MIN_ORDER + 1)
-static struct list_head heap[NR_ZONES][NR_ORDERS];
+static struct list_head heap[NR_ZONES][MAX_ORDER];
 
 static unsigned long avail[NR_ZONES];
 
@@ -220,7 +218,7 @@
     memset(avail, 0, sizeof(avail));
 
     for ( i = 0; i < NR_ZONES; i++ )
-        for ( j = 0; j < NR_ORDERS; j++ )
+        for ( j = 0; j < MAX_ORDER; j++ )
             INIT_LIST_HEAD(&heap[i][j]);
 
     /* Pages that are free now go to the domain sub-allocator. */
@@ -251,17 +249,18 @@
     int i;
     struct pfn_info *pg;
 
-    if ( unlikely(order < MIN_ORDER) || unlikely(order > MAX_ORDER) )
+    ASSERT(order >= 0);
+    if ( unlikely(order >= MAX_ORDER) )
         return NULL;
 
     spin_lock(&heap_lock);
 
     /* Find smallest order which can satisfy the request. */
-    for ( i = order; i < NR_ORDERS; i++ )
+    for ( i = order; i < MAX_ORDER; i++ )
         if ( !list_empty(&heap[zone][i]) )
             break;
 
-    if ( i == NR_ORDERS )
+    if ( i == MAX_ORDER )
         goto no_memory;
 
     pg = list_entry(heap[zone][i].next, struct pfn_info, list);

-- 
A bad analogy is like a leaky screwdriver -- Richard Braakman
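
To make the latent bug concrete, here is a minimal standalone sketch (not part of the patch; the zone and free-list machinery is replaced by a hypothetical stand-in array) of how the old search loop misbehaves as soon as MIN_ORDER is anything other than 0: the heap array has NR_ORDERS slots covering orders MIN_ORDER..MAX_ORDER, but the loop indexes it with the raw order rather than (order - MIN_ORDER).

/* Hypothetical illustration only -- not Xen code.  Suppose MIN_ORDER were
 * raised to 2: NR_ORDERS shrinks to 9, yet the allocator still indexes the
 * free lists with the raw order and bounds-checks against MAX_ORDER. */
#include <stdio.h>

#define MIN_ORDER  2
#define MAX_ORDER 10
#define NR_ORDERS (MAX_ORDER - MIN_ORDER + 1)   /* 9 slots: indices 0..8 */

int main(void)
{
    int empty[NR_ORDERS] = { 0 };   /* stand-in free lists; 0 == "not empty" */
    int order = MAX_ORDER;          /* passes the old (order > MAX_ORDER) check */
    int i;

    /* Mirrors the "find smallest order" loop in alloc_heap_pages(). */
    for ( i = order; i < NR_ORDERS; i++ )
        if ( !empty[i] )
            break;

    if ( i == NR_ORDERS )
        printf("no memory\n");
    else
        printf("would touch heap[zone][%d], past the %d-slot array\n",
               i, NR_ORDERS);

    /* Even for in-range requests the slot is wrong: order 5 lives at index
     * (5 - MIN_ORDER) == 3, not 5.  Only MIN_ORDER == 0 makes both agree. */
    return 0;
}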