[Xen-changelog] [xen stable-4.4] xmalloc: handle correctly page allocation when align > size
commit 67fda497099b7451fb2dec826af120bf6333bc67
Author: Julien Grall <julien.grall@xxxxxxxxxx>
AuthorDate: Fri Mar 14 17:29:54 2014 +0100
Commit: Jan Beulich <jbeulich@xxxxxxxx>
CommitDate: Fri Mar 14 17:29:54 2014 +0100
xmalloc: handle correctly page allocation when align > size
When align is greater than size, we need to retrieve the order from
align when allocating multiple pages. This was presumably the goal of commit
fb034f42 "xmalloc: make close-to-PAGE_SIZE allocations more efficient".
Signed-off-by: Julien Grall <julien.grall@xxxxxxxxxx>
Acked-by: Keir Fraser <keir@xxxxxxx>
master commit: ac2cba2901779f66bbfab298faa15c956e91393a
master date: 2014-03-10 14:40:50 +0100
---
xen/common/xmalloc_tlsf.c | 5 ++---
1 files changed, 2 insertions(+), 3 deletions(-)
diff --git a/xen/common/xmalloc_tlsf.c b/xen/common/xmalloc_tlsf.c
index d3bdfa7..a5769c9 100644
--- a/xen/common/xmalloc_tlsf.c
+++ b/xen/common/xmalloc_tlsf.c
@@ -527,11 +527,10 @@ static void xmalloc_pool_put(void *p)
static void *xmalloc_whole_pages(unsigned long size, unsigned long align)
{
- unsigned int i, order = get_order_from_bytes(size);
+ unsigned int i, order;
void *res, *p;
- if ( align > size )
- get_order_from_bytes(align);
+ order = get_order_from_bytes(max(align, size));
res = alloc_xenheap_pages(order, 0);
if ( res == NULL )
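
For reference, here is a minimal standalone sketch (not Xen code) of why the
dropped return value mattered. It re-implements get_order_from_bytes() under
the assumption that it returns the smallest order such that
(PAGE_SIZE << order) >= bytes, which matches how the patch uses it; the
sample size/align values are illustrative only.

/* Standalone illustration of the align > size case fixed above. */
#include <stdio.h>

#define PAGE_SIZE 4096UL

/* Assumed behaviour: smallest order with (PAGE_SIZE << order) >= size. */
static unsigned int get_order_from_bytes(unsigned long size)
{
    unsigned int order = 0;

    while ( (PAGE_SIZE << order) < size )
        order++;
    return order;
}

int main(void)
{
    unsigned long size  = PAGE_SIZE;      /* one page requested       */
    unsigned long align = 4 * PAGE_SIZE;  /* but 16KiB alignment asked */

    /* Old code: get_order_from_bytes(align) was called but its result
     * was discarded, so the order stayed at the one derived from size. */
    unsigned int old_order = get_order_from_bytes(size);

    /* New code: derive the order from whichever of size/align is larger. */
    unsigned int new_order =
        get_order_from_bytes(size > align ? size : align);

    printf("old order = %u (%lu bytes)\n", old_order, PAGE_SIZE << old_order);
    printf("new order = %u (%lu bytes)\n", new_order, PAGE_SIZE << new_order);
    return 0;
}

With these sample values the old path allocates order 0 (one page) while the
fixed path allocates order 2, large enough to satisfy the requested alignment.
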
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.4
_______________________________________________
Xen-changelog mailing list
Xen-changelog@xxxxxxxxxxxxx
http://lists.xensource.com/xen-changelog