Re: [Xen-devel] [RFC PATCH] page_alloc: use first half of higher order chunks when halving
On Tue, Mar 25, 2014 at 01:19:22PM +0100, Tim Deegan wrote:
> At 13:22 +0200 on 25 Mar (1395750124), Matt Wilson wrote:
> > From: Matt Rushton <mrushton@xxxxxxxxxx>
> >
> > This patch makes the Xen heap allocator use the first half of higher
> > order chunks instead of the second half when breaking them down for
> > smaller order allocations.
> >
> > Linux currently remaps the memory overlapping PCI space one page at a
> > time. Before this change this resulted in the mfns being allocated in
> > reverse order and led to discontiguous dom0 memory. This forced dom0
> > to use bounce buffers for doing DMA and resulted in poor performance.
>
> This seems like something better fixed on the dom0 side, by asking
> explicitly for contiguous memory in cases where it makes a difference.
> On the Xen side, this change seems harmless, but we might like to keep
> the explicitly reversed allocation on debug builds, to flush out
> guests that rely on their memory being contiguous.
Yes, I think that retaining the reverse allocation on debug builds is
fine. I'd like Konrad's take on whether it's better (or even possible)
to fix this on the Linux side.
> > This change more gracefully handles the dom0 use case and returns
> > contiguous memory for subsequent allocations.
> >
> > Cc: xen-devel@xxxxxxxxxxxxxxxxxxxx
> > Cc: Keir Fraser <keir@xxxxxxx>
> > Cc: Jan Beulich <jbeulich@xxxxxxxx>
> > Cc: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
> > Cc: Tim Deegan <tim@xxxxxxx>
> > Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
> > Signed-off-by: Matt Rushton <mrushton@xxxxxxxxxx>
> > Signed-off-by: Matt Wilson <msw@xxxxxxxxxx>
> > ---
> > xen/common/page_alloc.c | 5 +++--
> > 1 file changed, 3 insertions(+), 2 deletions(-)
> >
> > diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
> > index 601319c..27e7f18 100644
> > --- a/xen/common/page_alloc.c
> > +++ b/xen/common/page_alloc.c
> > @@ -677,9 +677,10 @@ static struct page_info *alloc_heap_pages(
> >      /* We may have to halve the chunk a number of times. */
> >      while ( j != order )
> >      {
> > -        PFN_ORDER(pg) = --j;
> > +        struct page_info *pg2;
> > +        pg2 = pg + (1 << --j);
> > +        PFN_ORDER(pg) = j;
> >          page_list_add_tail(pg, &heap(node, zone, j));
> > -        pg += 1 << j;
>
> AFAICT this uses the low half (pg) for the allocation _and_ puts it on
> the freelist, and just leaks the high half (pg2). Am I missing something?
Argh, oops. This is totally my fault (not Matt R.'s). I ported the
patch out of our development tree incorrectly. The code should have
read:
    while ( j != order )
    {
        struct page_info *pg2;

        pg2 = pg + (1 << --j);
        PFN_ORDER(pg2) = j;
        page_list_add_tail(pg2, &heap(node, zone, j));
    }
Apologies to Matt for my mangling of his patch (which also already had
the correct blank line per Andy's comment).
--msw
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel