Re: [Xen-devel] [RFC PATCH] page_alloc: use first half of higher order chunks when halving
On Tue, Mar 25, 2014 at 11:44:19AM +0000, Andrew Cooper wrote:
> On 25/03/14 11:22, Matt Wilson wrote:
> > From: Matt Rushton <mrushton@xxxxxxxxxx>
> >
> > This patch makes the Xen heap allocator use the first half of higher
> > order chunks instead of the second half when breaking them down for
> > smaller order allocations.
> >
> > Linux currently remaps the memory overlapping PCI space one page at a
> > time. Before this change this resulted in the mfns being allocated in
> > reverse order and led to discontiguous dom0 memory. This forced dom0
> > to use bounce buffers for doing DMA and resulted in poor performance.
> >
> > This change more gracefully handles the dom0 use case and returns
> > contiguous memory for subsequent allocations.
> >
> > Cc: xen-devel@xxxxxxxxxxxxxxxxxxxx
> > Cc: Keir Fraser <keir@xxxxxxx>
> > Cc: Jan Beulich <jbeulich@xxxxxxxx>
> > Cc: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
> > Cc: Tim Deegan <tim@xxxxxxx>
> > Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
> > Signed-off-by: Matt Rushton <mrushton@xxxxxxxxxx>
> > Signed-off-by: Matt Wilson <msw@xxxxxxxxxx>
>
> How does dom0 work out that it is safe to join multiple pfns into a dma
> buffer without the swiotlb?
I'm not familiar enough with how this works to say for sure. Perhaps Matt R.
can chime in during his day. My guess is that xen_swiotlb_alloc_coherent()
skips the exchange for a machine-contiguous region when the pages it
allocated already happen to be physically contiguous.
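Roughly the check I have in mind, from a (possibly dated) reading of
drivers/xen/swiotlb-xen.c -- this is a from-memory sketch with a made-up
function name, so treat the exact signatures as approximations:

    /* Sketch of the decision xen_swiotlb_alloc_coherent() makes: only go
     * through xen_create_contiguous_region() (i.e. exchange MFNs) when the
     * pages backing the buffer are not already machine-contiguous and
     * addressable under the device's DMA mask. */
    void *swiotlb_alloc_coherent_sketch(size_t size, dma_addr_t *dma_handle,
                                        gfp_t flags)
    {
        int order = get_order(size);
        u64 dma_mask = DMA_BIT_MASK(32);
        void *ret = (void *)__get_free_pages(flags, order);
        phys_addr_t phys;
        dma_addr_t dev_addr;

        if (!ret)
            return NULL;

        phys = virt_to_phys(ret);
        dev_addr = xen_phys_to_bus(phys);

        if (dev_addr + size - 1 <= dma_mask &&
            !range_straddles_page_boundary(phys, size)) {
            /* Backing MFNs already contiguous: hand them out as-is. */
            *dma_handle = dev_addr;
        } else if (xen_create_contiguous_region(phys, order, fls64(dma_mask),
                                                dma_handle) != 0) {
            /* Needed an MFN exchange for contiguity and it failed. */
            free_pages((unsigned long)ret, order);
            return NULL;
        }

        memset(ret, 0, size);
        return ret;
    }

If that reading is right, then with this patch applied the pages dom0 gets
back tend to be machine-contiguous already, so the first branch is taken and
the explicit exchange is rarely needed.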
Konrad, can you enlighten us? The setup code in question that does the
remapping one page at a time is in arch/x86/xen/setup.c.
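From memory, the loop there looks something like the sketch below (heavily
simplified; the real loop, xen_do_chunk() if I remember right, also handles
the release path, errors and the p2m updates, and the helper name here is
mine):

    /* Populate the remapped region one 4k page per hypercall.  Because each
     * call is an order-0 allocation from the Xen heap, the current allocator
     * hands the pages back in descending MFN order, which is what produces
     * the discontiguous dom0 memory mentioned in the commit message. */
    static unsigned long remap_region_one_page_at_a_time(unsigned long start_pfn,
                                                         unsigned long end_pfn)
    {
        struct xen_memory_reservation reservation = {
            .extent_order = 0,        /* one page at a time */
            .domid        = DOMID_SELF,
        };
        unsigned long pfn, done = 0;

        for (pfn = start_pfn; pfn < end_pfn; pfn++) {
            xen_pfn_t frame = pfn;

            set_xen_guest_handle(reservation.extent_start, &frame);
            reservation.nr_extents = 1;

            if (HYPERVISOR_memory_op(XENMEM_populate_physmap, &reservation) <= 0)
                break;
            done++;
        }
        return done;
    }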
> > ---
> > xen/common/page_alloc.c | 5 +++--
> > 1 file changed, 3 insertions(+), 2 deletions(-)
> >
> > diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
> > index 601319c..27e7f18 100644
> > --- a/xen/common/page_alloc.c
> > +++ b/xen/common/page_alloc.c
> > @@ -677,9 +677,10 @@ static struct page_info *alloc_heap_pages(
> >      /* We may have to halve the chunk a number of times. */
> >      while ( j != order )
> >      {
> > -        PFN_ORDER(pg) = --j;
> > +        struct page_info *pg2;
>
> At the very least, Xen style mandates a blank line after this variable
> declaration.
Ack.
> ~Andrew
>
> > +        pg2 = pg + (1 << --j);
> > +        PFN_ORDER(pg) = j;
> >          page_list_add_tail(pg, &heap(node, zone, j));
> > -        pg += 1 << j;
> >      }
> >
> >      ASSERT(avail[node][zone] >= request);
>
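For anyone skimming the thread: the behaviour the commit message is after, as
I read it, is the following (illustrative C only, not the literal hunk above)
-- keep the first, lower-MFN half of each split chunk for the allocation in
progress and return the second half to the free list:

    /* Illustrative sketch of "use the first half when halving". */
    while ( j != order )
    {
        struct page_info *pg2;

        pg2 = pg + (1 << --j);     /* second (upper) half of the chunk */
        PFN_ORDER(pg2) = j;
        page_list_add_tail(pg2, &heap(node, zone, j));
        /* pg stays at the first half, which is what gets allocated. */
    }

That way a stream of small allocations (like dom0's page-at-a-time remap)
comes back in ascending, contiguous MFN order instead of descending order.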
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel