Re: [Xen-devel] Load increase after memory upgrade (part2)
On Thu, Jun 14, 2012 at 08:07:55AM +0100, Jan Beulich wrote:
> >>> On 13.06.12 at 18:55, Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx> wrote:
> > @@ -1576,7 +1578,11 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
> >  	struct page **pages;
> >  	unsigned int nr_pages, array_size, i;
> >  	gfp_t nested_gfp = (gfp_mask & GFP_RECLAIM_MASK) | __GFP_ZERO;
> > -
> > +	gfp_t dma_mask = gfp_mask & (__GFP_DMA | __GFP_DMA32);
> > +	if (xen_pv_domain()) {
> > +		if (dma_mask == (__GFP_DMA | __GFP_DMA32))
>
> As said in an earlier reply - without having any place that would
> ever set both flags at once, this whole conditional is meaningless.
> In our code - which I suppose is where you cloned this from - we

Yup.

> set GFP_VMALLOC32 to such a value for 32-bit kernels (which
> otherwise would merely use GFP_KERNEL, and hence not trigger

Ah, let me double check. Thanks for looking out for this.

> the code calling xen_limit_pages_to_max_mfn()). I don't recall
> though whether Carsten's problem was on a 32- or 64-bit kernel.
>
> Jan
>
> > +			gfp_mask &= ~(__GFP_DMA | __GFP_DMA32);
> > +	}
> >  	nr_pages = (area->size - PAGE_SIZE) >> PAGE_SHIFT;
> >  	array_size = (nr_pages * sizeof(struct page *));
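
For readers following the thread: below is a minimal, hypothetical sketch of the GFP_VMALLOC32 arrangement Jan is describing. It is an assumption based on his wording above, not a copy of the actual mainline or XenLinux source; the point is only to show why the "both flags at once" test in the quoted hunk could fire on a 32-bit PV kernel but never with a stock GFP_VMALLOC32.

/*
 * Hypothetical sketch, assuming the scheme Jan describes; the real
 * mainline/XenLinux definitions may differ in detail.
 *
 * On 64-bit kernels vmalloc_32() already passes a single zone flag, so
 * the allocation is naturally kept below 4GB.  On 32-bit kernels it
 * would otherwise pass plain GFP_KERNEL, and the Xen PV branch in the
 * quoted patch would never run - hence the double-flag marker.
 */
#if defined(CONFIG_64BIT) && defined(CONFIG_ZONE_DMA32)
#define GFP_VMALLOC32	(__GFP_DMA32 | GFP_KERNEL)
#elif defined(CONFIG_64BIT) && defined(CONFIG_ZONE_DMA)
#define GFP_VMALLOC32	(__GFP_DMA | GFP_KERNEL)
#else
/* 32-bit: use the otherwise-impossible flag pair as a marker */
#define GFP_VMALLOC32	(__GFP_DMA | __GFP_DMA32 | GFP_KERNEL)
#endif

/*
 * Simplified stand-in for the real vmalloc_32(), which goes through
 * __vmalloc_node(); shown only to illustrate where the mask enters
 * __vmalloc_area_node().
 */
void *vmalloc_32_sketch(unsigned long size)
{
	return __vmalloc(size, GFP_VMALLOC32, PAGE_KERNEL);
}

The idea is that __GFP_DMA | __GFP_DMA32 is a combination no ordinary caller requests, so it can serve as an in-band marker: __vmalloc_area_node() strips it again for PV domains and, in the XenLinux tree Jan refers to, that path is what ends up calling xen_limit_pages_to_max_mfn() so the backing pages stay below the 4GB machine-address boundary.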