[Xen-devel] RE: [PATCH] less verbose tmem, was: tmem_relinquish_page: failing order=<n>
(apologies if this is a repeat; having email problems)

> From: Jan Beulich [mailto:JBeulich@xxxxxxxxxx]
> Subject: Re: [PATCH] less verbose tmem, was: tmem_relinquish_page:
> failing order=<n>
>
> >>> Dan Magenheimer <dan.magenheimer@xxxxxxxxxx> 06.01.10 18:13 >>>
> >The message is relevant if any code calling alloc_heap_pages()
> >for order>0 isn't able to fall back to order=0. All usages today
> >in Xen can fall back, but future calls may not, so I'd prefer
> >to keep the printk there at least in xen-unstable. BUT there's
>
> What makes you think so? Iirc there are several xmalloc()-s of more
> than a page in size (which ultimately will call alloc_heap_pages()),
> and those usually don't have a fallback.

I don't know this for a fact, but I discussed it on xen-devel some time
ago (maybe over a year ago). Since tmem doesn't do anything until at
least one tmem-modified guest uses it, the only issue is if launching a
subsequent domain requires an allocation of order>0. Since Xen doesn't
have any kind of memory defragmenter, any such requirement is perilous
at best.

> >no reason for the message to be logged in a released Xen,
> >so here's a patch to #ifndef NDEBUG the printk.
>
> Even in a debug build this may be really annoying: on NUMA machines,
> the dma_bitsize mechanism (to avoid exhausting DMA memory) can
> cause close to 2000 of these messages when allocating Dom0's
> initial memory (in the absence of a severely restricting dom0_mem,
> and with node 0 spanning 4G or more). This makes booting
> unacceptably slow, especially when using a graphical console mode.

OK, I can take the patch one step further by only doing the printk if
tmem has been used at least once. Let me take a look at that.

Dan
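
For context, here is a minimal, self-contained sketch (not Xen code; the
allocator and helper names are stand-ins) of the fallback pattern the first
quoted paragraph refers to: a caller that would prefer one contiguous
order>0 allocation but can degrade to individual order-0 pages when the
larger request fails.

    /* Sketch only: illustrates "can fall back to order=0".  alloc_pages()
     * and PAGE_SIZE are stand-ins, not Xen's real interfaces. */
    #include <stdio.h>
    #include <stdlib.h>

    #define PAGE_SIZE 4096u

    /* Stand-in for an order-based allocator: returns 1 << order pages of
     * contiguous memory, or NULL if no such run is available. */
    static void *alloc_pages(unsigned int order)
    {
        return malloc((size_t)PAGE_SIZE << order);
    }

    /* Try a single contiguous order-N allocation; on failure, fall back
     * to filling the array with 1 << order separate order-0 pages. */
    static int get_pages(unsigned int order, void **pages)
    {
        unsigned int i, count = 1u << order;
        void *base = alloc_pages(order);

        if (base != NULL) {
            /* Contiguous run: hand out pages from within it. */
            for (i = 0; i < count; i++)
                pages[i] = (char *)base + (size_t)i * PAGE_SIZE;
            return 0;
        }

        /* Order-0 fallback: pages will not be contiguous. */
        for (i = 0; i < count; i++) {
            pages[i] = alloc_pages(0);
            if (pages[i] == NULL) {
                while (i--)
                    free(pages[i]);
                return -1;          /* even single pages unavailable */
            }
        }
        return 1;
    }

    int main(void)
    {
        void *pages[8];             /* room for order 3 = 8 pages */

        printf("get_pages(order=3) -> %d\n", get_pages(3, pages));
        return 0;                   /* cleanup omitted in this sketch */
    }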
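
And a minimal sketch of the change Dan proposes at the end: print the
message only in debug builds, and only once tmem has actually been used.
The tmem_ever_used flag and the function below are hypothetical stand-ins
for the tmem code that emits the "failing order=<n>" message, not the real
implementation.

    /* Sketch only: demonstrates gating the printk on both NDEBUG and a
     * "tmem has been used at least once" flag.  All names illustrative. */
    #include <stdio.h>
    #include <stdbool.h>

    /* Hypothetical: would be set the first time a guest issues a tmem op. */
    static bool tmem_ever_used;

    static void *relinquish_pages(unsigned int order)
    {
        if (order != 0) {
    #ifndef NDEBUG
            /* Only worth logging if tmem has actually been in use. */
            if (tmem_ever_used)
                printf("tmem_relinquish_page: failing order=%u\n", order);
    #endif
            return NULL;            /* only order-0 pages can be given back */
        }
        /* ... order-0 relinquish path would go here ... */
        return NULL;
    }

    int main(void)
    {
        relinquish_pages(4);        /* silent: tmem never used */
        tmem_ever_used = true;
        relinquish_pages(4);        /* logs, but only in debug builds */
        return 0;
    }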