
Re: [Xen-devel] kernel log flooded with: xen_balloon: reserve_additional_memory: add_memory() failed: -17

On 2012-12-21 20:23, Konrad Rzeszutek Wilk wrote:
On Wed, Dec 19, 2012 at 10:55:49AM +0000, James Dingwall wrote:
On 2012-12-19 08:47, James Dingwall wrote:

Hi Konrad,

Thanks for your interest in this problem, sorry for the delay in replying but today is the first day I've had access to the machine to do more testing.
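[For reference: the -17 in the log message from the subject line is a negative kernel errno value, and errno 17 is EEXIST ("File exists"), i.e. add_memory() is being asked to register a memory region that is already present. A minimal check of that mapping, on any system with Python:]

```python
import errno
import os

# The kernel reports errors as negative errno values, so the -17 in the
# "xen_balloon: reserve_additional_memory: add_memory() failed: -17"
# message corresponds to errno 17.
print(errno.errorcode[17])           # EEXIST
print(os.strerror(errno.EEXIST))     # File exists
```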

With some further investigation we have determined that the
different BIOS version does not seem to be a factor and the key
point is actually the Xen command line.  The reason that we had max:
specified is that without it we could not boot the kernel/xen
combination on an AMD platform.  I will do some further testing to

What happened? Did you report it in another email on this mailing list?
I reported this on xen-users.

see what the result of dom0_mem=6144m,max:6144m, as suggested at
http://wiki.xen.org/wiki/Xen_Best_Practices, gets us.
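[For anyone reproducing this, the two variants under discussion would look roughly like the following in a GRUB 2 setup (a sketch only; the variable name is the Debian/Ubuntu convention from /etc/default/grub, and the other options from this thread are omitted):]

```
# Initial dom0 allocation only; dom0 may still balloon up later:
GRUB_CMDLINE_XEN="dom0_mem=6144M"

# Initial allocation plus a hard ceiling, per the Xen Best Practices wiki:
GRUB_CMDLINE_XEN="dom0_mem=6144M,max:6144M"
```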

placeholder xsave=0 iommu=0 console=vga,com2 com2=115200,8n1

I think you don't need xsave=0 anymore? It was only needed with
a Fedora stock kernel (and the underlying issue I believe is
now fixed).
IIRC the stock Ubuntu Precise kernel needed this, but as we are running our own build now it is probably unnecessary.


If you were to run this with 'loglevel=8 debug' with the
dom0_mem=6144m and dom0_mem=max:6144m could you send both
'xl dmesg' and 'dmesg' outputs?
Output attached. Tricky question: was loglevel=8 debug meant for the Xen command line?
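[For completeness: loglevel= and debug are Linux kernel parameters, so my reading is that they belong on the dom0 kernel (module) line of the GRUB entry rather than the hypervisor line; the variable below is the one used by Debian-style /etc/grub.d/20_linux_xen and may differ on other distributions. The two requested outputs can then be captured as follows (file names are illustrative):]

```
# dom0 kernel parameters, not the Xen hypervisor line:
GRUB_CMDLINE_LINUX_XEN_REPLACE_DEFAULT="loglevel=8 debug"

# After rebooting, from dom0:
xl dmesg > xl-dmesg.txt   # hypervisor console ring buffer
dmesg    > dmesg.txt      # dom0 kernel log
```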


Attachment: 4096m-dmesg.txt
Description: Text document

Attachment: 4096m-xm-dmesg.txt
Description: Text document

Attachment: 4096m-xm-info.txt
Description: Text document

Attachment: txtbqrHUPmyHm.txt
Description: Text document

Attachment: txt_1MEVWOdf8.txt
Description: Text document

Attachment: txtjdmTjPV_Yn.txt
Description: Text document

Xen-devel mailing list


