
RE: [Xen-devel] xen: memory initialization/balloon fixes (#3)



> From: David Vrabel [mailto:david.vrabel@xxxxxxxxxx]
> Sent: Thursday, September 15, 2011 6:29 AM
> To: xen-devel@xxxxxxxxxxxxxxxxxxx
> Cc: Konrad Rzeszutek Wilk
> Subject: [Xen-devel] xen: memory initialization/balloon fixes (#3)
> 
> This set of patches fixes some bugs in the memory initialization under
> Xen and in Xen's memory balloon driver.  They can make 100s of MB of
> additional RAM available (depending on the system/configuration).
> 
> Patch 1 is already applied.
> 
> Patch 2 fixes a bug in patch 1 and should be queued for 3.1 (and along
> with patch 1 considered for 3.0 stable).
> 
> Patch 3 is a bug fix and should be queued for 3.1 and possibly
> queued for the 3.0 stable tree.
> 
> Patches 5 & 6 increase the amount of low memory in 32-bit domains
> started with < 1 GiB of RAM.  Please queue for 3.2.
> 
> Patch 7 releases all pages in the initial allocation with PFNs that
> lie within a 1-1 mapping.  This seems correct to me as I think that
> once the 1-1 mapping is set the MFN of the original page is lost, so
> it's no longer accessible by the kernel (and it cannot be used by
> another domain).

Hi David --

Thanks for your patches!  I am looking at a memory capacity/ballooning
weirdness that I hoped your patchset might fix, but so far it has not.
I'm wondering if there was an earlier fix that you are building upon
and that I am missing.

My problem occurs in a PV domU with an upstream-variant kernel based
on 3.0.5.  The problem is that the total amount of memory as seen
from inside the guest is always substantially less than the amount
of memory seen from outside the guest.  The difference appears to
be constant within a given boot, but assigning a different vm.cfg mem=
changes the amount.  (For example, the difference D is about 18MB on
a mem=128 boot and about 36MB on a mem=1024 boot.)

Part B of the problem (and the one most important to me) is that
setting /sys/devices/system/xen_memory/xen_memory0/target_kb
to X results in a MemTotal inside the domU (as observed by
"head -1 /proc/meminfo") of X-D.  This can be particularly painful
when X is aggressively small, as a MemTotal of X-D may result in OOMs.
To use kernel function/variable names (and I observed this with
some debugging code): when balloon_set_new_target(X) is called,
totalram_pages gets driven to X-D.
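For anyone wanting to reproduce the measurement, here is a minimal
sketch of how I compute D.  The sysfs path and the mem=128 numbers
are from my setup and are illustrative only; the helper function name
is my own:

```shell
#!/bin/sh
# Compute the ballooning discrepancy D (in kB) between the requested
# target and the MemTotal actually observed inside the domU.
compute_d() {
    target_kb=$1      # value written to .../xen_memory0/target_kb
    memtotal_kb=$2    # MemTotal from /proc/meminfo, in kB
    echo $(( target_kb - memtotal_kb ))
}

# On a live PV domU one would gather the inputs roughly like this
# (paths as on my 3.0.5-based kernel):
#   echo 131072 > /sys/devices/system/xen_memory/xen_memory0/target_kb
#   memtotal=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
#   compute_d 131072 "$memtotal"

# Example with the numbers I observed on a mem=128 boot
# (131072 kB requested, ~112640 kB reported):
compute_d 131072 112640   # prints 18432, i.e. D ~= 18MB
```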

I am using xm, but I don't think this is a toolchain problem because
the problem can be provoked and observed entirely within the guest...
though I suppose it is possible that the initial "mem=" is the
origin of the problem and the balloon driver just perpetuates
the initial difference.  (I tried xl... same problem... my
Xen/toolset version is 4.1.2-rc1-pre, cset 23102.)

The descriptions in your patchset sound exactly as if you are
attacking the same problem, but I'm not seeing any improved
result.  Any thoughts or ideas?

Thanks,
Dan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel