
Re: [Xen-devel] BUG: failed to save x86 HVM guest with 1TB ram



On 08/09/15 03:28, wangxin (U) wrote:

>> The check serves a dual purpose.  In the legacy case, it is to avoid
>> clobbering the upper bits of pfn information with pfn type information
>> for 32-bit toolstacks; any PFN above 2^28 would have its upper bits
>> clobbered by the type information.  This has been mitigated somewhat
>> in migration v2, as pfns are strictly 64-bit values, still using the
>> upper 4 bits for type information, allowing 60 bits for the PFN itself.

>> The second purpose is just as a limit on toolstack resources.
>> Migration requires allocating structures which scale linearly with the
>> size of the VM, the biggest of which would be ~1GB for the p2m.  Add
>> to this >1GB for the m2p, and suddenly a 32-bit toolstack process is
>> looking scarce on RAM.

>> During the development of migration v2, I didn't spend any time
>> considering if or how much it was sensible to lift the restriction by,
>> so the check was imported wholesale from the legacy code.

>> For now, I am going to say that it simply doesn't work.  Simply upping
>> the limit is only a stopgap measure; an HVM guest can still mess this up

> Will the stopgap measure work in Xen 4.5?

I don't know - try it.


>> by playing physmap games and mapping a page of RAM at a really high
>> (guest) physical address.  Long term, we need hypervisor support for
>> getting a compressed view of guest physical address space, so that
>> toolstack-side resources are proportional to the amount of RAM given
>> to the guest, not to how big a guest decides to make its physmap.

> Is that in your further work?

It is on the list, but no idea if/how it would be done at the moment.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

