
Re: [PATCH] x86/pod: Do not fragment PoD memory allocations



On Thu, Jan 28, 2021 at 11:19:41AM +0100, Jan Beulich wrote:
> On 27.01.2021 23:28, Elliott Mitchell wrote:
> > On Wed, Jan 27, 2021 at 09:03:32PM +0000, Andrew Cooper wrote:
> >> So.  What *should* happen is that if QEMU/OVMF dirties more memory than
> >> exists in the PoD cache, the domain gets terminated.
> >>
> >> Irrespective, Xen/dom0 dying isn't an expected consequence of any normal
> >> action like this.
> >>
> >> Do you have a serial log of the crash?  If not, can you set up a crash
> >> kernel environment to capture the logs, or alternatively reproduce the
> >> issue on a different box which does have serial?
> > 
> > No, I don't.  I'm set up to debug ARM failures, not x86 ones.
> 
> Then alternatively can you at least give conditions that need to
> be met to observe the problem, for someone to repro and then
> debug? (The less complex the better, of course.)

Multiple prior messages have included what I believe to be the minimal
case to reproduce.  Presently I believe the minimal constraints are:
maxmem >= host memory, memory < free Xen memory, and type HVM.  A
minimal kr45hme.cfg file:

type = "hvm"
memory = 1024
maxmem = 1073741824

I suspect maxmem > free Xen memory may be sufficient.  The instances I
can be certain of have had maxmem set to 7 times total host memory.
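
For anyone attempting a repro, a sketch of the invocation I have in mind
(assuming the xl toolstack and that the config above is saved as
kr45hme.cfg; the filename is only a placeholder):

    # Create the HVM guest; with memory < maxmem it is built in PoD mode
    # with a comparatively small PoD cache.
    xl create kr45hme.cfg

From the discussion above, the expectation is that QEMU/OVMF or the
guest then dirties more memory than the PoD cache holds.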


-- 
(\___(\___(\______          --=> 8-) EHM <=--          ______/)___/)___/)
 \BS (    |         ehem+sigmsg@xxxxxxx  PGP 87145445         |    )   /
  \_CS\   |  _____  -O #include <stddisclaimer.h> O-   _____  |   /  _/
8A19\___\_|_/58D2 7E3D DDF4 7BA6 <-PGP-> 41D1 B375 37D0 8714\_|_/___/5445





 

