Re: HVM/PVH Balloon crash
On 29.09.2021 17:31, Elliott Mitchell wrote:
> On Wed, Sep 29, 2021 at 03:32:15PM +0200, Jan Beulich wrote:
>> On 27.09.2021 00:53, Elliott Mitchell wrote:
>>> (XEN) Xen call trace:
>>> (XEN) [<ffff82d0402e8be0>] R
>>> arch/x86/mm/p2m.c#p2m_flush_table+0x240/0x260
>>> (XEN) [<ffff82d0402ec51c>] S p2m_flush_nestedp2m+0x1c/0x30
>>> (XEN) [<ffff82d0402e0528>] S
>>> arch/x86/mm/hap/hap.c#hap_write_p2m_entry+0x378/0x490
>>
>> hap_write_p2m_entry() calling p2m_flush_nestedp2m() suggests that
>> nestedhvm_enabled() was true for the domain. While we will want to
>> fix this, nested virt is experimental (even in current staging),
>> and hence there at least is no security concern.
>
> Copy and paste from the xl.cfg man page:
>
> nestedhvm=BOOLEAN
> Enables or disables guest access to hardware virtualisation
> features, e.g. it allows a guest Operating System to also function
> as a hypervisor. You may want this option if you want to run
> another hypervisor (including another copy of Xen) within a Xen
> guest or to support a guest Operating System which uses hardware
> virtualisation extensions (e.g. Windows XP compatibility mode on
> more modern Windows OS). This option is disabled by default.
>
> "This option is disabled by default." doesn't mean "this is an
> experimental feature with no security support and is likely to crash the
> hypervisor".
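[For context, the option being discussed is a single line in the guest's xl configuration file; this fragment is illustrative, with a hypothetical guest name and sizes:]

```shell
# xl.cfg fragment (illustrative) -- enables nested HVM for this guest
name     = "nested-guest"
type     = "hvm"
memory   = 2048
nestedhvm = 1    # disabled (0) by default per the xl.cfg man page
```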
Correct, but this isn't the only place to look. Quoting
SUPPORT.md:
"### x86/Nested HVM
This means providing hardware virtualization support to guest VMs
allowing, for instance, a nested Xen to support both PV and HVM guests.
It also implies support for other hypervisors,
such as KVM, Hyper-V, Bromium, and so on as guests.
Status, x86 HVM: Experimental"
And with an experimental feature you have to expect crashes, even
though we would of course prefer that you not hit any.
>> Can you confirm that by leaving nested off you don't run into this
>> (or a similar) issue?
>
> Hypervisor doesn't panic. `xl dmesg` does end up with:
>
> (XEN) p2m_pod_demand_populate: Dom72 out of PoD memory! (tot=524304
> ents=28773031 dom72)
> (XEN) domain_crash called from p2m-pod.c:1233
>
> Which is problematic. maxmem for this domain is set to allow for trading
> memory around, so it is desirable for it to successfully load even when
> its maximum isn't available.
Yet that's still a configuration error (of the guest), not a bug in
Xen.
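[For context: a guest whose maxmem exceeds its memory target boots in populate-on-demand (PoD) mode, and the guest's balloon driver must balloon down to the memory= target before the PoD pool runs dry; otherwise Xen crashes the domain as seen above. An illustrative fragment, with hypothetical sizes:]

```shell
# xl.cfg fragment (illustrative) -- this gap between memory= and maxmem=
# is what puts the guest into PoD mode at boot
memory  = 2048   # target the guest's balloon driver must reach
maxmem  = 8192   # ceiling for later ballooning up
```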
Thanks for confirming that the issue is nested-hvm related. I'm in the
process of putting together a draft fix, but I'm afraid there's a
bigger underlying issue, so I'm not convinced we would want to go with
that fix even if you were to find that it helps in your case.
Jan