
[xen staging] x86/shadow: adjust monitor table prealloc amount



commit fcd373dc72ca30e6cd559943c09dcd13a79d8c6a
Author:     Jan Beulich <jbeulich@xxxxxxxx>
AuthorDate: Tue Feb 24 09:16:43 2026 +0100
Commit:     Jan Beulich <jbeulich@xxxxxxxx>
CommitDate: Tue Feb 24 10:29:58 2026 +0100

    x86/shadow: adjust monitor table prealloc amount
    
    While 670d6b908ff2 ('x86 shadow: Move the shadow linear mapping for
    n-on-3-on-4 shadows so') bumped the amount by one less than needed for
    the 32-on-64 case (which luckily was dead code, and hence no bump was
    necessary in the first place), 0b841314dace ('x86/shadow:
    sh_{make,destroy}_monitor_table() are "even more" HVM-only'), dropping
    that dead code, then didn't adjust the amount back down. Yet even the
    original amount was too high in certain cases. Switch to pre-allocating
    just as much as is actually going to be needed.
    
    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Acked-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
---
 xen/arch/x86/mm/shadow/hvm.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/x86/mm/shadow/hvm.c b/xen/arch/x86/mm/shadow/hvm.c
index a7db8addc0..3565c7940d 100644
--- a/xen/arch/x86/mm/shadow/hvm.c
+++ b/xen/arch/x86/mm/shadow/hvm.c
@@ -747,7 +747,7 @@ mfn_t sh_make_monitor_table(const struct vcpu *v, unsigned int shadow_levels)
     ASSERT(!pagetable_get_pfn(v->arch.hvm.monitor_table));
 
     /* Guarantee we can get the memory we need */
-    if ( !shadow_prealloc(d, SH_type_monitor_table, CONFIG_PAGING_LEVELS) )
+    if ( !shadow_prealloc(d, SH_type_monitor_table, shadow_levels < 4 ? 3 : 1) )
         return INVALID_MFN;
 
     m4mfn = shadow_alloc(d, SH_type_monitor_table, 0);
--
generated by git-patchbot for /home/xen/git/xen.git#staging
