[PATCH] VT-x: simplify/clarify vmx_load_pdptrs()
* Guests outside of long mode can't have PCID enabled. Drop the
  respective check to make more obvious that there's no security issue
  (from potentially accessing past the mapped page's boundary).
* Only the low 32 bits of CR3 are relevant outside of long mode, even
  if they remained unchanged after leaving that mode.
* Drop the unnecessary and badly typed local variable p.
* Don't open-code hvm_long_mode_active() (and extend this to the
  related nested VT-x code).
* Constify guest_pdptes to clarify that we're only reading from the
  page.

Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
---
This is intentionally not addressing any of the other shortcomings of
the function, as was done by the previously posted
https://lists.xenproject.org/archives/html/xen-devel/2018-07/msg01790.html.
This will need to be the subject of a further change.

--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -1312,17 +1312,16 @@ static void vmx_set_interrupt_shadow(str
 
 static void vmx_load_pdptrs(struct vcpu *v)
 {
-    unsigned long cr3 = v->arch.hvm.guest_cr[3];
-    uint64_t *guest_pdptes;
+    uint32_t cr3 = v->arch.hvm.guest_cr[3];
+    const uint64_t *guest_pdptes;
     struct page_info *page;
     p2m_type_t p2mt;
-    char *p;
 
     /* EPT needs to load PDPTRS into VMCS for PAE. */
-    if ( !hvm_pae_enabled(v) || (v->arch.hvm.guest_efer & EFER_LMA) )
+    if ( !hvm_pae_enabled(v) || hvm_long_mode_active(v) )
         return;
 
-    if ( (cr3 & 0x1fUL) && !hvm_pcid_enabled(v) )
+    if ( cr3 & 0x1f )
         goto crash;
 
     page = get_page_from_gfn(v->domain, cr3 >> PAGE_SHIFT, &p2mt, P2M_UNSHARE);
@@ -1332,14 +1331,12 @@ static void vmx_load_pdptrs(struct vcpu
          * queue, but this is the wrong place. We're holding at least
          * the paging lock */
         gdprintk(XENLOG_ERR,
-                 "Bad cr3 on load pdptrs gfn %lx type %d\n",
+                 "Bad cr3 on load pdptrs gfn %"PRIx32" type %d\n",
                  cr3 >> PAGE_SHIFT, (int) p2mt);
         goto crash;
     }
 
-    p = __map_domain_page(page);
-
-    guest_pdptes = (uint64_t *)(p + (cr3 & ~PAGE_MASK));
+    guest_pdptes = __map_domain_page(page) + (cr3 & ~PAGE_MASK);
 
     /*
      * We do not check the PDPTRs for validity. The CPU will do this during
@@ -1356,7 +1353,7 @@ static void vmx_load_pdptrs(struct vcpu
 
     vmx_vmcs_exit(v);
 
-    unmap_domain_page(p);
+    unmap_domain_page(guest_pdptes);
 
     put_page(page);
     return;

--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1234,7 +1234,7 @@ static void virtual_vmentry(struct cpu_u
         paging_update_paging_modes(v);
 
     if ( nvmx_ept_enabled(v) && hvm_pae_enabled(v) &&
-         !(v->arch.hvm.guest_efer & EFER_LMA) )
+         !hvm_long_mode_active(v) )
         vvmcs_to_shadow_bulk(v, ARRAY_SIZE(gpdpte_fields),
                              gpdpte_fields);
 
     regs->rip = get_vvmcs(v, GUEST_RIP);
@@ -1437,7 +1437,7 @@ static void virtual_vmexit(struct cpu_us
     sync_exception_state(v);
 
     if ( nvmx_ept_enabled(v) && hvm_pae_enabled(v) &&
-         !(v->arch.hvm.guest_efer & EFER_LMA) )
+         !hvm_long_mode_active(v) )
         shadow_to_vvmcs_bulk(v, ARRAY_SIZE(gpdpte_fields),
                              gpdpte_fields);
 
     /* This will clear current pCPU bit in p2m->dirty_cpumask */