[Xen-devel] [PATCH v2 2/5] x86/pv: map_ldt_shadow_page() cleanup
Switch the return value from int to bool, to match its semantics.

Switch its parameter from a frame offset to a byte offset, simplifying the
sole caller and allowing for an extra sanity check that the fault is within
the LDT limit.

Drop the unnecessary gmfn and okay local variables, and rename the gva local
variable to linear.  Rename l1e to gl1e, and simplify the construction of the
new pte by taking the (now validated) gl1e and ensuring that _PAGE_RW is set.

Calculate the pte to be updated outside of the spinlock, which halves the
size of the critical region.

Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
Reviewed-by: Wei Liu <wei.liu2@xxxxxxxxxx>
---
CC: Jan Beulich <JBeulich@xxxxxxxx>
CC: Wei Liu <wei.liu2@xxxxxxxxxx>

v2:
 * Adjustments to various comments.
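
(Illustrative note, not part of the patch: the arithmetic below mirrors the
new ASSERT() and pl1e calculation in the mm.c hunk.  The 8-byte descriptor
size, PAGE_SHIFT and the slot-16 bias are taken from the patch; the sample
offset and LDT limit are made up.)

  /* Standalone sketch: byte offset -> descriptor index and shadow pte slot. */
  #include <assert.h>
  #include <stdio.h>

  #define PAGE_SHIFT 12               /* x86: 4k pages */

  int main(void)
  {
      unsigned int offset = 0x1008;   /* hypothetical faulting byte offset */
      unsigned int ldt_ents = 8192;   /* hypothetical LDT limit, in entries */

      /* Mirrors the new ASSERT(): each LDT descriptor is 8 bytes, so the
       * descriptor index is the byte offset shifted right by 3. */
      assert((offset >> 3) <= ldt_ents);

      /* Mirrors the new pl1e calculation: the LDT shadow slots start at
       * entry 16 of the per-vcpu GDT/LDT page table. */
      unsigned int slot = (offset >> PAGE_SHIFT) + 16;

      printf("offset %#x -> descriptor %u, pte slot %u\n",
             offset, offset >> 3, slot);

      return 0;
  }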
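
(Similarly illustrative: a generic sketch of the critical-region shrink, with
hypothetical names and a pthread mutex standing in for Xen's spinlock.  The
point is that computing the destination pointer reads no shared state, so it
can safely move outside the lock, leaving only the two updates inside.)

  #include <pthread.h>

  struct table {
      pthread_mutex_t lock;
      unsigned long entries[32];
      unsigned int mapcnt;
  };

  static struct table t = { .lock = PTHREAD_MUTEX_INITIALIZER };

  void update_entry(unsigned int idx, unsigned long val)
  {
      /* Pure address arithmetic: safe to compute before taking the lock. */
      unsigned long *slot = &t.entries[idx];

      pthread_mutex_lock(&t.lock);    /* critical region: just two updates */
      *slot = val;
      t.mapcnt++;
      pthread_mutex_unlock(&t.lock);
  }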
---
 xen/arch/x86/mm.c        | 48 +++++++++++++++++++++++++++++-------------------
 xen/arch/x86/traps.c     |  2 +-
 xen/include/asm-x86/mm.h |  2 +-
 3 files changed, 31 insertions(+), 21 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 79780da..f1983b5 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -667,45 +667,55 @@ static int alloc_segdesc_page(struct page_info *page)
 }
 
 
-/* Map shadow page at offset @off. */
-int map_ldt_shadow_page(unsigned int off)
+/*
+ * Map a guest's LDT page (covering the byte at @offset from start of the LDT)
+ * into Xen's virtual range.  Returns true if the mapping changed, false
+ * otherwise.
+ */
+bool map_ldt_shadow_page(unsigned int offset)
 {
     struct vcpu *v = current;
     struct domain *d = v->domain;
-    unsigned long gmfn;
     struct page_info *page;
-    l1_pgentry_t l1e, nl1e;
-    unsigned long gva = v->arch.pv_vcpu.ldt_base + (off << PAGE_SHIFT);
-    int okay;
+    l1_pgentry_t gl1e, *pl1e;
+    unsigned long linear = v->arch.pv_vcpu.ldt_base + offset;
 
     BUG_ON(unlikely(in_irq()));
 
+    /*
+     * Hardware limit checking should guarantee this property.  NB. This is
+     * safe as updates to the LDT can only be made by MMUEXT_SET_LDT to the
+     * current vcpu, and vcpu_reset() will block until this vcpu has been
+     * descheduled before continuing.
+     */
+    ASSERT((offset >> 3) <= v->arch.pv_vcpu.ldt_ents);
+
     if ( is_pv_32bit_domain(d) )
-        gva = (u32)gva;
-    guest_get_eff_kern_l1e(gva, &l1e);
-    if ( unlikely(!(l1e_get_flags(l1e) & _PAGE_PRESENT)) )
-        return 0;
+        linear = (uint32_t)linear;
 
-    gmfn = l1e_get_pfn(l1e);
-    page = get_page_from_gfn(d, gmfn, NULL, P2M_ALLOC);
+    guest_get_eff_kern_l1e(linear, &gl1e);
+    if ( unlikely(!(l1e_get_flags(gl1e) & _PAGE_PRESENT)) )
+        return false;
+
+    page = get_page_from_gfn(d, l1e_get_pfn(gl1e), NULL, P2M_ALLOC);
     if ( unlikely(!page) )
-        return 0;
+        return false;
 
-    okay = get_page_type(page, PGT_seg_desc_page);
-    if ( unlikely(!okay) )
+    if ( unlikely(!get_page_type(page, PGT_seg_desc_page)) )
     {
         put_page(page);
-        return 0;
+        return false;
     }
 
-    nl1e = l1e_from_page(page, l1e_get_flags(l1e) | _PAGE_RW);
+    pl1e = &gdt_ldt_ptes(d, v)[(offset >> PAGE_SHIFT) + 16];
+    l1e_add_flags(gl1e, _PAGE_RW);
 
     spin_lock(&v->arch.pv_vcpu.shadow_ldt_lock);
-    l1e_write(&gdt_ldt_ptes(d, v)[off + 16], nl1e);
+    l1e_write(pl1e, gl1e);
     v->arch.pv_vcpu.shadow_ldt_mapcnt++;
     spin_unlock(&v->arch.pv_vcpu.shadow_ldt_lock);
 
-    return 1;
+    return true;
 }
 
 
diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index b93b3d1..f525fa2 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -1112,7 +1112,7 @@ static int handle_gdt_ldt_mapping_fault(unsigned long offset,
     if ( likely(is_ldt_area) )
     {
         /* LDT fault: Copy a mapping from the guest's LDT, if it is valid. */
-        if ( likely(map_ldt_shadow_page(offset >> PAGE_SHIFT)) )
+        if ( likely(map_ldt_shadow_page(offset)) )
         {
             if ( guest_mode(regs) )
                 trace_trap_two_addr(TRC_PV_GDT_LDT_MAPPING_FAULT,
diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h
index 5760e05..ec7ce3c 100644
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -553,7 +553,7 @@ long subarch_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg);
 int compat_arch_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void));
 int compat_subarch_memory_op(int op, XEN_GUEST_HANDLE_PARAM(void));
 
-int map_ldt_shadow_page(unsigned int);
+bool map_ldt_shadow_page(unsigned int);
 
 #define NIL(type) ((type *)-sizeof(type))
 #define IS_NIL(ptr) (!((uintptr_t)(ptr) + sizeof(*(ptr))))
-- 
2.1.4

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel