Re: [Xen-devel] [xen-unstable test] 16788: regressions - FAIL
>>> On 04.03.13 at 12:45, Tim Deegan <tim@xxxxxxx> wrote:
> At 11:29 +0000 on 04 Mar (1362396574), Jan Beulich wrote:
>> >>> On 04.03.13 at 12:12, Tim Deegan <tim@xxxxxxx> wrote:
>> > At 10:57 +0000 on 04 Mar (1362394626), Jan Beulich wrote:
>> >> Question - is there really no better way than this to deal with
>> >> cross page emulated writes?
>> >
>> > There probably is -- all that's wanted here is two mappings that are
>> > guaranteed to be contiguous in virtual address.  The LDT slots were
>> > available; it would have been better to document this use where
>> > LDT_VIRT_START is defined, though.
>> >
>> > AFAICT, we'll need to:
>> >  - cause some equivalent pair of PTEs to be available (either per-cpu
>> >    or per-vcpu); or
>> >  - add a map_two_domain_pages() function to use the mapcache for this; or
>>
>> Would vmap() do? Or do we expect this code path to be performance
>> sensitive?
>
> I can't really tell what vmap() is for, since it seems to have no
> documentation, and the checkin description just says "implement vmap()".
> But it looks plausible, and emulating page-crossing writes oughtn't to
> be performance-sensitive.
Like this then (compile tested only so far):

--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -4611,7 +4611,6 @@ static void *emulate_map_dest(struct vcp
                               u32 bytes,
                               struct sh_emulate_ctxt *sh_ctxt)
 {
-    unsigned long offset;
     void *map = NULL;
 
     sh_ctxt->mfn1 = emulate_gva_to_mfn(v, vaddr, sh_ctxt);
@@ -4643,6 +4642,8 @@ static void *emulate_map_dest(struct vcp
     }
     else
     {
+        unsigned long mfns[2];
+
         /* Cross-page emulated writes are only supported for HVM guests;
          * PV guests ought to know better */
         if ( !is_hvm_vcpu(v) )
@@ -4660,17 +4661,11 @@ static void *emulate_map_dest(struct vcp
         /* Cross-page writes mean probably not a pagetable */
         sh_remove_shadows(v, sh_ctxt->mfn2, 0, 0 /* Slow, can fail */ );
 
-        /* Hack: we map the pages into the vcpu's LDT space, since we
-         * know that we're not going to need the LDT for HVM guests,
-         * and only HVM guests are allowed unaligned writes. */
-        ASSERT(is_hvm_vcpu(v));
-        map = (void *)LDT_VIRT_START(v);
-        offset = l1_linear_offset((unsigned long) map);
-        l1e_write(&__linear_l1_table[offset],
-                  l1e_from_pfn(mfn_x(sh_ctxt->mfn1), __PAGE_HYPERVISOR));
-        l1e_write(&__linear_l1_table[offset + 1],
-                  l1e_from_pfn(mfn_x(sh_ctxt->mfn2), __PAGE_HYPERVISOR));
-        flush_tlb_local();
+        mfns[0] = mfn_x(sh_ctxt->mfn1);
+        mfns[1] = mfn_x(sh_ctxt->mfn2);
+        map = vmap(mfns, 2);
+        if ( !map )
+            return MAPPING_UNHANDLEABLE;
         map += (vaddr & ~PAGE_MASK);
     }
 
@@ -4748,14 +4743,8 @@ static void emulate_unmap_dest(struct vc
 
     if ( unlikely(mfn_valid(sh_ctxt->mfn2)) )
     {
-        unsigned long offset;
         paging_mark_dirty(v->domain, mfn_x(sh_ctxt->mfn2));
-        /* Undo the hacky two-frame contiguous map. */
-        ASSERT(((unsigned long) addr & PAGE_MASK) == LDT_VIRT_START(v));
-        offset = l1_linear_offset((unsigned long) addr);
-        l1e_write(&__linear_l1_table[offset], l1e_empty());
-        l1e_write(&__linear_l1_table[offset + 1], l1e_empty());
-        flush_tlb_all();
+        vunmap(addr - ((unsigned long)addr & ~PAGE_MASK));
     }
     else
         sh_unmap_domain_page(addr);

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel