Re: dom0 PV looping on search_pre_exception_table()
On 10/12/2020 17:35, Manuel Bouyer wrote:
> On Thu, Dec 10, 2020 at 05:18:39PM +0000, Andrew Cooper wrote:
>> However, Xen finds the mapping not-present when trying to demand-map it,
>> hence why the #PF is forwarded to the kernel.
>>
>> The way we pull guest virtual addresses was altered by XSA-286 (released
>> not too long ago despite its apparent age), but *should* have been no
>> functional change.  I wonder if we accidentally broke something there.
>> What exactly are you running, Xen-wise, with the 4.13 version?
> It is 4.13.2, with the patch for XSA351

Thanks.

>> Given that this is init failing, presumably the issue would repro with
>> the net installer version too?
> Hopefully yes, maybe even as a domU. But I don't have a linux dom0 to test.
>
> If you have a Xen setup you can test with
> http://ftp.netbsd.org/pub/NetBSD/NetBSD-9.1/amd64/binary/kernel/netbsd-INSTALL_XEN3_DOMU.gz
>
> note that this won't boot as a dom0 kernel.

I've repro'd the problem.  When I modify Xen to explicitly demand-map the
LDT in the MMUEXT_SET_LDT hypercall, everything works fine.
Specifically, this delta:

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 723cc1070f..71a791d877 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -3742,12 +3742,31 @@ long do_mmuext_op(
             else if ( (curr->arch.pv.ldt_ents != ents) ||
                       (curr->arch.pv.ldt_base != ptr) )
             {
+                unsigned int err = 0, tmp;
+
                 if ( pv_destroy_ldt(curr) )
                     flush_tlb_local();
 
                 curr->arch.pv.ldt_base = ptr;
                 curr->arch.pv.ldt_ents = ents;
                 load_LDT(curr);
+
+                printk("Probe new LDT\n");
+                asm volatile (
+                    "mov %%es, %[tmp];\n\t"
+                    "1: mov %[sel], %%es;\n\t"
+                    "mov %[tmp], %%es;\n\t"
+                    "2:\n\t"
+                    ".section .fixup,\"ax\"\n"
+                    "3: mov $1, %[err];\n\t"
+                    "jmp 2b\n\t"
+                    ".previous\n\t"
+                    _ASM_EXTABLE(1b, 3b)
+                    : [err] "+r" (err), [tmp] "=&r" (tmp)
+                    : [sel] "r" (0x3f)
+                    : "memory");
+                printk(" => err %u\n", err);
             }
             break;
         }

which stashes %es, explicitly loads init's %ss selector (0x3f) to trigger
the #PF and Xen's lazy mapping, then restores %es.

(XEN) d1v0 Dropping PAT write of 0007010600070106
(XEN) Probe new LDT
(XEN) *** LDT Successful map, slot 0
(XEN)  => err 0
(XEN) d1 L1TF-vulnerable L4e 0000000801e88000 - Shadowing

And the domain is up and running:

# xl list
Name                                        ID   Mem VCPUs      State   Time(s)
Domain-0                                     0  2656     8     r-----      44.6
netbsd                                       1   256     1     -b----       5.3

(Probably confused about the fact I gave it no disk...)

Now, in this case, we find that the virtual address provided for the LDT
is mapped, so we successfully copy the mapping into Xen's area, and init
runs happily.

So the mystery is why the LDT virtual address is not-present when Xen
tries to lazily map the LDT at the normal point...

Presumably you've got no Meltdown mitigations going on within the NetBSD
kernel?  (I suspect not, seeing as changing Xen changes the behaviour, but
it is worth asking.)

~Andrew