Re: [Xen-devel] [PATCH v2] x86/PV: don't wrongly hide/expose CPUID.OSXSAVE from/to user mode
On 23/08/16 10:00, Jan Beulich wrote:
>>>> On 22.08.16 at 19:30, <andrew.cooper3@xxxxxxxxxx> wrote:
>> On 19/08/16 19:07, Andrew Cooper wrote:
>>> On 19/08/16 18:09, Andrew Cooper wrote:
>>>> On 19/08/16 13:53, Jan Beulich wrote:
>>>>> User mode code generally cannot be expected to invoke the PV-enabled
>>>>> CPUID Xen supports, and prior to the CPUID levelling changes for 4.7
>>>>> (as well as even nowadays on levelling incapable hardware) such CPUID
>>>>> invocations actually saw the host CR4.OSXSAVE value, whereas prior to
>>>>> this patch
>>>>> - on Intel guest user mode always saw the flag clear,
>>>>> - on AMD guest user mode saw the flag set even when the guest kernel
>>>>>   didn't enable use of XSAVE/XRSTOR.
>>>>> Fold in the guest view of CR4.OSXSAVE when setting the levelling MSRs,
>>>>> just like we do in other CPUID handling.
>>>>>
>>>>> To make guest CR4 changes immediately visible via CPUID, also invoke
>>>>> ctxt_switch_levelling() from the CR4 write path.
>>
>> I was just putting together a patch series to make these changes in a
>> more consistent manner, and have found a spanner in the works.
>>
>> Hiding Xen's view of OSXSAVE from a guest's native cpuid will break
>> Linux, because of the pile of hacks making up the current PV XSAVE
>> support.
> No, because PV Linux doesn't use native CPUID.  We clearly should
> not hide OSXSAVE in the PV variant (and your earlier series did take
> great care to avoid that).

Where?  There is no distinction for OSXSAVE.  The only place in
pv_cpuid() which distinguishes native vs emulated cpuid is the
X86_FEATURE_MONITOR handling.

We distinguish kernel and userspace quite a lot, so the compatibility
breakages are only visible to the kernel.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel