Re: [PATCH 3/3] x86: switch to eager fpu save / restore
On 04.03.2024 10:13, Fouad Hilly wrote:
> From: Wei Liu <wei.liu2@xxxxxxxxxx>
>
> Previously FPU is lazily switched. Due to the fact that a malicious
> guest can speculatively read the not yet switched out register states,
> we need to eagerly switch FPU states when a domain is scheduled to
> run.

The speculation aspect is already covered as of XSA-267, and hence imo
cannot be used as a justification here.

> In the new world, Xen will eagerly switch FPU context in the
> scheduler. Xen itself won't set CR0.TS other than for the purpose of
> servicing a PV guest.
>
> The following things are done:
>
> 1. Xen will only set and clear CR0.TS on behalf of a PV guest. Any #NM
>    received in Xen should only be delivered to the running PV guest.
>
> 2. Xen no longer causes vmexit for #NM for HVM guests when nested HVM
>    is not in use.
>
> 3. When nested HVM is in use, Xen needs to trap #NM if specified by
>    the L1 hypervisor, but all #NM handling is left to L1 hypervisor
>    to deal with.
>
> 4. Xen saves and restores FPU states wherever it needs to. The
>    following places are modified:
>    1. Scheduling in and out a guest;
>    2. Calling EFI runtime service;
>    3. ACPI reset;
>    4. x86 insn emulator fpu insn emulation.
>
> 5. Treat FPU as always initialised. Adjust following components:
>    1. HVM vcpu context save / load code;
>    2. arch_{get,set}_info_guest;
>    3. VLAPIC code.
>
> 6. Delete lazy FPU handling code.

This wants splitting up. I'm actually puzzled that step 6 does not
include purging of e.g. opt_eager_fpu. Converting that to "always true"
would seem to be the one first step with a functional change. All
further steps would be removal of then-dead code.

> Strip XCR0 and IA32_XSS manipulation from __context_switch. We need to
> be able to zero out previously used state components. Push everything
> into fpu_xrstor as that's the most suitable place.
>
> Tested on AMD with PKU disabled and Intel, no performance degradation.
I dare to at least question whether this is going to hold once we
support one of the huge state components (AMX patches have been pending
for years).

Jan