Re: Xen Security Advisory 466 v3 (CVE-2024-53241) - Xen hypercall page unsafe against speculative attacks
On 06.01.2025 18:19, David Woodhouse wrote:
> On Thu, 2025-01-02 at 15:16 +0100, Jürgen Groß wrote:
>> On 02.01.25 15:06, David Woodhouse wrote:
>>> On Thu, 2025-01-02 at 15:02 +0100, Jürgen Groß wrote:
>>>>> Are you suggesting that you're able to enable the CPU-specific CFI
>>>>> protections before you even know whether it's an Intel or AMD CPU?
>>>>
>>>> Not before that, but maybe rather soon afterwards. And the hypercall
>>>> page needs to be decommissioned before the next hypercall is
>>>> happening. The question is whether we have a hook in place to do that
>>>> switch between cpu identification and CFI enabling.
>>>
>>> Not sure that's how I'd phrase it. Even if we have to add a hook at the
>>> right time to switch from the Xen-populated hypercall page to the one
>>> filled in by Linux, the question is whether adding that hook is simpler
>>> than all this early static_call stuff that's been thrown together, and
>>> the open questions about the 64-bit latching.
>>
>> This is a valid question, yes. My first version of these patches didn't
>> work with static_call, but used the paravirt call patching mechanism,
>> replacing an indirect call with a direct one via ALTERNATIVEs. That
>> version was disliked by some involved x86 maintainers, resulting in the
>> addition of the early static_call update mechanism.
>>
>> One thing to mention regarding the 64-bit latching: what would you do
>> with HVM domains? Those are setting up the hypercall page rather late.
>> In case the kernel would use CFI, enabling would happen way before the
>> guest would issue any hypercall, so I guess the latching needs to happen
>> by other means anyway. Or would you want to register the hypercall page
>> without ever intending to use it?
>
> With xen_no_vector_callback on the command line, the hypervisor doesn't
> realise that the guest is 64-bit until long after all the CPUs are
> brought up.
>
> It does boot (and hey, QEMU does get this right!) but I'm still
> concerned that all those shared structures are 32-bit for that long. I
> do think the guest kernel should either set the hypercall page, or
> HVM_PARAM_CALLBACK_IRQ, as early as possible.

How about we adjust the behavior in Xen instead: we could latch the size
on every hypercall, making sure to invoke update_domain_wallclock_time()
only when the size actually changed (to not incur the extra overhead),
unless originating from the two places the latching is currently done at
(to avoid altering existing behavior)?

Then again, latching more frequently (as suggested above or by any other
model) also comes with the risk of causing issues, at the very least for
"exotic" guests. E.g. with two vCPU-s in different modes, we'd ping-pong
the guest between both formats then.

Jan
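For illustration, the latching rule proposed above can be sketched in plain C. This is a minimal, hypothetical model, not Xen code: the struct, the `latch_site` enum, and the counter stand-in for update_domain_wallclock_time() are all invented names; only the decision logic (latch on every hypercall, propagate the costly update only on an actual size change, while the two existing latch sites keep their unconditional behavior) follows the suggestion in the mail.

```c
#include <stdbool.h>

/* Hypothetical model of the proposed latching rule. Names are invented;
 * only the control flow mirrors the suggestion above. */

enum latch_site { SITE_HYPERCALL, SITE_EXISTING_A, SITE_EXISTING_B };

struct domain_state {
    bool is_64bit;          /* currently latched guest word size */
    int wallclock_updates;  /* counts stand-in wallclock-update calls */
};

/* Stand-in for Xen's update_domain_wallclock_time(). */
static void update_domain_wallclock_time(struct domain_state *d)
{
    d->wallclock_updates++;
}

static void latch_guest_size(struct domain_state *d, bool caller_is_64bit,
                             enum latch_site site)
{
    bool changed = (d->is_64bit != caller_is_64bit);

    d->is_64bit = caller_is_64bit;

    /* The two existing latch sites keep their unconditional behavior;
     * the new per-hypercall site only pays the cost on a size flip. */
    if (site != SITE_HYPERCALL || changed)
        update_domain_wallclock_time(d);
}
```

The model also makes the "exotic guest" hazard visible: with two vCPUs in different modes issuing hypercalls alternately, `changed` is true on every call, so the domain would ping-pong between formats and pay the update cost each time.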