Re: [PATCH 3/5] x86/hvm: fix handling of accesses to partial r/o MMIO pages
On Tue, Apr 15, 2025 at 09:32:37AM +0200, Jan Beulich wrote:
> On 14.04.2025 18:13, Roger Pau Monné wrote:
> > On Mon, Apr 14, 2025 at 05:24:32PM +0200, Jan Beulich wrote:
> >> On 14.04.2025 15:53, Roger Pau Monné wrote:
> >>> On Mon, Apr 14, 2025 at 08:33:44AM +0200, Jan Beulich wrote:
> >>>> On 11.04.2025 12:54, Roger Pau Monne wrote:
>
> Independently (i.e. not for this patch) we may want to actually assert
> that npfec.present is set for P2M types where we demand that to always
> be the case. (I wouldn't be too surprised if we actually found such an
> assertion to trigger.)
>
> >>>> I'm also concerned of e.g. VT-x'es APIC access MFN, which is
> >>>> p2m_mmio_direct.
> >>>
> >>> But that won't go into hvm_hap_nested_page_fault() when using
> >>> cpu_has_vmx_virtualize_apic_accesses (and thus having an APIC page
> >>> mapped as p2m_mmio_direct)?
> >>>
> >>> It would instead be an EXIT_REASON_APIC_ACCESS vmexit which is
> >>> handled differently?
> >>
> >> All true as long as things work as expected (potentially including
> >> the guest also behaving as expected). Also this was explicitly only
> >> an example I could readily think of. I'm simply wary of
> >> handle_mmio_with_translation() now getting things to handle it's
> >> not meant to ever see.
> >
> > How was access to MMIO r/o regions supposed to be handled before
> > 33c19df9a5a0 (~2015)? I see that setting r/o MMIO p2m entries was
> > added way before to p2m_type_to_flags() and ept_p2m_type_to_flags()
> > (~2010), yet I can't figure out how writes would be handled back
> > then that didn't result in a p2m fault and crashing of the domain.
>
> Was that handled at all before said change?

Not really AFAICT, hence me wondering how write accesses to r/o MMIO
regions by (non-priv) domains were supposed to be handled. Was the
expectation that those writes trigger a p2m violation, thus crashing
the domain?

> mmio_ro_do_page_fault() was (and still is) invoked for the hardware
> domain only, and quite likely the need for handling (discarding)
> writes for PVHv1 had been overlooked until someone was hit by the
> lack thereof.

I see, I didn't realize r/o MMIO writes were only handled for the PV
hardware domain. I could arguably do the same for HVM in
hvm_hap_nested_page_fault().

Not sure whether the subpage stuff is supposed to be functional for
domains different than the hardware domain? It seems to be available
to the hardware domain only for PV guests, while for HVM it's
available to all domains:

    is_hardware_domain(currd) || subpage_mmio_write_accept(mfn, gla)

in hvm_hap_nested_page_fault().
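For reference, the full check looks roughly like this in current
staging (paraphrased sketch, the exact shape may differ):

    /*
     * Sketch of the r/o MMIO write gate in hvm_hap_nested_page_fault():
     * the hardware domain always gets the write emulated, while other
     * domains only do when subpage_mmio_write_accept() claims the page.
     */
    if ( (p2mt == p2m_mmio_direct) && npfec.write_access && npfec.present &&
         (is_hardware_domain(currd) || subpage_mmio_write_accept(mfn, gla)) &&
         (hvm_emulate_one_mmio(mfn_x(mfn), gla) == X86EMUL_OKAY) )
    {
        rc = 1;
        goto out_put_gfn;
    }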
> > I'm happy to look at other ways of handling this, but given there's
> > current logic for handling accesses to read-only regions in
> > hvm_hap_nested_page_fault() I think re-using that was the best way
> > to also handle accesses to MMIO read-only regions.
> >
> > Arguably it would already be the case that for other reasons Xen
> > would need to emulate an instruction that accesses a read-only MMIO
> > region?
>
> Aiui hvm_translate_get_page() will yield HVMTRANS_bad_gfn_to_mfn for
> p2m_mmio_direct (after all, "direct" means we expect no emulation is
> needed; while arguably wrong for the introspection case, I'm not sure
> that and pass-through actually go together). Hence it's down to
> hvmemul_linear_mmio_access() -> hvmemul_phys_mmio_access() ->
> hvmemul_do_mmio_buffer() -> hvmemul_do_io_buffer() -> hvmemul_do_io(),
> which means that if hvm_io_intercept() can't handle it, the access
> will be forwarded to the responsible DM, or be "processed" by the
> internal null handler.
>
> Given this, perhaps what you do is actually fine. At the same time
> note how several functions in hvm/emulate.c simply fail upon
> encountering p2m_mmio_direct. These are all REP handlers though, so
> the main emulator would then try emulating the insn the non-REP way.

I'm open to alternative ways of handling such accesses, I just used
what seemed more natural in the context of hvm_hap_nested_page_fault().
Emulation of r/o MMIO accesses failing wouldn't be an issue from Xen's
perspective, that would "just" result in the guest getting a #GP
injected.

Would you like me to add some of your reasoning above to the commit
message?

Thanks, Roger.
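PS: to double-check I follow the forwarding path you describe, the
tail of hvmemul_do_io() would do roughly the following (simplified
sketch from memory; the exact helper names differ between versions):

    rc = hvm_io_intercept(&p);

    if ( rc == X86EMUL_UNHANDLEABLE )
    {
        /* No internal handler claimed the access: look for an ioreq server. */
        struct ioreq_server *s = ioreq_server_select(currd, &p);

        if ( s == NULL )
            /*
             * No DM registered for the range either: "process" the
             * access with the internal null handler, i.e. discard
             * writes and return all ones for reads.
             */
            rc = hvm_process_io_intercept(&null_handler, &p);
        else
            /* Forward the access to the device model owning the range. */
            rc = ioreq_send(s, &p, 0);
    }

So a write to a r/o p2m_mmio_direct page that no internal handler
accepts would indeed end up discarded by the null handler in the
absence of a DM.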