Re: [PATCH v4 2/4] x86/hvm: fix handling of accesses to partial r/o MMIO pages
On 29.04.2025 12:12, Roger Pau Monne wrote:
> The current logic to handle accesses to MMIO pages partially read-only is
> based on the (now removed) logic used to handle accesses to the r/o MMCFG
> region(s) for PVH v1 dom0. However that has issues when running on AMD
> hardware, as in that case the guest linear address that triggered the fault
> is not provided as part of the VM exit. This caused
> mmio_ro_emulated_write() to always fail before calling
> subpage_mmio_write_emulate() when running on AMD and called from an HVM
> context.
>
> Take a different approach and convert the handling of partial read-only
> MMIO page accesses into an HVM MMIO ops handler, as that's the more natural
> way to handle this kind of emulation for HVM domains.
>
> This allows getting rid of hvm_emulate_one_mmio() and its single call site
> in hvm_hap_nested_page_fault(). As part of the fix, r/o MMIO accesses are
> now handled by handle_mmio_with_translation(), re-using the same logic that
> was used for the other read-only types covered by p2m_is_discard_write(). The
> usage of emulation for faulting p2m_mmio_direct types is limited to
> addresses in the r/o MMIO range. The page present check is dropped as type
> p2m_mmio_direct must have the present bit set in the PTE.
>
> Note a small adjustment is needed to the `pf-fixup` dom0 PVH logic: avoid
> attempting to fixup faults resulting from write accesses to read-only MMIO
> regions, as handling of those accesses is now done by handle_mmio().
>
> Fixes: 33c19df9a5a0 ('x86/PCI: intercept accesses to RO MMIO from dom0s in HVM containers')
> Signed-off-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
with two nits:
> --- /dev/null
> +++ b/xen/arch/x86/hvm/mmio.c
> @@ -0,0 +1,125 @@
> +/* SPDX-License-Identifier: GPL-2.0-only */
> +/*
> + * MMIO related routines.
> + *
> + * Copyright (c) 2025 Cloud Software Group
> + */
> +
> +#include <xen/io.h>
> +#include <xen/mm.h>
> +
> +#include <asm/p2m.h>
> +
> +static int cf_check subpage_mmio_accept(struct vcpu *v, unsigned long addr)
> +{
> +    p2m_type_t t;
> +    mfn_t mfn = get_gfn_query_unlocked(v->domain, PFN_DOWN(addr), &t);
> +
> +    return !mfn_eq(mfn, INVALID_MFN) && t == p2m_mmio_direct &&
> +           subpage_mmio_find_page(mfn);
> +}
> +
> +/*
> + * The guest has read access to those regions, and consequently read accesses
> + * shouldn't fault. However read-modify-write operations may take this path,
> + * so handling of reads is necessary.
> + */
> +static int cf_check subpage_mmio_read(
> +    struct vcpu *v, unsigned long addr, unsigned int len, unsigned long *data)
> +{
> +    struct domain *d = v->domain;
> +    unsigned long gfn = PFN_DOWN(addr);
> +    p2m_type_t t;
> +    mfn_t mfn;
> +    struct subpage_ro_range *entry;
> +    volatile void __iomem *mem;
> +
> +    *data = ~0UL;
> +
> +    if ( !len || len > 8 || len & (len - 1) || !IS_ALIGNED(addr, len) )
The & expression wants parenthesizing against the ||s.
> +    {
> +        gprintk(XENLOG_ERR,
> +                "ignoring unaligned read to r/o MMIO subpage %#lx size %u\n",
It's not just unaligned, but also oversized or zero-size now. Maybe better
drop the word?
Both similarly applicable to the write path.
Jan
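
[Editorial illustration, not part of the thread: the parenthesization Jan asks for, extracted into a standalone checker. The function name `invalid_access` is hypothetical; the logic mirrors the patch's length/alignment test, with `IS_ALIGNED(addr, len)` expanded to its usual `!(addr & (len - 1))` form.]

```c
#include <stdbool.h>
#include <stdint.h>

/*
 * Sketch of the validation in subpage_mmio_read()/write(), with the
 * bitwise power-of-two test parenthesized against the logical ORs.
 * Rejects accesses that are zero-length, wider than 8 bytes, not a
 * power of two in size, or not naturally aligned.
 */
static bool invalid_access(uint64_t addr, unsigned int len)
{
    return !len || len > 8 || (len & (len - 1)) || (addr & (len - 1));
}
```

Without the extra parentheses the code still parses as intended, since `&` binds tighter than `||`, but Xen style prefers making the precedence explicit when mixing bitwise and logical operators.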