Re: [PATCH v3 2/4] x86/hvm: fix handling of accesses to partial r/o MMIO pages

On Thu, Apr 17, 2025 at 05:38:54PM +0200, Jan Beulich wrote:
> On 17.04.2025 17:25, Roger Pau Monne wrote:
> > --- a/xen/arch/x86/hvm/emulate.c
> > +++ b/xen/arch/x86/hvm/emulate.c
> > @@ -370,7 +370,15 @@ static int hvmemul_do_io(
> >          /* If there is no suitable backing DM, just ignore accesses */
> >          if ( !s )
> >          {
> > -            if ( is_mmio && is_hardware_domain(currd) )
> > +            if ( is_mmio && is_hardware_domain(currd) &&
> > +                 /*
> > +                  * Do not attempt to fixup write accesses to r/o MMIO regions,
> > +                  * they are expected to be terminated by the null handler
> > +                  * below.
> > +                  */
> > +                 (!rangeset_contains_singleton(mmio_ro_ranges,
> > +                                               PFN_DOWN(addr)) ||
> > +                  dir == IOREQ_READ) )
> 
> These two would better be swapped, for the cheap one to be done first.

Can swap, thanks for pointing out.
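
FWIW, with the swap the hunk would read as below (just a sketch of the
reordering, so the cheap direction check short-circuits before the
rangeset lookup):

    if ( is_mmio && is_hardware_domain(currd) &&
         /*
          * Do not attempt to fixup write accesses to r/o MMIO regions,
          * they are expected to be terminated by the null handler
          * below.
          */
         (dir == IOREQ_READ ||
          !rangeset_contains_singleton(mmio_ro_ranges,
                                       PFN_DOWN(addr))) )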

> > --- /dev/null
> > +++ b/xen/arch/x86/hvm/mmio.c
> > @@ -0,0 +1,125 @@
> > +/* SPDX-License-Identifier: GPL-2.0-only */
> > +/*
> > + * MMIO related routines.
> > + *
> > + * Copyright (c) 2025 Cloud Software Group
> > + */
> > +
> > +#include <xen/io.h>
> > +#include <xen/mm.h>
> > +
> > +#include <asm/p2m.h>
> > +
> > +static int cf_check subpage_mmio_accept(struct vcpu *v, unsigned long addr)
> > +{
> > +    p2m_type_t t;
> > +    mfn_t mfn = get_gfn_query_unlocked(v->domain, PFN_DOWN(addr), &t);
> > +
> > +    return !mfn_eq(mfn, INVALID_MFN) && t == p2m_mmio_direct &&
> > +           subpage_mmio_find_page(mfn);
> > +}
> > +
> > +/*
> > + * The guest has read access to those regions, and consequently read accesses
> > + * shouldn't fault.  However read-modify-write operations may take this path,
> > + * so handling of reads is necessary.
> > + */
> > +static int cf_check subpage_mmio_read(
> > +    struct vcpu *v, unsigned long addr, unsigned int len, unsigned long *data)
> > +{
> > +    struct domain *d = v->domain;
> > +    unsigned long gfn = PFN_DOWN(addr);
> > +    p2m_type_t t;
> > +    mfn_t mfn;
> > +    struct subpage_ro_range *entry;
> > +    volatile void __iomem *mem;
> > +
> > +    *data = ~0UL;
> > +
> > +    if ( !IS_ALIGNED(len | addr, len) )
> 
> What's the point of doing the | ? len can't be misaligned with itself?

Hm, it's the same form as used in mmio_ro_emulated_write(); I assumed
it was to catch illegal access lengths, like 3.
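
Expanding the macro makes that explicit (assuming IS_ALIGNED() here is
the usual (((x) & ((a) - 1)) == 0) form):

    /*
     * IS_ALIGNED(len | addr, len)
     *     == (((len | addr) & (len - 1)) == 0)
     *
     * For a non-power-of-2 len, len & (len - 1) is already non-zero
     * (e.g. len == 3: 3 & 2 == 2), so the check fails for any addr.
     * For a power-of-2 len it reduces to the plain alignment test
     * addr & (len - 1) == 0.
     */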

> > --- a/xen/arch/x86/include/asm/mm.h
> > +++ b/xen/arch/x86/include/asm/mm.h
> > @@ -554,6 +554,18 @@ int cf_check mmio_ro_emulated_write(
> >      enum x86_segment seg, unsigned long offset, void *p_data,
> >      unsigned int bytes, struct x86_emulate_ctxt *ctxt);
> >  
> > +/* r/o MMIO subpage access handlers. */
> > +struct subpage_ro_range {
> > +    struct list_head list;
> > +    mfn_t mfn;
> > +    void __iomem *mapped;
> > +    DECLARE_BITMAP(ro_elems, PAGE_SIZE / MMIO_RO_SUBPAGE_GRAN);
> > +};
> > +struct subpage_ro_range *subpage_mmio_find_page(mfn_t mfn);
> > +void __iomem *subpage_mmio_map_page(struct subpage_ro_range *entry);
> 
> I notice you didn't move the __iomem, which - as indicated - I agree with,
> but Andrew didn't. Did you sort this with him privately?

I will raise it with him; seeing as he didn't reply to your last
email, I assumed he was OK with the reasoning?

Thanks, Roger.