
Re: [PATCH] x86/HVM: restrict use of pinned cache attributes as well as associated flushing



On Fri, May 16, 2025 at 08:54:52AM +0200, Jan Beulich wrote:
> On 15.05.2025 12:04, Roger Pau Monné wrote:
> > On Wed, Mar 22, 2023 at 07:50:09AM +0100, Jan Beulich wrote:
> >> We don't permit use of uncachable memory types elsewhere unless a domain
> >> meets certain criteria. Enforce this also during registration of pinned
> >> cache attribute ranges.
> >>
> >> Furthermore restrict cache flushing to just uncachable range registration.
> >> While there, also
> >> - take CPU self-snoop as well as IOMMU snoop into account (albeit the
> >>   latter still is a global property rather than a per-domain one),
> >> - avoid flushes when the domain isn't running yet (which ought to be the
> >>   common case).
> >>
> >> Reported-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
> >> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
> >> ---
> >> At the expense of yet larger a diff it would be possible to get away
> >> without any "goto", by moving the whole "new entry" handling into the
> >> switch(). Personally I'd prefer that, but the larger diff may be
> >> unwelcome.
> >>
> >> I have to admit that I can't spot what part of epte_get_entry_emt() the
> >> comment refers to that is being deleted. The function does use
> >> hvm_get_mem_pinned_cacheattr(), yes, but there's nothing there that talks
> >> about cache flushes (and their avoiding) in any way.
> >>
> >> Is it really sensible to add/remove ranges once the guest is already
> >> running? (If it is, limiting the scope of the flush would be nice, but
> >> would require knowing dirtiness for the domain wrt the caches, which
> >> currently we don't track.)
> >>
> >> This is kind of amending XSA-428.
> >>
> >> --- a/xen/arch/x86/hvm/mtrr.c
> >> +++ b/xen/arch/x86/hvm/mtrr.c
> >> @@ -589,6 +589,7 @@ int hvm_set_mem_pinned_cacheattr(struct
> >>  {
> >>      struct hvm_mem_pinned_cacheattr_range *range, *newr;
> >>      unsigned int nr = 0;
> >> +    bool flush = false;
> >>      int rc = 1;
> >>  
> >>      if ( !is_hvm_domain(d) )
> >> @@ -612,31 +613,35 @@ int hvm_set_mem_pinned_cacheattr(struct
> >>  
> >>                  type = range->type;
> >>                  call_rcu(&range->rcu, free_pinned_cacheattr_entry);
> >> -                p2m_memory_type_changed(d);
> >>                  switch ( type )
> >>                  {
> >> -                case X86_MT_UCM:
> >> +                case X86_MT_WB:
> >> +                case X86_MT_WP:
> >> +                case X86_MT_WT:
> >>                      /*
> >> -                     * For EPT we can also avoid the flush in this case;
> >> -                     * see epte_get_entry_emt().
> >> +                     * Flush since we don't know what the cachability is going
> >> +                     * to be.
> >>                       */
> >> -                    if ( hap_enabled(d) && cpu_has_vmx )
> >> -                case X86_MT_UC:
> >> -                        break;
> >> -                    /* fall through */
> >> -                default:
> >> -                    flush_all(FLUSH_CACHE);
> >> +                    if ( is_iommu_enabled(d) || cache_flush_permitted(d) )
> >> +                        flush = true;
> >>                      break;
> >>                  }
> >> -                return 0;
> >> +                rc = 0;
> >> +                goto finish;
> >>              }
> >>          domain_unlock(d);
> >>          return -ENOENT;
> >>  
> >>      case X86_MT_UCM:
> >>      case X86_MT_UC:
> >> -    case X86_MT_WB:
> >>      case X86_MT_WC:
> >> +        /* Flush since we don't know what the cachability was. */
> >> +        if ( !is_iommu_enabled(d) && !cache_flush_permitted(d) )
> >> +            return -EPERM;
> >> +        flush = true;
> >> +        break;
> >> +
> >> +    case X86_MT_WB:
> >>      case X86_MT_WP:
> >>      case X86_MT_WT:
> >>          break;
> >> @@ -689,8 +694,12 @@ int hvm_set_mem_pinned_cacheattr(struct
> >>  
> >>      xfree(newr);
> >>  
> >> + finish:
> >>      p2m_memory_type_changed(d);
> >> -    if ( type != X86_MT_WB )
> >> +
> >> +    if ( flush && d->creation_finished &&
> >> +         (!boot_cpu_has(X86_FEATURE_XEN_SELFSNOOP) ||
> >> +          (is_iommu_enabled(d) && !iommu_snoop)) )
> >>          flush_all(FLUSH_CACHE);
> > 
> > I think it would be better if we could add those checks to
> > memory_type_changed() rather than open-coding them here, and just call
> > memory_type_changed() then, which would also avoid the goto AFAICT.
> 
> Hmm, with this last remark, what does "those checks" cover then?

I have patches I was going to send today (after some overnight
testing) that do:

    if ( cache_flush_permitted(d) &&
         d->vcpu && d->vcpu[0] && p2m_memory_type_changed(d) &&
         /*
          * Do the p2m type-change, but skip the cache flush if the domain is
          * not yet running.  The check for creation_finished must strictly be
          * done after the call to p2m_memory_type_changed().
          */
         d->creation_finished &&
         /*
          * The cache flush should be done if either: CPU doesn't have
          * self-snoop in which case there could be aliases left in the cache,
          * or IOMMUs cannot force all DMA device accesses to be snooped.
          */
         (!boot_cpu_has(X86_FEATURE_XEN_SELFSNOOP) ||
          (is_iommu_enabled(d) && !iommu_snoop)) )
    {
        flush_all(FLUSH_CACHE);
    }

This is an attempt to limit the cache flushing done in
memory_type_changed().
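
Note that this relies on p2m_memory_type_changed() gaining a return
value (otherwise the call wouldn't be usable in the condition above);
in the patches it becomes roughly:

    /*
     * Sketch of the assumed adjusted prototype: report whether any
     * memory type actually changed, so that the cache flush can be
     * skipped entirely when nothing did.
     */
    bool p2m_memory_type_changed(struct domain *d);

The creation_finished check deliberately sits after that call and
relies on && short-circuiting: the p2m update still happens for a
not-yet-running domain, only the flush is skipped.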

> I first
> read it as meaning the conditions in just this if(), but the "goto" is
> needed for a different reason.

Maybe I'm missing something, but couldn't
hvm_set_mem_pinned_cacheattr() just call memory_type_changed() (with
the proposed checks above added) and then return, instead of doing a
goto?

Thanks, Roger.
