Re: [PATCH v2] x86/HVM: restrict use of pinned cache attributes as well as associated flushing
On Wed, Jun 04, 2025 at 11:48:00AM +0200, Jan Beulich wrote:
> We don't permit use of uncachable memory types elsewhere unless a domain
> meets certain criteria. Enforce this also during registration of pinned
> cache attribute ranges.
>
> Furthermore restrict cache flushing to just uncachable range registration.
> While there, also (mainly by calling memory_type_changed())
> - take CPU self-snoop as well as IOMMU snoop into account (albeit the
>   latter still is a global property rather than a per-domain one),
> - avoid flushes when the domain isn't running yet (which ought to be the
>   common case).
>
> Reported-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
> ---
> At the expense of yet larger a diff it would be possible to get away
> without any "goto", by moving the whole "new entry" handling into the
> switch(). Personally I'd prefer that, but the larger diff may be
> unwelcome.
>
> I have to admit that I can't spot what part of epte_get_entry_emt() the
> comment refers to that is being deleted. The function does use
> hvm_get_mem_pinned_cacheattr(), yes, but there's nothing there that talks
> about cache flushes (and their avoiding) in any way.
>
> Is it really sensible to add/remove ranges once the guest is already
> running? (If it is, limiting the scope of the flush would be nice, but
> would require knowing dirtiness for the domain wrt the caches, which
> currently we don't track.)
>
> This is kind of amending XSA-428.
> ---
> v2: Use memory_type_changed() and conditionalize call to
>     p2m_memory_type_changed().
>
> --- a/xen/arch/x86/hvm/mtrr.c
> +++ b/xen/arch/x86/hvm/mtrr.c
> @@ -582,6 +582,7 @@ int hvm_set_mem_pinned_cacheattr(struct
>  {
>      struct hvm_mem_pinned_cacheattr_range *range, *newr;
>      unsigned int nr = 0;
> +    bool flush = false;
>      int rc = 1;
>
>      if ( !is_hvm_domain(d) )
> @@ -605,31 +606,35 @@ int hvm_set_mem_pinned_cacheattr(struct
>
>                  type = range->type;
>                  call_rcu(&range->rcu, free_pinned_cacheattr_entry);
> -                p2m_memory_type_changed(d);
>                  switch ( type )
>                  {
> -                case X86_MT_UCM:
> +                case X86_MT_WB:
> +                case X86_MT_WP:
> +                case X86_MT_WT:
>                      /*
> -                     * For EPT we can also avoid the flush in this case;
> -                     * see epte_get_entry_emt().
> +                     * Flush since we don't know what the cachability is going
> +                     * to be.
>                      */
> -                    if ( hap_enabled(d) && cpu_has_vmx )
> -                case X86_MT_UC:
> -                        break;
> -                    /* fall through */
> -                default:
> -                    flush_all(FLUSH_CACHE);
> +                    if ( is_iommu_enabled(d) || cache_flush_permitted(d) )
> +                        flush = true;

Is the check here required? memory_type_changed() will already check for
is_iommu_enabled() and cache_flush_permitted(), and hence you could just
set flush to true unconditionally here IMO.

>                      break;
>                  }
> -                return 0;
> +                rc = 0;
> +                goto finish;
>              }
>          domain_unlock(d);
>          return -ENOENT;
>
>      case X86_MT_UCM:
>      case X86_MT_UC:
> -    case X86_MT_WB:
>      case X86_MT_WC:
> +        /* Flush since we don't know what the cachability was. */
> +        if ( !is_iommu_enabled(d) && !cache_flush_permitted(d) )
> +            return -EPERM;
> +        flush = true;
> +        break;
> +
> +    case X86_MT_WB:
>      case X86_MT_WP:
>      case X86_MT_WT:
>          break;
> @@ -682,9 +687,11 @@ int hvm_set_mem_pinned_cacheattr(struct
>
>      xfree(newr);
>
> -    p2m_memory_type_changed(d);
> -    if ( type != X86_MT_WB )
> -        flush_all(FLUSH_CACHE);
> + finish:
> +    if ( flush )
> +        memory_type_changed(d);
> +    else if ( d->vcpu && d->vcpu[0] )
> +        p2m_memory_type_changed(d);

FWIW, I would just call memory_type_changed() unconditionally regardless
of the change. We suspect the hypercall is only used at domain creation
time (where memory_type_changed() won't do a cache flush anyway).

Thanks, Roger.
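
For context, the guard Roger refers to looks roughly like the sketch below. This is an assumption reconstructed from the checks named in this thread (is_iommu_enabled(), cache_flush_permitted(), and the d->vcpu test visible in the patch), not a verbatim copy of Xen's xen/arch/x86/mm.c:

/*
 * Sketch only: memory_type_changed() is assumed to bail out on its own
 * when neither an IOMMU nor cache-flush rights are present, and when the
 * domain has no vCPUs yet (i.e. during domain construction), so callers
 * would not need to repeat those checks.
 */
void memory_type_changed(struct domain *d)
{
    if ( (is_iommu_enabled(d) || cache_flush_permitted(d)) &&
         d->vcpu && d->vcpu[0] )
    {
        p2m_memory_type_changed(d);
        flush_all(FLUSH_CACHE);
    }
}

Under that assumption, the finish: path in the patch could collapse to a single unconditional memory_type_changed(d) call, with the "no flush during domain creation" behaviour falling out of the guard above rather than out of the caller's flush flag.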