[xen staging] x86/HVM: restrict use of pinned cache attributes as well as associated flushing
commit d63b2c60f7e8fd5801d2557812a81da71f18d490
Author:     Jan Beulich <jbeulich@xxxxxxxx>
AuthorDate: Wed Jun 18 09:25:51 2025 +0200
Commit:     Jan Beulich <jbeulich@xxxxxxxx>
CommitDate: Wed Jun 18 09:25:51 2025 +0200

    x86/HVM: restrict use of pinned cache attributes as well as associated flushing

    We don't permit use of uncachable memory types elsewhere unless a domain
    meets certain criteria.  Enforce this also during registration of pinned
    cache attribute ranges.

    Furthermore restrict cache flushing to just
    - registration of uncachable ranges,
    - de-registration of cachable ranges.

    While there, also (mainly by calling memory_type_changed())
    - take CPU self-snoop as well as IOMMU snoop into account (albeit the
      latter still is a global property rather than a per-domain one),
    - avoid flushes when the domain isn't running yet (which ought to be the
      common case).

    Reported-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Reviewed-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
---
 xen/arch/x86/hvm/mtrr.c | 36 +++++++++++++++++++++---------------
 1 file changed, 21 insertions(+), 15 deletions(-)

diff --git a/xen/arch/x86/hvm/mtrr.c b/xen/arch/x86/hvm/mtrr.c
index 99e3b4062b..e0bef8c82d 100644
--- a/xen/arch/x86/hvm/mtrr.c
+++ b/xen/arch/x86/hvm/mtrr.c
@@ -582,6 +582,7 @@ int hvm_set_mem_pinned_cacheattr(struct domain *d, uint64_t gfn_start,
 {
     struct hvm_mem_pinned_cacheattr_range *range, *newr;
     unsigned int nr = 0;
+    bool flush = false;
     int rc = 1;
 
     if ( !is_hvm_domain(d) )
@@ -605,31 +606,34 @@ int hvm_set_mem_pinned_cacheattr(struct domain *d, uint64_t gfn_start,
                 type = range->type;
                 call_rcu(&range->rcu, free_pinned_cacheattr_entry);
 
-                p2m_memory_type_changed(d);
                 switch ( type )
                 {
-                case X86_MT_UCM:
+                case X86_MT_WB:
+                case X86_MT_WP:
+                case X86_MT_WT:
                     /*
-                     * For EPT we can also avoid the flush in this case;
-                     * see epte_get_entry_emt().
+                     * Flush since we don't know what the cachability is going
+                     * to be.
                      */
-                    if ( hap_enabled(d) && cpu_has_vmx )
-                case X86_MT_UC:
-                        break;
-                    /* fall through */
-                default:
-                    flush_all(FLUSH_CACHE_EVICT);
+                    flush = true;
                     break;
                 }
 
-                return 0;
+                rc = 0;
+                goto finish;
             }
         domain_unlock(d);
         return -ENOENT;
 
     case X86_MT_UCM:
     case X86_MT_UC:
-    case X86_MT_WB:
     case X86_MT_WC:
+        if ( !is_iommu_enabled(d) && !cache_flush_permitted(d) )
+            return -EPERM;
+        /* Flush since we don't know what the cachability was. */
+        flush = true;
+        break;
+
+    case X86_MT_WB:
     case X86_MT_WP:
     case X86_MT_WT:
         break;
@@ -682,9 +686,11 @@ int hvm_set_mem_pinned_cacheattr(struct domain *d, uint64_t gfn_start,
 
     xfree(newr);
 
-    p2m_memory_type_changed(d);
-    if ( type != X86_MT_WB )
-        flush_all(FLUSH_CACHE_EVICT);
+ finish:
+    if ( flush )
+        memory_type_changed(d);
+    else if ( d->vcpu && d->vcpu[0] )
+        p2m_memory_type_changed(d);
 
     return rc;
 }
--
generated by git-patchbot for /home/xen/git/xen.git#staging
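
As a reading aid, the flush policy the patch introduces can be condensed into a
small standalone model.  The snippet below is only an illustrative sketch: the
enum and helper names (mem_type, reg_needs_flush, dereg_needs_flush) are
invented for the example, while in Xen the actual decision is taken in
hvm_set_mem_pinned_cacheattr(), additionally gated by is_iommu_enabled() /
cache_flush_permitted() and carried out via memory_type_changed() as shown in
the diff above.

/*
 * Illustrative, standalone model of the flush policy; not Xen code.
 * The type and function names here are stand-ins for the example only.
 */
#include <stdbool.h>
#include <stdio.h>

enum mem_type { MT_UC, MT_WC, MT_WT, MT_WP, MT_WB, MT_UCM };

/* Registering an uncachable range: flush, since we don't know what the
 * cachability of the range was before. */
static bool reg_needs_flush(enum mem_type t)
{
    return t == MT_UC || t == MT_UCM || t == MT_WC;
}

/* Removing a cachable range: flush, since we don't know what the
 * cachability is going to be afterwards. */
static bool dereg_needs_flush(enum mem_type t)
{
    return t == MT_WB || t == MT_WP || t == MT_WT;
}

int main(void)
{
    printf("register UC -> flush=%d\n", reg_needs_flush(MT_UC));    /* 1 */
    printf("register WB -> flush=%d\n", reg_needs_flush(MT_WB));    /* 0 */
    printf("remove   WB -> flush=%d\n", dereg_needs_flush(MT_WB));  /* 1 */
    printf("remove   UC -> flush=%d\n", dereg_needs_flush(MT_UC));  /* 0 */
    return 0;
}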