Re: [PATCH] x86/HVM: restrict use of pinned cache attributes as well as associated flushing
On Wed, Mar 22, 2023 at 07:50:09AM +0100, Jan Beulich wrote:
> We don't permit use of uncachable memory types elsewhere unless a domain
> meets certain criteria. Enforce this also during registration of pinned
> cache attribute ranges.
>
> Furthermore restrict cache flushing to just uncachable range registration.
> While there, also
> - take CPU self-snoop as well as IOMMU snoop into account (albeit the
> latter still is a global property rather than a per-domain one),
> - avoid flushes when the domain isn't running yet (which ought to be the
> common case).
>
> Reported-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
> ---
> At the expense of a yet larger diff, it would be possible to get away
> without any "goto" by moving the whole "new entry" handling into the
> switch(). Personally I'd prefer that, but the larger diff may be
> unwelcome.
>
> I have to admit that I can't spot which part of epte_get_entry_emt()
> the comment being deleted refers to. The function does use
> hvm_get_mem_pinned_cacheattr(), yes, but nothing there talks about
> cache flushes (or avoiding them) in any way.
>
> Is it really sensible to add/remove ranges once the guest is already
> running? (If it is, limiting the scope of the flush would be nice, but
> that would require tracking the domain's dirtiness wrt the caches,
> which currently we don't do.)
>
> This is kind of amending XSA-428.
>
> --- a/xen/arch/x86/hvm/mtrr.c
> +++ b/xen/arch/x86/hvm/mtrr.c
> @@ -589,6 +589,7 @@ int hvm_set_mem_pinned_cacheattr(struct
> {
> struct hvm_mem_pinned_cacheattr_range *range, *newr;
> unsigned int nr = 0;
> + bool flush = false;
> int rc = 1;
>
> if ( !is_hvm_domain(d) )
> @@ -612,31 +613,35 @@ int hvm_set_mem_pinned_cacheattr(struct
>
> type = range->type;
> call_rcu(&range->rcu, free_pinned_cacheattr_entry);
> - p2m_memory_type_changed(d);
> switch ( type )
> {
> - case X86_MT_UCM:
> + case X86_MT_WB:
> + case X86_MT_WP:
> + case X86_MT_WT:
> /*
> - * For EPT we can also avoid the flush in this case;
> - * see epte_get_entry_emt().
> + * Flush since we don't know what the cachability is going
> + * to be.
> */
> - if ( hap_enabled(d) && cpu_has_vmx )
> - case X86_MT_UC:
> - break;
> - /* fall through */
> - default:
> - flush_all(FLUSH_CACHE);
> + if ( is_iommu_enabled(d) || cache_flush_permitted(d) )
> + flush = true;
> break;
> }
> - return 0;
> + rc = 0;
> + goto finish;
> }
> domain_unlock(d);
> return -ENOENT;
>
> case X86_MT_UCM:
> case X86_MT_UC:
> - case X86_MT_WB:
> case X86_MT_WC:
> + /* Flush since we don't know what the cachability was. */
> + if ( !is_iommu_enabled(d) && !cache_flush_permitted(d) )
> + return -EPERM;
> + flush = true;
> + break;
> +
> + case X86_MT_WB:
> case X86_MT_WP:
> case X86_MT_WT:
> break;
> @@ -689,8 +694,12 @@ int hvm_set_mem_pinned_cacheattr(struct
>
> xfree(newr);
>
> + finish:
> p2m_memory_type_changed(d);
> - if ( type != X86_MT_WB )
> +
> + if ( flush && d->creation_finished &&
> + (!boot_cpu_has(X86_FEATURE_XEN_SELFSNOOP) ||
> + (is_iommu_enabled(d) && !iommu_snoop)) )
> flush_all(FLUSH_CACHE);
I think it would be better if we could add those checks to
memory_type_changed() rather than open-coding them here, and just call
memory_type_changed() then, which would also avoid the goto AFAICT.
Thanks, Roger.