[Xen-devel] [PATCH] x86/EPT: flush cache when (potentially) limiting cachability
While generally such guest side changes ought to be followed by guest
initiated flushes, we're flushing the cache under similar conditions
elsewhere (e.g. when the guest sets CR0.CD), so let's do so here too.

Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
---
Note that this goes on top of the still pending series titled
"x86/EPT: miscellaneous further fixes to EMT determination" (see
http://lists.xenproject.org/archives/html/xen-devel/2014-04/msg02932.html).

It would need to be determined whether we should gate all of these
flushes on need_iommu() and/or cache_flush_permitted(). Otoh the
changes they hang off of are infrequent, so there's no severe
performance penalty.

--- a/xen/arch/x86/hvm/mtrr.c
+++ b/xen/arch/x86/hvm/mtrr.c
@@ -649,8 +649,11 @@ int32_t hvm_set_mem_pinned_cacheattr(
         {
             rcu_read_unlock(&pinned_cacheattr_rcu_lock);
             list_del_rcu(&range->list);
+            type = range->type;
             call_rcu(&range->rcu, free_pinned_cacheattr_entry);
             p2m_memory_type_changed(d);
+            if ( type != PAT_TYPE_UNCACHABLE )
+                flush_all(FLUSH_CACHE);
             return 0;
         }
         rcu_read_unlock(&pinned_cacheattr_rcu_lock);
@@ -697,6 +700,8 @@ int32_t hvm_set_mem_pinned_cacheattr(
     list_add_rcu(&range->list,
                  &d->arch.hvm_domain.pinned_cacheattr_ranges);
     p2m_memory_type_changed(d);
+    if ( type != PAT_TYPE_WRBACK )
+        flush_all(FLUSH_CACHE);
     return 0;
 }
@@ -786,7 +791,10 @@ HVM_REGISTER_SAVE_RESTORE(MTRR, hvm_save
 void memory_type_changed(struct domain *d)
 {
     if ( iommu_enabled && d->vcpu && d->vcpu[0] )
+    {
         p2m_memory_type_changed(d);
+        flush_all(FLUSH_CACHE);
+    }
 }

 int epte_get_entry_emt(struct domain *d, unsigned long gfn, mfn_t mfn,

Attachment:
EPT-flush-cache.patch

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel