Re: [Xen-devel] [PATCH] When flush tlb , we need consider the cpu_online_map
>>> "Jiang, Yunhong" <yunhong.jiang@xxxxxxxxx> 29.03.10 14:00 >>>
>When flush tlb mask, we need consider the cpu_online_map. The same happens to
>ept flush also.

While the idea is certainly correct, doing this more efficiently seems
quite desirable to me, especially when NR_CPUS is large:

>--- a/xen/arch/x86/hvm/vmx/vmx.c Sat Mar 27 16:01:35 2010 +0000
>+++ b/xen/arch/x86/hvm/vmx/vmx.c Mon Mar 29 17:49:51 2010 +0800
>@@ -1235,6 +1235,9 @@ void ept_sync_domain(struct domain *d)
>      * unnecessary extra flushes, to avoid allocating a cpumask_t on the
>      * stack.
>      */
>     d->arch.hvm_domain.vmx.ept_synced = d->domain_dirty_cpumask;
>+    cpus_and(d->arch.hvm_domain.vmx.ept_synced,
>+             d->arch.hvm_domain.vmx.ept_synced,
>+             cpu_online_map);

The added code can be combined with the pre-existing line:

    cpus_and(d->arch.hvm_domain.vmx.ept_synced,
             d->domain_dirty_cpumask, cpu_online_map);

>     on_selected_cpus(&d->arch.hvm_domain.vmx.ept_synced,
>                      __ept_sync_domain, d, 1);
> }
>--- a/xen/arch/x86/smp.c Sat Mar 27 16:01:35 2010 +0000
>+++ b/xen/arch/x86/smp.c Mon Mar 29 17:47:25 2010 +0800
>@@ -229,6 +229,7 @@ void flush_area_mask(const cpumask_t *ma
>     {
>         spin_lock(&flush_lock);
>         cpus_andnot(flush_cpumask, *mask, *cpumask_of(smp_processor_id()));
>+        cpus_and(flush_cpumask, cpu_online_map, flush_cpumask);

Here, first doing the full-mask operation and then clearing the one
extra bit is less overhead:

    cpus_and(flush_cpumask, *mask, cpu_online_map);
    cpu_clear(smp_processor_id(), flush_cpumask);

>         flush_va = va;
>         flush_flags = flags;
>         send_IPI_mask(&flush_cpumask, INVALIDATE_TLB_VECTOR);

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel