Re: [Xen-devel] [PATCH v3] x86/hvm/viridian: flush remote tlbs by hypercall
>>> On 20.11.15 at 14:41, <Paul.Durrant@xxxxxxxxxx> wrote:
>> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
>> Sent: 20 November 2015 12:27
>>
>> While you can't "and" that mask into input_params.vcpu_mask,
>> wouldn't using it allow you to avoid the scratch pCPU mask
>> variable?
>
> I'm not sure. After flushing the ASIDs I guess I could start with the
> domain_dirty_cpumask, remove non-running vcpus from it, and then use
> it as the flush mask. If I do that, I suppose I ought to reset the
> vcpu_dirty_cpumask values too. Something like...
>
>     for_each_vcpu ( currd, v )
>     {
>         if ( v->vcpu_id >= (sizeof(input_params.vcpu_mask) * 8) )
>             break;
>
>         if ( !((input_params.vcpu_mask >> v->vcpu_id) & 1) )
>             continue;
>
>         hvm_asid_flush_vcpu(v);
>         cpumask_clear(v->vcpu_dirty_cpumask);
>
>         if ( v->is_running )
>             cpumask_set_cpu(v->processor, v->vcpu_dirty_cpumask);
>         else
>             cpumask_clear_cpu(v->processor, d->domain_dirty_cpumask);
>     }
>
>     if ( !cpumask_empty(d->domain_dirty_cpumask) )
>         flush_tlb_mask(d->domain_dirty_cpumask);

For one I don't think you should be modifying either of the two dirty
masks here - nothing outside of context switch code does or should:
Both really ought to be redundant with what context switch code
already does, plus the "clear" would even have the potential of
suppressing further flushes if you race with context switch code. And
if you wanted to subtract the vCPU-s you did the ASID flush for from
d->domain_dirty_cpumask, you'd again need a temporary mask, i.e. not
much (if anything) won compared to v5.

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel