Re: [PATCH v9 1/3] x86/tlb: introduce a flush HVM ASIDs flag
On 14.04.2020 13:19, Roger Pau Monné wrote:
> On Tue, Apr 14, 2020 at 12:13:04PM +0200, Jan Beulich wrote:
>> On 14.04.2020 12:02, Roger Pau Monné wrote:
>>> That seems nice; we would have to be careful, however, as reducing the
>>> number of ASID/VPID flushes could uncover issues in the existing code.
>>> I guess you mean something like:
>>>
>>> static inline void guest_flush_tlb_mask(const struct domain *d,
>>>                                         const cpumask_t *mask)
>>> {
>>>     flush_mask(mask, (is_pv_domain(d) || shadow_mode_enabled(d) ? FLUSH_TLB
>>>                                                                  : 0) |
>>>                      (is_hvm_domain(d) && cpu_has_svm ? FLUSH_HVM_ASID_CORE
>>>                                                        : 0));
>>> }
>>
>> Almost - is_hvm_domain(d) && cpu_has_svm seems too wide to me. I'd
>> rather use hap_enabled() && cpu_has_svm, which effectively means NPT.
>> Or am I overlooking a need to do ASID flushes also in shadow mode?
>
> I think so. I've used is_hvm_domain in order to cover HVM domains
> running in shadow mode on AMD hardware, as I think those also need the
> ASID flushes.

I'm unconvinced: the entire "TLB Management" section of the PM gives
the impression that "ordinary" TLB flushing covers all ASIDs anyway.
It's not stated explicitly anywhere (that I could find), though.
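
To be concrete, something like the below is what I have in mind (an
untested sketch only, reusing the identifiers from your snippet and
assuming hap_enabled() takes the domain as elsewhere in the tree;
whether shadow mode can really do without the ASID flush is of course
the question above):

static inline void guest_flush_tlb_mask(const struct domain *d,
                                        const cpumask_t *mask)
{
    /*
     * Request an ASID flush only for HAP (i.e. NPT) guests on
     * SVM-capable hardware; PV and shadow-mode guests get a plain
     * TLB flush.
     */
    flush_mask(mask, (is_pv_domain(d) || shadow_mode_enabled(d) ? FLUSH_TLB
                                                                 : 0) |
                     (hap_enabled(d) && cpu_has_svm ? FLUSH_HVM_ASID_CORE
                                                     : 0));
}
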
>> Also I'd suggest calculating the flags up front, to avoid calling
>> flush_mask() in the first place in case neither bit is set (as with
>> EPT).
>>
>>> I think this should work, but I would rather do it in a separate
>>> patch.
>>
>> Yes - just like the full removal of them that was originally (and, as
>> you validly say, wrongly) suggested, putting this in a separate patch
>> would indeed seem better.
>
> Would you like me to resend with the requested fix to
> paging_log_dirty_range (i.e. drop the FLUSH_TLB and only call
> flush_mask for HAP guests running on AMD) then?

Well, ideally I'd see that function also make use of the intended
new helper function, if at all possible (and suitable).
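
For reference, a rough sketch of that intended helper with the flags
calculated up front, so that flush_mask() (and with it the IPI) is
skipped altogether when neither flag is set, e.g. for HAP guests on
EPT (again untested, and with the shadow/SVM question above still
open):

static inline void guest_flush_tlb_mask(const struct domain *d,
                                        const cpumask_t *mask)
{
    unsigned int flags =
        (is_pv_domain(d) || shadow_mode_enabled(d) ? FLUSH_TLB : 0) |
        (hap_enabled(d) && cpu_has_svm ? FLUSH_HVM_ASID_CORE : 0);

    /* Nothing to flush (e.g. HAP on EPT): avoid the call altogether. */
    if ( flags )
        flush_mask(mask, flags);
}

paging_log_dirty_range() could then presumably just call the helper
(with d->dirty_cpumask, if that's indeed the mask wanted there) in
place of its current flush.
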
Jan