
Re: [Xen-devel] [PATCH v5 4/7] x86/tlb: introduce a flush guests TLB flag

On 28.02.2020 17:50, Roger Pau Monné wrote:
> On Fri, Feb 28, 2020 at 05:14:05PM +0100, Jan Beulich wrote:
>> On 19.02.2020 18:43, Roger Pau Monne wrote:
>>> This was previously unconditionally done in each pre_flush call, but
>>> that's not required: HVM guests not using shadow don't require linear
>>> TLB flushes as Xen doesn't modify the guest page tables in that case
>>> (ie: when using HAP).
>> This explains the correctness in one direction. What about the
>> removal of this from the switch_cr3_cr4() path?
> AFAICT that's never used by shadow code to modify cr3 or cr4, and
> hence doesn't require a guest linear TLB flush.

XSA-294 tells me to be very conservative here. It is not necessarily
the direct use by shadow code that matters; toggle_guest_*() isn't
used directly by it, either.

>> And what about
>> our safety assumptions from the ticking of tlbflush_clock,
>> where we then imply that pages e.g. about to be freed can't
>> have any translations left in any TLBs anymore?
> I'm slightly confused. That flush only affects HVM guests linear TLB,
> and hence is not under Xen control unless shadow mode is used. Pages
> to be freed in the HAP case need to be flushed from the EPT/NPT, but
> whether there are references left in the guest TLB to point to that
> gfn really doesn't matter to Xen at all, since the guest is in full
> control of its MMU and TLB in that case.

Ah yes, sorry, I probably didn't get my thinking right around
combined mappings and when they get invalidated.

>>> --- a/xen/include/asm-x86/flushtlb.h
>>> +++ b/xen/include/asm-x86/flushtlb.h
>>> @@ -105,6 +105,8 @@ void switch_cr3_cr4(unsigned long cr3, unsigned long cr4);
>>>  #define FLUSH_VCPU_STATE 0x1000
>>>   /* Flush the per-cpu root page table */
>>>  #define FLUSH_ROOT_PGTBL 0x2000
>>> + /* Flush all HVM guests linear TLB (using ASID/VPID) */
>>> +#define FLUSH_GUESTS_TLB 0x4000
>> For one, the "all" is pretty misleading. A single such request
>> doesn't do this for all vCPU-s of all HVM guests, does it?
> It kind of does because it tickles the pCPU ASID/VPID generation ID,
> so any vCPU scheduled on the selected pCPUs will get a new ASID/VPID
> allocated and thus a clean TLB.
>> I'm
>> also struggling with the 'S' in "GUESTS" - why is it not just
>> "GUEST"? 
> Any guest vCPUs running on the selected pCPUs will get a new ASID/VPID
> ID and thus a clean TLB.

Right, I see. Yet ...

>> I admit the names of the involved functions
>> (hvm_flush_guest_tlbs(), hvm_asid_flush_core()) are somewhat
>> misleading, as they don't actually do any flushing, they merely
>> arrange for what is in the TLB to no longer be able to be used,
>> so giving this a suitable name is "historically" complicated.
>> What if we did away with the hvm_flush_guest_tlbs() wrapper,
>> naming the constant here then after hvm_asid_flush_core(), e.g.
> I'm not opposed to renaming. The comment before the definition was
> also meant to clarify its usage, and hence the explicit mention of

... there's also one more argument for renaming: The present
name doesn't convey at all that this operation is HVM-only
(i.e. PV guests wouldn't have their TLBs [as far as one can
call them "their"] flushed).
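For readers following along, the generation-bump pattern being discussed (what hvm_asid_flush_core() does) can be illustrated outside of Xen. This is a minimal sketch under assumed, illustrative names (struct asid_core, asid_handle_vmenter, etc. are not the actual Xen identifiers): no TLB entry is invalidated directly; the per-pCPU ASID generation is advanced, so any vCPU subsequently scheduled on that pCPU fails the generation check, gets a fresh ASID, and its stale TLB entries become unreachable.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical per-pCPU ASID state (names are illustrative only). */
struct asid_core {
    uint64_t generation;    /* bumped by a "flush" request */
    uint32_t next_asid;     /* next free ASID on this pCPU */
};

/* Hypothetical per-vCPU cached ASID. */
struct vcpu_asid {
    uint64_t generation;    /* generation the ASID was allocated in */
    uint32_t asid;
};

/*
 * The "flush": nothing in the TLB is touched; all previously
 * handed-out ASIDs are merely invalidated by moving the generation on.
 */
static void flush_core(struct asid_core *core)
{
    core->generation++;
    core->next_asid = 1;    /* ASID 0 conventionally reserved */
}

/*
 * On VM entry / context switch: reuse the cached ASID only if it was
 * allocated in the current generation; otherwise hand out a fresh one,
 * which gives the vCPU a clean TLB view.
 */
static bool asid_handle_vmenter(struct asid_core *core, struct vcpu_asid *v)
{
    if (v->generation == core->generation)
        return false;                 /* ASID still valid, no new ASID */
    v->generation = core->generation;
    v->asid = core->next_asid++;
    return true;                      /* caller must run with new ASID */
}
```

This also shows why the operation is inherently HVM-only: it only makes sense where hardware tags TLB entries with an ASID/VPID, which PV guests don't have.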


Xen-devel mailing list