
Re: [Xen-devel] [PATCH v7 1/3] x86/tlb: introduce a flush HVM ASIDs flag

On 20.03.2020 15:17, Julien Grall wrote:
> Hi,
> On 20/03/2020 13:19, Jan Beulich wrote:
>> On 20.03.2020 10:12, Julien Grall wrote:
>>> On 20/03/2020 09:01, Roger Pau Monné wrote:
>>>> On Fri, Mar 20, 2020 at 08:21:19AM +0100, Jan Beulich wrote:
>>>>> On 19.03.2020 20:07, Julien Grall wrote:
>>>>>> On 19/03/2020 18:43, Roger Pau Monné wrote:
>>>>>>> On Thu, Mar 19, 2020 at 06:07:44PM +0000, Julien Grall wrote:
>>>>>>>> On 19/03/2020 17:38, Roger Pau Monné wrote:
>>>>>>>>> On Thu, Mar 19, 2020 at 04:21:23PM +0000, Julien Grall wrote:
>>>>>>>>>> Why can't you keep flush_tlb_mask() here?
>>>>>>>>> Because filtered_flush_tlb_mask is used in populate_physmap, and
>>>>>>>>> changes to the physmap require an ASID flush on AMD hardware.
>>>>>>>> I am afraid this does not yet explain to me why flush_tlb_mask()
>>>>>>>> could not be updated so that it flushes the ASID on AMD hardware.
>>>>>>> The behavior prior to this patch is to flush the ASIDs on
>>>>>>> every TLB flush.
>>>>>>> flush_tlb_mask is too widely used on x86 in places where there's no
>>>>>>> need to flush the ASIDs. This prevents using assisted flushes (by L0)
>>>>>>> when running nested, since those assisted flushes performed by L0
>>>>>>> don't flush the L2 guests' TLBs.
>>>>>>> I could keep the current behavior and leave flush_tlb_mask also
>>>>>>> flushing the ASIDs, but that seems wrong, as nothing in the
>>>>>>> function's name suggests it also flushes the in-guest TLBs for HVM.
>>>>>> I agree the name is confusing, I had to look at the implementation to 
>>>>>> understand what it does.
>>>>>> How about renaming (or introducing) the function as 
>>>>>> flush_tlb_all_guests_mask() or flush_tlb_all_guests_cpumask()?
>>>>> And this would then flush _only_ guest TLBs?
>>>> No, I think from Julien's proposal (if I understood it correctly)
>>>> flush_tlb_all_guests_mask would do what flush_tlb_mask did prior
>>>> to this patch (flush Xen's TLBs + HVM ASIDs).
>>> It looks like there might be confusion on what "guest TLBs" means. In my 
>>> view this means any TLBs associated directly or indirectly with the guest. 
>>> On Arm, this would nuke:
>>>     - guest virtual address -> guest physical address TLB entry
>>>     - guest physical address -> host physical address TLB entry
>>>     - guest virtual address -> host physical address TLB entry
>>> I would assume you want something similar on x86, right?
>> I don't think we'd want the middle of the three items you list,
>> but I also don't see how this would be relevant here - flushing
>> that is a p2m operation, not one affecting in-guest translations.
> Apologies if this seems obvious to you, but why would you want to only flush 
> in-guest translations in common code? What are you trying to protect against?

I've not looked at the particular use in common code, my comment
was only about what's still in context above.

> At least on Arm, you don't know whether the TLBs contain split or combined 
> stage-2 (P2M) - stage-1 (guest PT) entries. So you have to nuke everything.

Flushing guest mappings (or giving the appearance to do so, by
switching ASID/PCID) is supposed to also flush combined
mappings of course. It is not supposed to flush p2m mappings,
because that's a different address space. It may well be that
in the place the function gets used flushing of everything is
needed, but then - as I think Roger has already said - the
p2m part of the flushing simply happens elsewhere on x86. (It
may well be that it could be avoided there and instead done
centrally from the common code invocation.)
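To illustrate the point about switching ASIDs giving the appearance of a flush: below is a minimal sketch of a generation-counter ASID scheme of the kind x86 HVM uses. All identifiers (asid_core, vcpu_asid, asid_handle_vmenter, etc.) are illustrative, not Xen's actual ones; the idea is that bumping a per-core generation invalidates every outstanding ASID, so guest and combined mappings tagged with stale ASIDs are effectively flushed, while p2m entries (a different address space) are untouched.

```c
#include <stdint.h>
#include <stdbool.h>

/* Illustrative sketch, not Xen's real code. */
struct asid_core {
    uint64_t generation;   /* bumped on every "flush" */
    uint32_t next_asid;    /* next free hardware ASID */
    uint32_t max_asid;     /* highest ASID the hardware supports */
};

struct vcpu_asid {
    uint64_t generation;   /* generation this vCPU's ASID was handed out in */
    uint32_t asid;
};

/* "Flushing" guest TLBs: invalidate all outstanding ASIDs at once
 * by moving to a new generation.  No TLB entry is touched directly. */
static void asid_flush_core(struct asid_core *c)
{
    c->generation++;
    c->next_asid = 1;      /* ASID 0 assumed reserved for the host */
}

/* On VM entry: reuse the vCPU's ASID if it is still from the current
 * generation; otherwise assign a fresh one, which makes the hardware
 * ignore all guest/combined entries tagged with the stale ASID. */
static bool asid_handle_vmenter(struct asid_core *c, struct vcpu_asid *v)
{
    if (v->generation == c->generation)
        return false;               /* ASID still valid, no flush needed */
    if (c->next_asid > c->max_asid)
        asid_flush_core(c);         /* ASID space exhausted: recycle it */
    v->asid = c->next_asid++;
    v->generation = c->generation;
    return true;                    /* new ASID => effective TLB flush */
}
```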

> But this is already done as part of the P2M flush. I believe this should be 
> the same on x86.

A p2m flush would deal with p2m and combined mappings; it
still wouldn't flush guest linear -> guest phys ones.
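The distinction being argued can be summarized as a coverage map: which kind of flush invalidates which kind of cached translation. The flag names below are illustrative (the actual patch adds an HVM-ASID flush flag to x86's flush interface, but these are not its real identifiers):

```c
/* Illustrative sketch of flush coverage, not Xen's real interface. */
enum translation {
    GVA_TO_GPA,   /* guest linear -> guest physical (guest PT entry) */
    GPA_TO_HPA,   /* guest physical -> host physical (p2m / stage-2) */
    GVA_TO_HPA,   /* combined mapping */
};

#define FLUSH_XEN_TLB   (1u << 0)  /* Xen's own linear mappings */
#define FLUSH_HVM_ASID  (1u << 1)  /* guest + combined mappings, via new ASID */
#define FLUSH_P2M       (1u << 2)  /* p2m + combined mappings */

/* Which flush operations would invalidate a given translation kind? */
static unsigned int flags_covering(enum translation t)
{
    switch (t) {
    case GVA_TO_GPA: return FLUSH_HVM_ASID;                /* ASID only */
    case GPA_TO_HPA: return FLUSH_P2M;                     /* p2m only */
    case GVA_TO_HPA: return FLUSH_HVM_ASID | FLUSH_P2M;    /* either */
    }
    return 0;
}
```

The GVA_TO_GPA row is exactly the gap described above: a p2m flush covers the second and third kinds but leaves guest linear -> guest phys entries intact.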


Xen-devel mailing list